Search Results

Search found 16001 results on 641 pages for 'convention over configuration'.


  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting: Include /private/etc/apache2/extra/httpd-ssl.conf in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include: DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs" ServerName myserver.com:443 ServerAdmin [email protected] ... SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem" SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem" Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - restart) and go to localhost on port 443 in Firefox, where I get: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> </head> <body> <h1>Index of /</h1> <ul><li><a href="MyPageLinks/"> Links/</a></li> ..... .... </ul> </body></html> But when the page is displayed I see: Unable to connect Firefox can't establish a connection to the server at localhost. *The site could be temporarily unavailable or too busy. Try again in a few moments. *If you are unable to load any pages, check your computer's network connection. *If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. I read: Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X? and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error: The Apache2.2 service is running. (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem I checked the permissions for cert.pem and all the permissions (Full control, Read, Read and modify, execute, Write) are marked for Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
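
    Since the log shows the running service failing to open the file rather than a bad path, the permissions that matter are those of the account the Apache service runs as (often Local System or a dedicated service user), not the interactively logged-in admin. A sketch of how to compare the working and failing certificates from an elevated prompt — the grant line is only an example and should target whatever account the service actually uses:

        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem"
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\oldcert.pem"
        icacls cert.pem /grant "NT AUTHORITY\SYSTEM:(R)"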

    Read the article

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with an FXO module to connect to our PSTN line. The card is correctly detected by Linux: [trixbox1.localdomain asterisk]# lspci 00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface And Asterisk can see the channel: trixbox1*CLI> dahdi show channel 1
        Channel: 1
        File Descriptor: 14
        Span: 1
        Extension:
        Dialing: no
        Context: from-pstn
        Caller ID:
        Calling TON: 0
        Caller ID name:
        Mailbox: none
        Destroy: 0
        InAlarm: 1
        Signalling Type: FXS Kewlstart
        Radio: 0
        Owner: <None>
        Real: <None>
        Callwait: <None>
        Threeway: <None>
        Confno: -1
        Propagated Conference: -1
        Real in conference: 0
        DSP: no
        Busy Detection: no
        TDD: no
        Relax DTMF: no
        Dialing/CallwaitCAS: 0/0
        Default law: ulaw
        Fax Handled: no
        Pulse phone: no
        DND: no
        Echo Cancellation: 128 taps (unless TDM bridged), currently OFF
        Actual Confinfo: Num/0, Mode/0x0000
        Actual Confmute: No
        Hookstate (FXS only): Onhook
    I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but whenever I try to make an external call via it I get the "All circuits are busy now, please try your call again later" message. The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable. I'm guessing I've missed a configuration step somewhere but have no idea where; any help greatly appreciated.
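
    As a point of reference, a minimal chan_dahdi.conf stanza for a single FXO port answering into from-pstn might look like the sketch below — not taken from the question, and on Trixbox the file is normally generated, so treat it only as roughly what the generated config should contain:

        [channels]
        context=from-pstn
        signalling=fxs_ks    ; FXO ports use FXS signalling
        echocancel=yes
        channel => 1

    Separately, InAlarm: 1 in the channel output means DAHDI believes the analogue line itself is down, which on its own is enough to produce "all circuits are busy" no matter how the trunk is configured.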

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth that I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks. We now have two identical servers built, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'. Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2 disk RAID 1 volume, and E: is a 4 disk RAID 10 volume (apart from the original Server A, where E: was a 2 disk RAID 0 volume).
        Server A (original setup with 6Gbps disks):  D: Read 63 MB/s,  D: Write 170 MB/s, E: Read 68 MB/s,  E: Write 320 MB/s
        Server B (original setup with 3Gbps disks):  D: Read 52 MB/s,  D: Write 88 MB/s,  E: Read 112 MB/s, E: Write 130 MB/s
        Server A (new setup with 3Gbps disks):       D: Read 55 MB/s,  D: Write 85 MB/s,  E: Read 67 MB/s,  E: Write 180 MB/s
        Server B (new setup with 6Gbps disks):       D: Read 61 MB/s,  D: Write 95 MB/s,  E: Read 69 MB/s,  E: Write 180 MB/s
    Can anybody suggest any ideas what is going on here? The drives in use are as follows: Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS; Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS; Hitachi Hus153030vls300 300GB Server SAS; Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS
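
    For anyone trying to reproduce the comparison, a sequential-write SQLIO invocation along these lines is the sort of test being described — the flags and test file are an illustration from memory rather than the poster's actual command line, so check them against sqlio's own help output before relying on them:

        sqlio -kW -s60 -fsequential -b64 -o8 -t2 -LS E:\sqliotest.dat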

    Read the article

  • VMWare Network bug in multiple VMWare Workstation versions if using a hardcoded IP address

    - by onyxruby
    I'm having a very tricky problem with some of my VM sessions being unable to reach the Internet or even ping the gateway. I have just set up a new VM Workstation (7) on a W2K8 64bit server (I'll be converting to ESXi 4 once I can find a decent book on it, so for the meanwhile I use Workstation). I have imported a number of VMs and set up some new ones on the server. In short, the problem with some of the VMs being unable to reach the Internet is that they can't reach the gateway. I've looked at a number of things and can pretty safely rule out the following: switch, router, DHCP server, DNS, client IP configuration, routes and typos. The problem is that some of the new clients cannot reach the gateway if their IP address is hardcoded; they can't even ping it by IP address. That rules out DNS and DHCP. Now, if I allow them to get their IP address by DHCP they can reach the gateway and Internet without issue. The interesting thing is that this behavior occurs even if I leave the DNS information hardcoded under TCP/IP settings. It doesn't work unless the IP and gateway are handed out by DHCP, even though the same IP information is being used by the host. Fundamentally, from the standpoint of the clients, they are trying to reach the exact same gateway using the exact same IP information regardless of whether they are hardcoded or assigned by DHCP. Here's an example of one client: IP address 192.168.7.66 - subnet mask 255.255.255.0 - gateway 192.168.7.254 - DNS1 192.168.7.44 - DNS2 192.168.7.254. The issue occurs across six different Microsoft operating systems; Windows 7 and Windows 2008 variants all have the issue. My W2K3, XP, Vista and W98 clients all work without issue with hardcoded IP addresses. I have tried things like rearranging the DNS order, flushing DNS and so on. It's not a routing or switch issue as the clients work just fine if they get their IP by DHCP. It's not a parameter issue as the exact same parameters are handed out by DHCP as I plug in by hand. It's not a DNS issue as clients can't reach other clients even with IP addresses only. I have run a tracert to the gateway by IP address and it times out on the very first hop before failing on hop 3 with destination host unreachable. If I get the IP address by DHCP the tracert finds the gateway (and Internet) without issue. I have read a few other posts online in forums talking about this problem randomly occurring over the years in other VM versions as well, so I suspect some kind of long-standing bug. Does anyone have any ideas on this? Is it possibly a bug with Windows 7 and W2K8 clients under VM?
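
    One way to narrow this down from inside an affected guest is to set the address from an elevated prompt and then watch whether the guest ever learns the gateway's MAC address; a sketch using the example client above (the interface name is a placeholder):

        netsh interface ip set address "Local Area Connection" static 192.168.7.66 255.255.255.0 192.168.7.254 1
        ping 192.168.7.254
        arp -a | findstr 192.168.7.254

    If the ARP entry for the gateway never appears with a static address but does appear under DHCP, the failure is at layer 2 (vSwitch/bridging behaviour) rather than anything in routing or DNS.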

    Read the article

  • Solaris ldap Authentication

    - by Tman
    Hi everyone Iv been having a trouble trying to get my Solaris 10 server to authenticate against an eDir server.im managed to Set up my linux(RHeL,SLES) servers to authenticate against the ldap Server.which works fine. Here is my configuration Files. ldapclient list: NS_LDAP_FILE_VERSION= 2.0 NS_LDAP_BINDDN= cn=proxyuser,o=AEDev NS_LDAP_BINDPASSWD= {NS1}ecfa88f3a945c22222233 NS_LDAP_SERVERS= 192.168.0.19 NS_LDAP_SEARCH_BASEDN= ou=auth,o=AEDev NS_LDAP_AUTH= simple NS_LDAP_SEARCH_SCOPE= sub NS_LDAP_CACHETTL= 0 NS_LDAP_CREDENTIAL_LEVEL= anonymous NS_LDAP_SERVICE_SEARCH_DESC= group:ou=Groups,ou=auth,o=AEDev NS_LDAP_SERVICE_SEARCH_DESC= shadow:ou=users,ou=auth,o=AEDev?sub?objectClass=shadowAccount NS_LDAP_SERVICE_SEARCH_DESC= passwd:ou=auth,o=AEDev?sub?objectClass=posixAccount NS_LDAP_BIND_TIME= 10 NS_LDAP_SERVICE_AUTH_METHOD= pam_ldap:simple getent passwd works fine: root:x:0:0:Super-User:/:/sbin/sh daemon:x:1:1::/: bin:x:2:2::/usr/bin: sys:x:3:3::/: adm:x:4:4:Admin:/var/adm: lp:x:71:8:Line Printer Admin:/usr/spool/lp: uucp:x:5:5:uucp Admin:/usr/lib/uucp: nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico smmsp:x:25:25:SendMail Message Submission Program:/: listen:x:37:4:Network Admin:/usr/net/nls: gdm:x:50:50:GDM Reserved UID:/: webservd:x:80:80:WebServer Reserved UID:/: postgres:x:90:90:PostgreSQL Reserved UID:/:/usr/bin/pfksh svctag:x:95:12:Service Tag UID:/: nobody:x:60001:60001:NFS Anonymous Access User:/: noaccess:x:60002:60002:No Access User:/: nobody4:x:65534:65534:SunOS 4.x NFS Anonymous Access User:/: tlla:x:2012:100::/home/tlla: test:x:2011:100::/home/test: thato:x:2010:100::/home/thato: pam.conf login auth sufficient pam_unix_auth.so.1 #server_policy login auth sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass login auth required pam_dial_auth.so.1 rlogin auth sufficient pam_rhosts_auth.so.1 rlogin auth requisite pam_authtok_get.so.1 rlogin auth required pam_dhkeys.so.1 rlogin auth required pam_unix_cred.so.1 rlogin auth sufficient pam_unix_auth.so.1 rlogin auth sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass rsh auth sufficient pam_rhosts_auth.so.1 rsh auth required pam_unix_cred.so.1 rsh auth sufficient pam_unix_auth.so.1 #server_policy rsh auth sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass other auth requisite pam_authtok_get.so.1 other auth required pam_dhkeys.so.1 other auth required pam_unix_cred.so.1 other auth sufficient pam_unix_auth.so.1 other auth sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass passwd auth required pam_passwd_auth.so.1 passwd auth sufficient pam_unix_auth.so.1 ssh account sufficient pam_unix.so.1 ssh account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass other account requisite pam_roles.so.1 other account sufficient pam_unix_account.so.1 other account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass other password required pam_dhkeys.so.1 other password requisite pam_authtok_get.so.1 other password requisite pam_authtok_check.so.1 other password required pam_authtok_store.so.1 other password sufficient pam_unix.so.1 other password sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass Local Authentication Works But LDAP Authentication Doesn't Work.
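
    Since getent resolves users but PAM logins fail, one useful check is whether the configured proxy DN can actually read the shadowAccount attributes that pam_ldap needs; a hedged sketch against the settings above (the bind password and uid are placeholders):

        ldapsearch -h 192.168.0.19 -b "ou=users,ou=auth,o=AEDev" \
            -D "cn=proxyuser,o=AEDev" -w 'proxy-password' \
            "(&(objectClass=shadowAccount)(uid=test))" userPassword shadowLastChange

    If userPassword does not come back, eDirectory ACLs (or the anonymous credential level in the ldapclient profile) are the more likely blocker than the pam.conf stacking itself.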

    Read the article

  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application wherein I'd like to do two seemingly incompatible operations: Route all requests through a single PHP script (a "front controller", if you will) Secure everything except API calls with HTTP basic authentication I can satisfy either requirement just fine in isolation, it's when I try to do both at once that I am blocked. For no good reason I'm trying to accomplish these requirements solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL: http://basic/api/listcars?max=10 should be sent through front.php without requiring basic auth. front.php will get /api/listcars?max=10 and do whatever it needs to with that. Here's what I think should work. In my /etc/hosts I added 127.0.0.1 basic and I am using this Apache config: <Location /> AuthType Basic AuthName "Home Secure" AuthUserFile /etc/apache2/passwords require valid-user </Location> <VirtualHost *:80> ServerName basic DocumentRoot /var/www/basic <Directory /var/www/basic> <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /front.php/$1 [QSA,L] </IfModule> </Directory> <Location /api> Order deny,allow Allow from all Satisfy any </Location> </VirtualHost> But I still always get a HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api> but this allows more than I want to past basic auth. I also tried changing the <Directory /var/www/basic> section into <Location />, but this doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion of basic auth, but none that also incorporated a front controller. I could certainly do something like handle basic auth in the front controller, but if I can have Apache do that instead I'll be able to keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works. This used to be two separate vhosts, actually. I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.
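
    One Apache-only variant that behaves well with a front controller on 2.2 is to flag API requests by URL before authentication runs and let Satisfy Any wave them through; a sketch to adapt (same credentials file as above, and the regex is an assumption about what counts as an API call):

        SetEnvIf Request_URI "^/api/" allow_noauth
        <Location />
            AuthType Basic
            AuthName "Home Secure"
            AuthUserFile /etc/apache2/passwords
            Require valid-user
            Order Deny,Allow
            Deny from all
            Allow from env=allow_noauth
            Satisfy Any
        </Location>

    Because the match is on the original request URI rather than the rewritten filename, it is unaffected by the RewriteRule that funnels everything into front.php.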

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via a http post request to my apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded succesfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a post request and a 200 ok response). These uploads are being posted to a php script. In the php script, If I print_r $_FILES, I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My php script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0byte files. I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceed any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any apache configuration, prior to php, that could be causing these files to be discarded even prior to php execution? Some limit in size or in upload time? I've read about a default 300 seconds timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size, work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
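
    Two things worth double-checking before digging deeper: Apache's own request-body cap, and the exact spelling of the PHP limits — PHP reads upload_max_filesize, not max_upload_size, and a misspelled directive is silently ignored. A sketch of the relevant settings with example values:

        # Apache (httpd.conf or vhost) -- 0 means unlimited and is the default
        LimitRequestBody 0

        ; php.ini
        upload_max_filesize = 50M
        post_max_size = 200M
        max_input_time = 1000

    If mod_security is loaded, its SecRequestBodyLimit is another cap worth checking, since it also trips before the request ever reaches PHP.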

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff I have to figure out myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps. [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps -tomcat |_ webapps |_ webapp1 |_ webapp2 |_ webapp[n] |_ WEB-INF (with Cocoon libs) This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario. [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 | |_ WEB-INF (with Cocoon libs) |_ webapp1 | |_ WEB-INF (with Cocoon libs) |_ webapp[n] |_ WEB-INF (with Cocoon libs) This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario. [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp2 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp[n] |_ WEB-INF (with Cocoon libs) I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be multiply instanciated with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to be listening at different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems no way to reload worker.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
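
    For scenario [3], the usual pattern is one shared CATALINA_HOME (the Tomcat binaries) plus one CATALINA_BASE per instance, each with its own conf/, logs/, temp/, webapps/ and work/ and its own ports; roughly, as a sketch with placeholder paths and port numbers:

        export CATALINA_HOME=/opt/tomcat
        export CATALINA_BASE=/srv/tomcat-webapp1
        # in /srv/tomcat-webapp1/conf/server.xml give this instance unique ports,
        # e.g. shutdown 8005 -> 8105 and the AJP connector 8009 -> 8109
        $CATALINA_HOME/bin/startup.sh

    Memory-wise each instance then gets its own JVM heap and PermGen, which is what actually sidesteps the PermGenSpace errors of scenario [2], at the cost of more total RAM and the mod_proxy/mod_jk bookkeeping described above.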

    Read the article

  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlasshFish V3 server. I'm working under Windows Vista 32 Bits. First, it won't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it: 127.0.0.1 localhost And when I run the GlasshFish V3 Server, no error is showing but only "INFOs" are displayed: 3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main INFO: Launching GlassFish on Felix platform Welcome to Felix ================ INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010 INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010 INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127 INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms) INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116 INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155 INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160 INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159 INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4 INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 INFO: Binding RMI port to *:35165 INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159 INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null} INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null} INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160 INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null} And there is no message such as "Glassfish started"! So, when I try to access to the admin web interface: localhost:4848 or localhost:8080 or localhost:8181 , It doesn't work. What should I do? Thank you very much, Regards.
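
    Two things to try outside NetBeans: start the domain in verbose mode so the first real error is printed to the console, and read the ports out of the domain's own config rather than assuming 4848/8080 — the log above shows Grizzly listeners on 351xx ports, which suggests NetBeans created the domain with non-default ports. As a sketch (the domain.xml path is inferred from the autodeploy path in the log and may differ):

        cd "C:\Program Files\sges-v3\glassfish\bin"
        asadmin start-domain --verbose
        asadmin list-domains
        findstr "port" "C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\config\domain.xml"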

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one was the DHCP server(which I realized when somebody contacted me saying they couldn't connect. Whups). So because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience. I edited /etc/network/interfaces and /etc/hosts to reflect my changes. I ran $sudo /etc/init.d/networking stop *deconfiguring network interfaces ... So yay. Then I try to start, it gives me the mumbo jumbo about using services (why didn't it do that for the stop?) So instead I run ... $sudo service networking start networking stop/waiting Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong? Edit daniel@FOOBAR:~$ sudo service networking status networking stop/waiting daniel@FOOBAR:~$ sudo service networking stop stop: Unknown instance: daniel@FOOBAR:~$ sudo service networking status networking stop/waiting daniel@FOOBAR:~$ sudo service networking start networking stop/waiting daniel@FOOBAR:~$ sudo service networking status networking stop/waiting So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "services" is, right?) isn't working with stop. cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 FOOBAR 198.3.9.2 FOOBAR #Added entry July 19 2011 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface #auto eth0 #iface eth0 inet dhcp # hostname FOOBAR auto eth0 iface eth0 inet static address 198.3.9.2 netmask 255.255.255.0 network 198.3.9.0 broadcast 198.3.9.255 gateway 198.3.9.15 No I didn't save backups, it was just a minor change so I just commented out the old DHCP setting. Edit I set everything back to original settings and set up a DHCP server. "starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
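
    On 11.04 the networking job is an Upstart task: it runs once and then sits in stop/waiting, so that status line is its normal resting state rather than a sign the network is down — which is why pings keep working. Individual interfaces are managed with ifup/ifdown against /etc/network/interfaces; a sketch of the per-interface equivalent of what the old init script did:

        sudo ifdown eth0
        sudo ifup eth0
        ip addr show eth0    # confirm the static address actually took effect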

    Read the article

  • Can't connect to DeploymentShare$ from PC attempting to MDT, but can other PCs on the network

    - by Moman10
    I am in the process of setting up MDT and have run across a problem. MDT is installed on a Windows 2012 server, MDT version 6.2.5019.0. Using WDS as well. Active Directory domain, the server is up to date and on the network. I boot up the PC, it gets an address from DHCP, pulls down the LiteTouchPE_x64.wim image and goes into the MS Solution Accelerators screen, the Processing Bootstrap Settings box comes up and processes for a couple of seconds, then goes away, it sits there for another minute or so and then gives the error: A connection to the deployment share (\\Acme-MDT\DeploymentShare$) could not be made. Can not reach the DeployRoot. Possible Cause: Network Routing error or Network Configuration Error." I can then retry or cancel. I have seen this error online but so far nothing that helps fix it, but seems to be an issue with the FQDN. I verified that I am getting an IP address and that I can successfully ping the MDT server if I use the FQDN, but can not just by it's A record of Acme-MDT. I tried manually mapping the network share using net use and it works if I use the FQDN, but it fails with an error code 53, "Network path not found" if I just use the A record of Acme-MDT. Here is the net use command I'm using: net use * \\Acme-MDT\DeploymentShare$ /u:Domain\Administrator It gives the error System Error 53, Network path not found (and doesn't prompt for a password), but if I use the FQDN of \\Acme-MDT.domain.com\DeploymentShare$ it works fine to map the drive. I guess the problem is, when it tries to load the image, it is trying to start from \\Acme-MDT\DeploymentShare$ and I need it to start from \\Acme-MDT.domain.com\DeploymentShare$, but not sure how to get it to do that. I've put the fully qualified path in CustomSettings.ini and bootstrap, updated the deployment share, regenerated the boot image and replaced the boot wim in WDS. Or, if someone has an idea as to why it's acting this way and knows a way around it. The end result is what matters! :) I did verify in DNS that Acme-MDT is there, with the proper IP, and I can successfully use the net use command to map this drive from a couple other computers that are already on the network. I am assuming it has something to do with that computer not already being part of the domain, but I'm honestly at a loss as to how to fix it. Any ideas are appreciated, thanks in advance for your help!
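
    Since the single-label name only fails from the WinPE client (which is not domain-joined and so has no DNS suffix to append), it helps to test from inside the boot image itself: press F8 in the LiteTouch WinPE session for a command prompt and check what the PE environment can resolve and map. A sketch using the names from the question:

        ipconfig /all
        nslookup Acme-MDT.domain.com
        net use Z: \\Acme-MDT.domain.com\DeploymentShare$ /user:domain\administrator

    If the FQDN maps fine from WinPE but the wizard still complains about \\Acme-MDT\DeploymentShare$, the boot image is still carrying the old short DeployRoot — Bootstrap.ini changes only take effect after a full "Update Deployment Share" and a re-import of the regenerated LiteTouch WIM into WDS.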

    Read the article

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under apache httpd, with slqite database, I have the correct folder permissions [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web and [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/ total 68 -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db syncdb also seems to have gone fine [root@liaan55 httpd]# sudo -su apache bash-4.1$ whoami apache bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security') /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead. "use STATIC_URL instead.", DeprecationWarning) /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead. DeprecationWarning Creating tables ... Creating table account_profile Creating table account_variable Creating table account_view Creating table account_window Creating table account_mygraph Creating table dashboard_dashboard_owners Creating table dashboard_dashboard Creating table events_event Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_session Creating table django_admin_log Creating table django_content_type Creating table tagging_tag Creating table tagging_taggeditem You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'apache'): root E-mail address: [email protected] Password: Password (again): Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s) bash-4.1$ exit and the local-settings.py file is as follows STORAGE_DIR = '/var/lib/graphite-web' INDEX_FILE = '/var/lib/graphite-web/index' DATABASES = { 'default': { 'NAME': '/var/lib/graphite-web/graphite.db', 'ENGINE': 'django.db.backends.sqlite3', 'USER': '', 'PASSWORD': '', 'HOST': '', 'PORT': '' } } I still get this error [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params) [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database not sure what is missing in this configuration
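
    SQLite needs write access to the directory as well as the file (it creates a journal next to graphite.db), and on CentOS/RHEL the other usual gate is SELinux, which can deny the write even when the POSIX bits look right. A sketch of the checks — the context type shown is the stock writable-content type, so verify it against your policy before relabelling:

        ls -lZ /var/lib/graphite-web/
        ausearch -m avc -ts recent                 # look for denials against graphite.db
        chcon -R -t httpd_sys_rw_content_t /var/lib/graphite-web
        # or temporarily: setenforce 0, retest, then setenforce 1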

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to backup. I proceeded to start the actual backup: ./ghettoVCB.sh -m vmname -g ghettoVCB.conf It goes though the config and looks like it's taking off: 2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf 2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0 2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18 2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin 2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4 2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info 2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0 2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all 2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0 2013-10-24 11:43:19 -- info: 2013-10-24 11:43:22 -- info: Initiate backup for vmname 2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2 Destination disk format: VMFS thin-provisioned Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'... Clone: 10% done. and it's been that way for over an hour now. Stuck at Clone: 10% done.. Thing is: I can see the vmdk on the NAS. And it looks like almost the whole thing is there. On the NAS it's showing ~430GB but on vSphere Client Summary is shows as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..." and nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there are reason it might be "Stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks. Edit: as soon as I hit the Submit button, I glance over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.
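
    ghettoVCB drives the copy through vmkfstools, so running the equivalent clone by hand shows whether the stall at 10% is the script or the storage path to the NAS; a sketch using the paths from the log above (the destination file name is made up):

        vmkfstools -i "/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk" \
                   -d thin "/vmfs/volumes/nas2tb-001/esxi4/clone-test.vmdk"

    On the size question: because the target is thin-provisioned, the file on the NAS only consumes what has actually been copied, so it lagging the 507GB provisioned size is expected rather than a sign of corruption.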

    Read the article

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so: <Files index.htm> Order allow,deny Allow from all AuthType Basic AuthName "Some Auth" AuthUserFile "C:/path/to/my/.htpasswd" Require valid-user </Files> When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header. GET http://myhost/ HTTP/1.1 Host: myhost Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP/1.1 401 Authorization Required Date: Tue, 21 Jun 2011 21:36:48 GMT Server: Apache/2.2.16 (Win32) Content-Length: 401 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>401 Authorization Required</title> </head><body> <h1>Authorization Required</h1> <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p> </body></html> Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives work fine if I set them for a whole directory. It is only when I configure them to a directory index that they do not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3. This leads me to believe that this is actually a configuration problem on my end.
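
    A <Files> section only matches after the request for "/" has been mapped onto index.htm, and the auth checks for the bare "/" request can run before that mapping, which would fit a 401 being generated without WWW-Authenticate ever being set. A workaround sketch that keys on the URL instead of the file — the regex assumes "/" and "/index.htm" are the only spellings to protect:

        <LocationMatch "^/(index\.htm)?$">
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </LocationMatch>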

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers for now i'll just call them: AD01 (Win 2008 GC, Operations master) AD02 (Win 2008 GC) AD03 (Win 2003 GC) A couple of months there was some hardware issues with AD01 so the operations master, PDC and Infrastructure Master was moved to AD02. All machines where on while this was happening. AD01 (Win 2008 GC) AD02 (Win 2008 GC, Operations master) AD03 (Win 2003 GC) AD01 was then shutdown for a month. Upon starting this machine up with replaced hardware (NIC and RAID card) i now have a weird problem. AD01 Thinks it is operations master still in AD on the local box AD02 & AD03 Thinks AD02 is operations master in AD on both boxes When running DCDIAG on AD01 i get a number of issues (listed below) When running "dcdiag /test:advertising" on AD01: Doing primary tests Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Running partition tests on : ForestDnsZones Running partition tests on : DomainDnsZones Running partition tests on : Schema Running partition tests on : Configuration Running partition tests on : domain Running enterprise tests on : domain.local When running "dcdiag" on AD01 i get the following errors (excerpt of the Final output): Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local Starting test: Replications [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL" [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL" So the problem appeasr to be that when i moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think its the boss anymore when it trys to replicate etc. So i really need to manually update AD01 so that it knows who the operations master, instrastructure and PDC is - but i'm not having any luck I've been googling for nearly a day and all solutions lead to "the cake is a lie" Your ninja skills will be greatly appreciated
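
    Before anything else it is worth re-enabling replication on AD01 using the exact commands dcdiag suggests, then comparing role ownership from each DC — once inbound replication succeeds, AD01 should learn about the transfer on its own. A sketch (seizing roles with ntdsutil is the last resort, only if AD01 can never replicate again):

        repadmin /options AD01 -DISABLE_INBOUND_REPL
        repadmin /options AD01 -DISABLE_OUTBOUND_REPL
        repadmin /replsummary
        netdom query fsmo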

    Read the article

  • GRUB 2 freezing at OS selection screen, what could be the cause?

    - by Michael Kjörling
    Mains power is somewhat unreliable where I live, so every now and then, the computer gets rebooted when the PSU can't maintain proper voltage during a brown-out or momentary black-out. It's happened a few times recently that when power is restored, the BIOS POST completes successfully, GRUB starts to load and then freezes. I've seen this at the Welcome to GRUB! message, but it seems to happen more often just past the switch to the graphical OS list. At this point, the computer will not respond to anything (arrow keys, control commands, Ctrl+Alt+Del, ...) - it simply sits there displaying this image, seemingly doing nothing more. At that point, turning the computer off using the power button and letting it sit for a while (cooling down?) has allowed it to boot successfully. Turning the computer off and immediately back on seems to give the same result (successful POST then freeze in GRUB). This behavior began recently, although does not seem to be directly correlated with my hard disk woes (although it may be relevant that GRUB resides on that physical disk, I don't know). Once the computer has booted, it runs without a hitch. I know that a "proper" solution would be to invest in a UPS, but what might be causing behavior like this? I was thinking in terms of perhaps the CPU shutting down as a thermal control measure, but if that was the cause then wouldn't I see similar freezes during use (which I do not)? What else could cause freezes apparently closely but not perfectly related to the BIOS handover from POST to OS bootloader? The BIOS settings are to reset to previous power status after a power loss. Since the PC in question is almost always turned on, this means restore to full power status. I have no expansion cards installed that make any BIOS extensions known by screen output during the boot process, at least, but I do have a few expansion cards installed. Haven't made any changes in that regard in a long time, now. I haven't touched GRUB itself for a long time, whether configuration or binaries, so I don't think that's the problem. Also, it doesn't really make sense that a bug in GRUB would manifest itself only once in a blue moon but significantly more often after a power failure.

    Read the article

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I am trying to wrap my head around for days now. Here is what I am doing: By using Veewee, I am creating a VirtualBox image and then I create a Vagrant box from it. See here, here Finally I run puppet from Vagrant to install RabbitMQ, see here. Veewee, Vagrant and VirtualBox all run on MacOS X 10.7.4. The vagrant box itself is CentOS 6.2. This worked fine for quite some time until I was recreating the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my puppet run I now get the following error: /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set My RabbitMQ puppet configuration can be found on my GitHub repo for that project, but here is the most important part: $version = "2.8.7" $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm" package{"erlang": ensure => "present", } package{"rabbitmq-server": provider => "rpm", source => $url, require => Package["erlang"] } exec{"rabbitmq-plugins": path => "/usr/bin:/usr/sbin:/bin", command => "rabbitmq-plugins enable rabbitmq_management", require => Package["rabbitmq-server"] } My additional repositories, e.g. epel, are defined in veewees postinstall.sh right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status' [{pid,2834}, {running_applications,[{rabbit,"RabbitMQ","2.8.7"}, {ssl,"Erlang/OTP SSL application","4.1.6"}, {public_key,"Public key infrastructure","0.13"}, {crypto,"CRYPTO version 2","2.0.4"}, {mnesia,"MNESIA CXC 138 12","4.5"}, {os_mon,"CPO CXC 138 46","2.2.7"}, {sasl,"SASL CXC 138 11","2.1.10"}, {stdlib,"ERTS CXC 138 10","1.17.5"}, {kernel,"ERTS CXC 138 10","2.14.5"}]}, {os,{unix,linux}}, {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"}, {memory,[{total,24993120}, {processes,10328496}, {processes_used,10321296}, {system,14664624}, {atom,1175905}, {atom_used,1143841}, {binary,17192}, {code,11416020}, {ets,766168}]}, {vm_memory_high_watermark,0.4}, {vm_memory_limit,205851852}, {disk_free_limit,1000000000}, {disk_free,7089795072}, {file_descriptors,[{total_limit,924}, {total_used,4}, {sockets_limit,829}, {sockets_used,2}]}, {processes,[{limit,1048576},{used,131}]}, {run_queue,0}, {uptime,6}] Sources in the web suggest, that I have to set HOME. Of course I was logging into the box if HOME was set, for user vagrant it was '/home/vagrant' and for root it was 'root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
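
    erlexec takes HOME from the environment of the exec'd command, and Puppet runs execs with a nearly empty environment, so the common fix is to set it on the resource itself; a sketch against the manifest above (the /root value is an assumption — any readable directory satisfies erlexec):

        exec{"rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          command     => "rabbitmq-plugins enable rabbitmq_management",
          environment => ["HOME=/root"],
          require     => Package["rabbitmq-server"],
        }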

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configuration. One in my office has MD1220 attached. Another one in the datacenter of my hosting services vendor has MD3200. I'm getting significantly worse throughput from MD3200 at my vendors setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests: Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3GB/s - dd Seq. writes on MD3200 at my vendor's: 240MB/s - bonnie++, 310MB/s - dd Unfortunately, I could not test the exactly the same configurations, but the two I have should be comparable. If anything, my good performing environment is cheaper than the bad performing. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with the DAS performance can look at it and tell if I'm missing something here and my expectations are too high. To summarize, the question here is it reasonable to expect about 100MB/s of sequential write throughput per each couple of drives in RAID10 on MD3200? Is there any trick to enable such performance in MD3200 with dual controller as opposed to simple MD1220 with a single H800 adapter? More details about the configurations: A good one in my office: Dell R710 2CPU X5650 @ 2.67GHz 12 cores 96GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64 20x300GB 2.5" SAS 10K in a single RAID10 1MB chunk size on MD1220 + Dell H800 I/O controller with 1GB cache in the host Not so good one at my vendor's: Dell R710 2CPU L5520 @ 2.27GHz 8 cores 144GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64 20x146GB 2.5" SAS 15K in a single RAID10 512KB chunk size, Dell MD3200, 2 I/O controllers in array with 1GB cache each Additional information. I've also ran the same tests on the same vendor's host, but the storage was: two raids of 14x146GB 15K RPM drives RAID 10, striped together on the OS level on MD3000+MD1000. The performance was about 25% worse than on MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives RAID1, Perc 6i) I've got about 128MB/s seq. writes. Just two internal drives gave me about a half of 20 drives' throughput on MD3200. The random I/O performance of the MD3200 setup is ok, it gives me at least 1300 IOPS. I'm mostly have problems with sequentioal I/O throughput. Thank you for looking into it. Regards Igor
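
    When comparing the two arrays it helps to take the page cache out of the picture and to confirm both controllers are running the same write-cache policy — an H800 with its battery-backed 1GB cache in write-back mode will wildly outrun an MD3200 that has dropped to write-through, e.g. because of a battery or learn-cycle state. A sketch of a cache-bypassing sequential write test (the path is a placeholder):

        dd if=/dev/zero of=/mnt/raid10/ddtest bs=1M count=8192 oflag=direct conv=fdatasync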

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users', tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab: # /etc/fstab: static file system information. # # Use 'vol_id --uuid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # proc /proc proc defaults 0 0 # / was on /dev/mapper/minicc-root during installation UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 / ext3 relatime,errors=remount-ro 0 1 # /boot was on /dev/sda5 during installation UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot ext2 relatime 0 2 # swap was on /dev/mapper/minicc-swap_1 during installation UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0 /dev/scd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 /dev/sdb1 /mnt/sdcard auto auto,user,rw,exec 0 0 My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone on the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is: $ mount /dev/sdb1 /mnt/sdcard. /bin/mount is owned by root and is in the root group and has 4755 permissions. /bin/umount is owned by root and is in the root group and has 4755 permissions. /mnt/sdcard is owned by me and is in one of my groups and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (need to be able to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much...just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
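
    With a user entry in fstab, an unprivileged mount has to name only the mount point (or only the device) so that mount can match it against the fstab line; supplying both arguments, as in mount /dev/sdb1 /mnt/sdcard, is treated as an ad-hoc mount and is reserved for root regardless of fstab. So the shell_exec call would look roughly like this:

        mount /mnt/sdcard
        umount /mnt/sdcard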

    Read the article

  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. Problem seems completely illogical to me, anyways: SBS 2011, vanilla install; haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer) except to import an existing wildcard certificate for *.example.com (which is valid, Remote Web Workplace and Outlook Web Access work fine). On the two test machines and one production machine running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat, I have had no issues using the wrong authentication method on these test machines; password saves the first time, no problem, can verify it exists in the credentials manager (Start Run control userpasswords2), close Outlook, reboot, go make a sammie, come back, credentials are still saved. When I say wrong, it's because I was choosing NTLM and Exchange (under Exchange Console Server Configuration Client Access) was set by default to use Basic. On two completely different machines setup by a co-worker, they had (under my guidance) used NTLM as well... except that frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic -- the correct settings -- the option to save was there and now works without issue. Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go under Outlook and change the authentication to use the correct setting (Basic) it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke or not, but something just isn't right here. My two hunches are either a missing/installed KB Update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.
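
    It may help to pin down what the server is actually offering, independent of what each Outlook profile was told; on SBS 2011 (Exchange 2010) that is visible and changeable from the Exchange Management Shell, sketched here — the identity string is the usual default, adjust it to the real server name:

        Get-OutlookAnywhere | Format-List Identity,ClientAuthenticationMethod,IISAuthenticationMethods
        Set-OutlookAnywhere -Identity "SERVERNAME\Rpc (Default Web Site)" -ClientAuthenticationMethod Basic

    Note that IISAuthenticationMethods is a separate setting from ClientAuthenticationMethod; a mismatch between the two is one way to end up with clients that behave differently depending on which method they try first.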

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup :- A switch B C |----| |---| |----| |----| |eth0|----| |----|eth0| | | |----| |---| |eth1|----|eth1| |----| |----| Eg, SSH/Samba from A to C How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if your looking for 192.168.0.2, its here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234" B would say "hi on eth0, traffic for port 1234 goes on here eth1" How could that be done? And would the SSH/Samba demons see the correct packet header info and work?? IP info :- A - eth0 - 192.168.109.2 B - eth0 - B1 = 192.168.109.15 B2 = 172.24.40.130 - eth1 - 192.168.0.1 C - eth1 - 192.168.0.2 A, B & C are RHEL (RedHat) But Windows computers can be connected to the switch. I configured the 192.168.0.* IPs, they are changeable. Update after response from Eddie Few problems (and Machines' B IP is different!) From A :- ssh 172.24.40.130 works ok, (can get to B2) but ssh 172.24.40.130 -p 2022 -vv times out with :- OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022. ...wait ages... debug1: connect to address 172.24.40.130 port 2022: Connection timed out ssh: connect to host 172.24.40.130 port 2022: Connection timed out From B2 :- $ service iptables status Table: filter Chain INPUT (policy ACCEPT) num target prot opt source destination Chain FORWARD (policy ACCEPT) num target prot opt source destination 1 ACCEPT tcp -- 0.0.0.0/0 192.168.0.2 tcp dpt:22 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Table: nat Chain PREROUTING (policy ACCEPT) num target prot opt source destination 1 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2022 to:192.168.0.2:22 Chain POSTROUTING (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination And ssh from B2 to C works fine :- $ ssh 192.168.0.2 Route info :- $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.0.0 * 255.255.255.0 U 0 0 0 eth1 172.24.40.0 * 255.255.255.0 U 0 0 0 eth0 169.254.0.0 * 255.255.0.0 U 0 0 0 eth1 default 172.24.40.1 0.0.0.0 UG 0 0 0 eth0 $ ip route 192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1 172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130 169.254.0.0/16 dev eth1 scope link default via 172.24.40.1 dev eth0 So I just dont know why the port forward doesnt work from A to B2?
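
    One thing the iptables listing does not show is whether B is forwarding at all — the DNAT and the FORWARD accept rule do nothing if net.ipv4.ip_forward is still 0, and a timeout on A (rather than a refusal) is exactly what that looks like. A sketch for B2; the MASQUERADE line is optional here because C already uses B's eth1 address as its gateway, but it is a quick way to rule out return-path problems:

        sysctl -w net.ipv4.ip_forward=1        # or: echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 192.168.0.2 --dport 22 -j MASQUERADE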

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is: VMWare ESXi environemt CPanel installed CentOS release 6.5 (Final) 4 CPUs 2G RAM 2x VM disks 100G each LVM system This was my previous storage settings (the server was working fine at this time): # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 95G 1.4G 88G 2% / tmpfs 939M 0 939M 0% /dev/shm /dev/sdb1 99G 188M 94G 1% /tmp /dev/sda1 485M 54M 407M 12% /boot My web developer asked me to merge /tmp and / disks so this is what I did: Delete /dev/sdb1 partition using fdisk Create a new partition as LVM on /dev/sdb1 using fdisk Create a new physical volume -- pvcreate /dev/sdb1 Extend volume group -- vgextend /dev/sdb1 vg_test01 Extend logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root Resize filesystem -- resize2fs /dev/vg_test01/lv_root This is the new configuration: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 213G 105G 97G 52% / tmpfs 939M 0 939M 0% /dev/shm /dev/sda1 485M 54M 407M 12% /boot /usr/tmpDSK 4.0G 145M 3.6G 4% /tmp Since I have the new settings my web server is throwing kernel panics quite often (around every 2 days). The message says: INFO: task <taskName>:<pid> blocked for more than 120 seconds. The list of process affected that I can see from the console are: mysqld queueprocd httpd suphp vmtoolsd loop0 auditd The only way I can fix this is reseting (cold reboot) the VM. I don't think it is a hardware issue as sar is not showing any bottleneck: Linux 2.6.32-431.3.1.el6.x86_64 (test01) 08/22/2014 _x86_64_ (4 CPU) 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 26.86 0.01 0.98 0.57 0.00 71.57 12:20:01 AM all 1.78 0.02 1.03 0.08 0.00 97.09 12:30:01 AM all 26.34 0.02 0.85 0.05 0.00 72.74 12:40:01 AM all 27.12 0.01 1.11 1.22 0.00 70.54 12:50:01 AM all 1.59 0.02 0.94 0.13 0.00 97.32 01:00:01 AM all 26.10 0.01 0.77 0.04 0.00 73.07 01:10:01 AM all 27.51 0.01 1.16 0.14 0.00 71.18 01:20:01 AM all 1.80 0.07 1.06 0.08 0.00 96.99 01:30:01 AM all 26.19 0.01 0.78 0.05 0.00 72.96 01:40:01 AM all 26.62 0.02 0.87 0.05 0.00 72.45 01:50:02 AM all 1.35 0.01 0.87 0.02 0.00 97.75 02:00:01 AM all 26.11 0.02 0.69 0.02 0.00 73.17 02:10:01 AM all 26.73 0.02 0.89 0.14 0.00 72.21 02:20:01 AM all 1.45 0.01 0.92 0.04 0.00 97.58 02:30:01 AM all 26.59 0.01 1.06 0.03 0.00 72.31 02:40:01 AM all 26.27 0.01 0.72 0.05 0.00 72.95 02:50:01 AM all 0.86 0.01 0.50 0.09 0.00 98.53 03:00:01 AM all 25.61 0.02 0.39 0.03 0.00 73.96 03:10:01 AM all 26.30 0.08 0.66 0.14 0.00 72.82 03:20:01 AM all 0.81 0.01 0.51 0.04 0.00 98.63 03:30:02 AM all 26.15 0.02 0.53 0.07 0.00 73.24 03:40:01 AM all 26.06 0.01 0.47 0.04 0.00 73.42 03:50:01 AM all 0.96 0.02 0.51 0.03 0.00 98.48 Average: all 17.69 0.02 0.79 0.14 0.00 81.36 06:58:14 AM LINUX RESTART 07:00:01 AM CPU %user %nice %system %iowait %steal %idle 07:10:01 AM all 1.04 0.02 0.57 0.95 0.00 97.42 07:20:02 AM all 0.66 0.01 0.39 0.06 0.00 98.87 07:30:01 AM all 25.71 0.01 0.45 0.16 0.00 73.67 07:40:01 AM all 25.88 0.01 0.35 0.08 0.00 73.68 07:50:01 AM all 1.13 0.02 0.55 0.11 0.00 98.19 As you can see the server became unresponsive at 03.50 AM and I had to reset the VM at 06.58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. thank you very much

    Read the article

  • TPROXY Not working with HAProxy, Ubuntu 14.04

    - by Nyxynyx
    I'm trying to use HAProxy as a fully transparent proxy using TPROXY on Ubuntu 14.04. HAProxy is set up on the first server with eth1 111.111.250.250 and eth0 10.111.128.134. The single balanced server has eth1 and eth0 as well. eth1 is the public-facing network interface, while eth0 is on the private network that both servers share.
    Problem: I'm able to connect to the balanced server's port 1234 directly (via eth1), but I am not able to reach the balanced server via the HAProxy port 1234 (which redirects to 1234 via eth0). Am I missing something in this configuration?
    On the HAProxy server, the current kernel is:
        Linux extremehash-lb2 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
    The kernel appears to have TPROXY support:
        # grep TPROXY /boot/config-3.13.0-24-generic
        CONFIG_NETFILTER_XT_TARGET_TPROXY=m
    HAProxy was compiled with TPROXY support:
        haproxy -vv
        HA-Proxy version 1.5.3 2014/07/25
        Copyright 2000-2014 Willy Tarreau <[email protected]>
        Build options :
          TARGET  = linux26
          CPU     = x86_64
          CC      = gcc
          CFLAGS  = -g -fno-strict-aliasing
          OPTIONS = USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_STATIC_PCRE=1
        Default settings :
          maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
        Encrypted password support via crypt(3): yes
        Built without zlib support (USE_ZLIB not set)
        Compression algorithms supported : identity
        Built without OpenSSL support (USE_OPENSSL not set)
        Built with PCRE version : 8.31 2012-07-06
        PCRE library supports JIT : no (USE_PCRE_JIT not set)
        Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
        Available polling systems :
          epoll : pref=300, test result OK
          poll : pref=200, test result OK
          select : pref=150, test result OK
        Total: 3 (3 usable), will use epoll.
    In /etc/haproxy/haproxy.cfg, I've configured a port with the following options:
        listen test1235 :1234
          mode tcp
          option tcplog
          balance leastconn
          source 0.0.0.0 usesrc clientip
          server balanced1 10.111.163.76:1234 check inter 5s rise 2 fall 4 weight 4
    On the balanced server, in /etc/network/interfaces I've set the gateway for eth0 to be the HAProxy box 10.111.128.134 and restarted networking:
        auto eth0 eth1
        iface eth0 inet static
          address 111.111.250.250
          netmask 255.255.224.0
          gateway 111.131.224.1
          dns-nameservers 8.8.4.4 8.8.8.8 209.244.0.3
        iface eth1 inet static
          address 10.111.163.76
          netmask 255.255.0.0
          gateway 10.111.128.134
    ip route gives:
        default via 111.111.224.1 dev eth0
        10.111.0.0/16 dev eth1 proto kernel scope link src 10.111.163.76
        111.111.224.0/19 dev eth0 proto kernel scope link src 111.111.250.250
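    For reference, "source 0.0.0.0 usesrc clientip" on its own is usually not enough: the HAProxy host also needs the netfilter/routing plumbing so that replies addressed to the spoofed client IP are handed back to the proxy's local sockets. A sketch of the commonly documented companion setup on the HAProxy box, assuming the interfaces above (not a confirmed diagnosis of this particular failure):
        # mark packets that belong to sockets terminated locally by the proxy
        iptables -t mangle -N DIVERT
        iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
        iptables -t mangle -A DIVERT -j MARK --set-mark 1
        iptables -t mangle -A DIVERT -j ACCEPT

        # deliver marked packets to the local stack so HAProxy sees the replies
        ip rule add fwmark 1 lookup 100
        ip route add local 0.0.0.0/0 dev lo table 100

        # forwarding must be on for the spoofed client traffic to leave via eth0
        sysctl -w net.ipv4.ip_forward=1
    The balanced server's return traffic also has to come back through the HAProxy box (via its default route or a policy route for port 1234), otherwise replies to the spoofed client address bypass the proxy and the connection hangs.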

    Read the article

  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they go down, seemingly out of the blue. After going through the log files, I found out that apparently ntpd deletes all eth0 interfaces, and that dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which makes several of our websites inaccessible.
    Can someone explain to me why ntpd deletes interfaces? Can/should that be turned off, or can/should I configure dhclient to bring the virtual interfaces back up automatically, too?
    EDIT// The log files that I should've posted:
        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled
    The eth0 config:
        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1
    And the virtual interfaces (I posted the first one only, they look the same for the most part):
        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
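    For reference, the "Deleting interface" lines are ntpd reacting to dhclient re-binding eth0 on lease renewal, not ntpd taking anything down itself; one common workaround is to stop dhclient from bouncing the interface at all, since a fixed IPADDR is already configured. A sketch of that change (paths and values taken from the post; whether DHCP can simply be dropped depends on the hosting provider):
        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- use the static address instead of leasing it
        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="none"          # was "dhcp"; eth0 is no longer torn down on lease renewal
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1
    If DHCP has to stay, another option (assuming the RHEL/CentOS dhclient-script hook location) is an /etc/dhcp/dhclient-exit-hooks snippet that runs ifup on eth0:0 through eth0:6 whenever the reason is BOUND or RENEW.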

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to add an audio track to a video. First of all, I extract the audio from the original video for processing:
        ffmpeg -i lotr.mp4 lotr.wav
    I then extract all frames for later processing too:
        ffmpeg -i lotr.mp4 -f image2 %d.jpg
    When done processing the audio and video streams, I try to recreate the video:
        ffmpeg -f image2 -r 15 -i %d.jpg new.mp4
    then merge it with the audio:
        ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4
    Result: CPU activity = 100%, the process hangs and never returns.
    PS: I even tried it without modifying the images or the audio (so just unpacking and then repacking the video), but I still get the same output:
        FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jan 18 2011 04:07:05 with gcc 4.4.2
          configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect --enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarssl -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads --cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
          libavutil     50.36. 0 / 50.36. 0
          libavcore      0.16. 1 /  0.16. 1
          libavcodec    52.108. 0 / 52.108. 0
          libavformat   52.93. 0 / 52.93. 0
          libavdevice   52. 2. 3 / 52. 2. 3
          libavfilter    1.74. 0 /  1.74. 0
          libswscale     0.12. 0 /  0.12. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
          Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 kb/s, 15 fps, 15 tbr, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        [wav @ 01fed010] max_analyze_duration reached
        Input #1, wav, from 'lotr.wav':
          Duration: 00:00:29.90, bitrate: 176 kb/s
            Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s
        File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y
        [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p
        Output #0, mp4, to 'new_w_audio.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-31, 200 kb/s, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding
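    For reference, the frame sequence and the processed audio can also be encoded and muxed in a single pass, which sidesteps the intermediate new.mp4 entirely. A sketch using current ffmpeg syntax (file names and frame rate taken from the post; the codec choices are assumptions, and the 2011-era build shown above would need the older -vcodec/-acodec spellings instead):
        ffmpeg -framerate 15 -i %d.jpg -i lotr.wav \
               -map 0:v -map 1:a \
               -c:v libx264 -pix_fmt yuv420p \
               -c:a aac -shortest \
               new_w_audio.mp4
    The -shortest flag stops the output at the shorter of the two streams, which avoids the encoder waiting on extra audio once the image sequence is exhausted.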

    Read the article

< Previous Page | 574 575 576 577 578 579 580 581 582 583 584 585  | Next Page >