Search Results

Search found 21343 results on 854 pages for 'pass by reference'.


  • Active Directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2, residing on a Windows network, to authenticate logins using Active Directory and then mount a Windows share to serve as the user's home directory. Here is what I did, starting from the initial installation of Ubuntu.

    Download and install Ubuntu Server 10.04 LTS Beta 2, then get updates:
    # sudo apt-get update && sudo apt-get upgrade

    Install an SSH server (sshd):
    # sudo apt-get install openssh-server
    Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an SSH session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid, or simply not convinced, then edit the file or give the following a spin:
    # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually."

    Install required packages:
    # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user

    Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your Windows domain name, DNS server name, and the IP address of the Active Directory server (for Samba). For convenience I set environment variables for the Windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11):
    # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN
    If you need to figure out what your domain and DNS server are (I was a contractor and didn't know the network), check out this helpful reference.

    The authentication and file-sharing processes on the Windows and Linux boxes need their clocks to agree. Do this with an NTP service; on the server version of Ubuntu the NTP service comes installed and preconfigured. The network I was joining had the DNS server serving up the NTP service too.
    # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf
    Restart the NTP daemon:
    # sudo /etc/init.d/ntp restart

    We need to christen the Linux box on the new network; this is done by editing the hosts file (replace the DNS entry with the FQDN of the Windows DNS):
    # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts

    Kerberos configuration. The instructions that follow aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key/value pair (it is in UPPERCASE WHERE IT NEEDS TO BE):
    [libdefaults] default_realm = MYDOMAIN.LOCAL
    Add the following to the [realms] section of the file:
    MYDOMAIN.LOCAL = { kdc = srv1.mydomain.local admin_server = srv1.mydomain.local default_domain = MYDOMAIN.LOCAL }
    Add the following to the [domain_realm] section of the file:
    .mydomain.local = MYDOMAIN.LOCAL
    mydomain.local = MYDOMAIN.LOCAL

    Configure Samba. When it's all said and done, I don't know where Samba fits in ... I used cifs to mount the Windows shares ... regardless, my system works and this is how I did it.
    Replace /etc/samba/smb.conf (remember I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything):
    [global]
    security = ads
    realm = MYDOMAIN.LOCAL
    password server = 192.168.20.11
    workgroup = MYDOMAIN
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind enum users = yes
    winbind enum groups = yes
    template homedir = /home/%D/%U
    template shell = /bin/bash
    client use spnego = yes
    client ntlmv2 auth = yes
    encrypt passwords = yes
    winbind use default domain = yes
    restrict anonymous = 2

    Start and stop various services.
    # sudo /etc/init.d/winbind stop
    # sudo service smbd restart
    # sudo /etc/init.d/winbind start

    Set up the authentication. Edit /etc/nsswitch.conf. Here are the contents of mine:
    passwd: compat winbind
    group: compat winbind
    shadow: compat winbind
    hosts: files dns
    networks: files
    protocols: db files
    services: db files
    ethers: db files
    rpc: db files

    Start and stop the services again.
    # sudo /etc/init.d/winbind stop
    # sudo service smbd restart
    # sudo /etc/init.d/winbind start

    At this point I could log in; home directories didn't exist, but I could log in. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to my own questions on the topic): Samba Kerberos Active Directory WinBind Mounting Linux user home directories on CIFS server Authenticating OpenBSD against Active Directory How to use Active Directory to authenticate linux users Mounting windows shares with Active Directory permissions Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit How practical is to authenticate a Linux server against AD? Auto-mounting a windows share on Linux AD login
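    Since the cifs automounting write-up never made it in, here is a minimal sketch of one common approach using pam_mount; the package name is real, but the server, share, and mount options below are placeholders rather than Jamie's actual values:

        # sudo apt-get install libpam-mount
        # then declare the home-directory volume in /etc/security/pam_mount.conf.xml:
        <volume fstype="cifs" server="srv1.mydomain.local" path="homes"
                mountpoint="/home/%(DOMAIN_NAME)/%(DOMAIN_USER)"
                options="nosuid,nodev" />

    pam_mount hooks into the PAM login stack, so the share is mounted with the user's own AD credentials at login and unmounted at logout.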


  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    Reluctant to ask because there is so much here about 'postfix relay access denied', but I cannot find my case: I use PHP (Zend Framework) to send emails outside my network via the Google mail server, because I could not send mail outside my server (user: web). However, when I send out an email via cron (user: root, I believe), still using ZF with the same mail config/credentials, I get the message: '5.7.1 Relay access denied'. I guess I need to know one of two things: 1. How can I use the Google SMTP server from cron? 2. What do I need to change in my config to send mail using my own server instead of Google? Though the answer to 2 is the more structural solution, I assume, I am quite happy with an answer to 1 as well, because I think Google is better at server maintenance (security/spam) than I am. Below are my ZF application.ini mail section, main.cf and master.cf:

    application.ini: resources.mail.transport.type = smtp resources.mail.transport.auth = login resources.mail.transport.host = "smtp.gmail.com" resources.mail.transport.ssl = tls resources.mail.transport.port = 587 resources.mail.transport.username = [email protected] resources.mail.transport.password = xxxxxxx resources.mail.defaultFrom.email = [email protected] resources.mail.defaultFrom.name = "my company"

    main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = mail.second-start.nl mydomain = second-start.nl alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes # see under Spam smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps virtual_transport = dovecot dovecot_destination_recipient_limit = 1 # Spam disable_vrfy_command = yes smtpd_delay_reject = yes smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit smtpd_error_sleep_time = 1s smtpd_soft_error_limit = 10 smtpd_hard_error_limit = 20 master.cf: # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
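    To isolate whether the failure lies in the cron environment or in the mail path itself, one quick diagnostic (a sketch: swaks would need to be installed first, and both addresses are placeholders) is to drive the same Gmail submission from a root shell:

        # apt-get install swaks
        # swaks --to someone@example.com --from info@mycompany.example \
        #       --server smtp.gmail.com:587 --tls \
        #       --auth LOGIN --auth-user info@mycompany.example

    If that succeeds as root, the Gmail relay and credentials are fine, and the suspect becomes how the cron job loads the ZF mail configuration (for instance, a config section or working directory that differs from the web user's environment).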


  • Arch Linux with an nginx/django setup refuses to display ANYTHING

    - by Holland
    I'm on Amazon EC2, with an Arch Linux server. While I truly am loving it, I'm having the issue of actually getting nginx to display anything. Every time I try to throw my hostname into the browser, the browser states that it's not available for some reason - almost as if the host doesn't even exist. One thing I'd like to know is, how can I get this up and running? Is there a specific Arch Linux configuration I have to do to make it web accessible? I have port 80 open, as well as port 22. I've tried using gunicorn, python-flup, and nginx.

    Nginx config: user http; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name _; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; location ^~ /media/ { root /path/to/media; } location ^~ /admin-media/ { root /usr/lib/python2.7/site-packages/django/contrib/admin/media; } location / { root /path/to/root/; fastcgi_pass 127.0.0.1:8080; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; fastcgi_index index.html; index index.htm index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /etc/nginx/html/50x.html; } } # server { # listen 80; # server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; # location / { # root html; # index index.html index.htm; # } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { root html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} #} # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} }

    I can't quite tell if it's a server issue or a configuration issue: I've followed so many guides now I can't even count them all.
    The thing is that Django itself is working fine, and my permissions on the document root where the site files are stored are 777. On top of that, I have a git repo which works perfectly fine, and django, python, and runfcgi all start without issues. The same goes for gunicorn, when I do a gunicorn_django -b 0.0.0.0:8000 in my document root. Here is my output from that: 2012-04-15 05:17:37 [3124] [INFO] Starting gunicorn 0.14.2 2012-04-15 05:17:37 [3124] [INFO] Listening at: http://0.0.0.0:8081 (3124) 2012-04-15 05:17:37 [3124] [INFO] Using worker: sync 2012-04-15 05:17:37 [3127] [INFO] Booting worker with pid: 3127 As far as I know, everything seems fine, as do error.log and access.log for nginx. The access log is completely blank, for that matter. I just feel lost here; what would be a step in the right direction for debugging an issue such as this?
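    Since the location / block passes requests to a FastCGI backend on 127.0.0.1:8080, a useful sanity check (a sketch, assuming the pre-1.9 Django runfcgi management command and flup that are already in use here) is to start the backend on exactly that port and test from the instance itself, which bypasses the EC2 firewall:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080
        netstat -tlnp | grep -E ':80 |:8080 '   # confirm nginx and the backend are both listening
        curl -v http://127.0.0.1/               # exercises nginx -> FastCGI locally

    If the local curl works but the hostname still doesn't, the usual culprit on EC2 is the security group attached to the instance rather than anything on the OS itself.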


  • How to share internet over VPN and inside a virtual machine (Windows)?

    - by mountrix
    My final goal is to have a virtual machine at work in which anything that happens inside (tcp, udp, ping, ...) will use the Internet connection of a computer at home. So, if inside this VM I open a browser and go to a site such as "show my IP", my home IP should be printed. I am also looking for a way to debug/develop a piece of software inside this VM, but I would like to tunnel only that software's connections, not the full graphical interface, which is why a Remote Desktop solution won't fit. The connection between the two computers should be secured somehow, like in an SSH tunnel. This ultimately should allow me to have a portable VM in which I can connect to whatever networks I have access to at home, in a secure way.

    This is my configuration: At work, I have a LAN-connected desktop computer, with Windows 7 Professional Edition as a host [computer W]. On this same computer, I have a VirtualBox machine running Windows XP [computer V]. At home, I have a laptop, running Windows 7 Home Edition [computer H]. This laptop is connected to a Livebox 2 broadband modem by wifi.

    What I am trying to do is to sit at work in front of the virtual machine [V] and connect to a webpage as if the request were issued from the laptop [H] at home, with the data securely tunneled between the two. But if I use the Internet directly inside [W], it should use the normal LAN interface at work. To achieve my goal, I first tried a VPN, then SSH tunneling, without success.

    I first installed Teamviewer between [W] and [H]. This works fine: I can send files, share the desktop, etc. Teamviewer has a VPN mode that creates a new VPN network interface with its own IP, both on computer [W] and [H]. This allowed me to reach [H] as a network computer from [W] and I was able to share files, but not to share the Internet connection. At this point, I tried to use the Internet from [W] as if I were at home. I set up a route (using route add on the command line in [W]) to instruct each packet going to a given website to pass via the new VPN interface on [W], in the hope it would be forwarded to [H], but the webpage was simply inaccessible.

    I then tried to set up a Windows VPN connection between [W] and [H], using the Windows 7 VPN feature. [H] was the server and [W] the client. But it failed: I got the 720 error ("Unable to join a remote PC") while setting up the client on [W]. I think the problem is the Livebox 2, which could be blocking the packets. But I am not sure of this: 1) with Teamviewer it works fine; 2) the Livebox 2 has a configuration page for port mapping that gives the proper configuration to map VPN ports as an example, so I guess it should allow it; 3) I opened ports 1723 (TCP) and 500 (UDP) according to some forums.

    VirtualBox has a network configuration parameter by which I can use the VPN network interface created by Teamviewer as a bridged connection. This is supposed to work in the sense that all packets issued by the virtual machine [V] go directly to [H]. But I had no Internet connection inside [V]. Using NAT mode, [V] has Internet. The feature I am really looking for is this: route all connections from the VirtualBox application through the VPN network interface, and let everything else use the normal LAN interface. Apart from the built-in feature of VirtualBox, I do not even know whether it is possible to route the packets of a given application through a given interface. Finally I also tried SSH tunneling, but this is not the solution I was looking for.
    Using an external SSH server (Linux), I was able to create a localhost connection on [W] (or [V]), using something like 'ssh -N -D server[H]', allowing a web browser located on [W] to connect to any website through the SOCKS 5 proxy created locally (SOCKS is a built-in feature of SSH). But repeating the same operation on Windows, using a Windows SSH server inside [W] (I tried freeSSHd), failed: SFTP worked, but not the SOCKS tunneling; it was as if the browser on [H] could not find the Internet. Finally, only Teamviewer seemed able to create a VPN between [W] and [H], but I am not able to use it the way I want, i.e. using the Internet connection of [H] while sitting in front of [W]. I also tried to bridge the VPN interface and the wifi interface inside [H], but it locked up my laptop, and I also tried Internet Connection Sharing, trying to share [H]'s wifi connection over the VPN interface. This also fails, but it seems to be because Teamviewer actually uses the wifi interface to provide the VPN link, so I guess I was creating a recursive loop. I do not know what to try next... Thank you for any advice!!
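    For reference, the per-destination routing experiment described above would look roughly like this on [W] (a sketch: 203.0.113.10 stands in for the target website's IP, and the gateway must be [H]'s address on the VPN interface; both are placeholders):

        :: send traffic for one destination via the VPN interface towards [H]
        route add 203.0.113.10 mask 255.255.255.255 <H's-VPN-IP> metric 1
        :: confirm which gateway and interface will actually be used
        route print 203.0.113.10

    Note this only helps if [H] also forwards and NATs what it receives (e.g. via Internet Connection Sharing on the VPN adapter); a bare VPN link with no forwarding on the far end will behave exactly as observed, with the webpage simply inaccessible.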


  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server>

    The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now, with the latest version of XAMPP and Tomcat, it doesn't work anymore. All I get is a 404 message saying "The requested resource () is not available."
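    On newer Tomcat releases the usual convention is to keep <Context> out of server.xml and put it in a per-host descriptor instead, which tends to survive upgrades better. A sketch using the same host and docBase as above:

        conf/Catalina/www.rebootradio.nu/ROOT.xml:
        <Context docBase="D:/services/http/rebootradio.nu" reloadable="true"/>

    The <Host> element in server.xml then only needs its name and alias. Note also that the debug attribute on <Context> was dropped in later Tomcat versions, so it is at best ignored.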


  • Unable to access intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone. First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled in the following Intel AHCI BIOS fake-RAID arrays: 2x320GB RAID1 - used for operating systems, md126; 2x1TB RAID1 - used for data, md125. I used the 320GB array to install my operating system, and the second array I didn't even select during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available; it was possible to make it visible in Linux with mdadm --assemble --scan, and after that I created one maximum-size partition and one maximum-size ext4 filesystem in it. Mounted it, and used it. After a restart there were a few I/O errors during boot regarding md125, plus an inability to mount the filesystem on it, and I was dropped into a repair shell. I commented out the filesystem in fstab and it booted. To my surprise, the array was marked as "auto read only":

    [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active (auto-read-only) raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]#

    and the partition in it was not available as a device special file in /dev:

    [root@localhost ~]# ls -l /dev/md125* brw-rw---- 1 root disk 9, 125 Jan 6 15:50 /dev/md125 [root@localhost ~]#

    But the partition is there according to fdisk:

    [root@localhost ~]# fdisk -l /dev/md125 Disk /dev/md125: 1000.2 GB, 1000202043392 bytes 19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1b238ea9 Device Boot Start End Blocks Id System /dev/md125p1 2048 1953519615 976758784 83 Linux [root@localhost ~]#

    I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but it was impossible - the array would stay in "auto read only" and the device special file for the partition would not appear in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until the next reboot. So, my question is - how do I make the array automatically available after reboot?
Here is some additional information: 1st I am able to see the UUID of the array in blkid: [root@localhost ~]# blkid /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs" /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4" /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member" /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4" /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap" /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4" /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4" /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4" /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sda: TYPE="isw_raid_member" /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4" [root@localhost ~]# Here is how /etc/mdadm.conf looks like: [root@localhost ~]# cat /etc/mdadm.conf # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9 ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a [root@localhost ~]# here is how /proc/mdstat looks like after I recreate the partition in the array so that it becomes available: [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]# Detailed output regarding the array in subject: [root@localhost ~]# mdadm --detail /dev/md125 /dev/md125: Container : /dev/md127, member 0 Raid Level : raid1 Array Size : 976759808 (931.51 GiB 1000.20 GB) Used Dev Size : 976759940 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 2 Update Time : Fri Jan 7 00:38:00 2011 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782 Number Major Minor RaidDevice State 1 8 32 0 active sync /dev/sdc 0 8 48 1 active sync /dev/sdd [root@localhost ~]# and /etc/fstab, with /data commented (the filesystem that is on this array): # # /etc/fstab # Created by anaconda on Thu Jan 6 03:32:40 2011 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vg00-rootlv / ext4 defaults 1 1 UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d /boot ext4 defaults 1 2 #UUID=420adfdd-6c4e-4552-93f0-2608938a4059 /data ext4 defaults 0 1 /dev/mapper/vg00-homelv /home ext4 defaults 1 2 /dev/mapper/vg00-tmplv /tmp ext4 defaults 1 2 /dev/mapper/vg00-varlv /var ext4 defaults 1 2 /dev/mapper/vg00-swaplv swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 [root@localhost ~]# Thanks in advance to everyone that even read this whole issue :-)
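    For reference, two follow-ups worth sketching (hedged: I have not verified these on this exact IMSM setup). First, an array stuck in auto-read-only can be kicked back into normal read-write mode, and the ARRAY lines can be regenerated for comparison with what anaconda wrote:

        # mdadm --readwrite /dev/md125
        # mdadm --detail --scan          # compare against the ARRAY lines in /etc/mdadm.conf

    Second, note that the UUID recorded for /dev/md125 in /etc/mdadm.conf above (b9a1149f:...) matches the ext3 filesystem UUID that blkid reports rather than the array UUID that mdadm --detail prints (30ebc3c2:...), which is exactly the kind of mismatch that can make assembly-by-UUID fail at boot.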


  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a Rails 4 app. I asked to have my server re-imaged, specifying that I do not want rails, ruby, or passenger installed automatically, as I wanted to install the latest versions myself using a version manager (Bluehost by default offers rails 2.3, ruby 1.8, and passenger 3, which won't work with my app). I installed ruby 1.9.3p327, rails 4.0.0, and passenger 4.0.5. I can verify this by typing "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 forbidden error when trying to deploy the app. I installed passenger by typing "gem install passenger", and then installed the nginx passenger module (into "/nginx") with "passenger-install-nginx-module". I am trying to run my rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In Bluehost I created that subdomain and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists. The problem is: when I go to http://development.thegraduate.hk, I get a directory listing. There is nothing resembling a rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of passenger. It was meant to be 'install and go'. When I type "passenger-memory-status", I get 3 things: - Apache processes (7) - Nginx processes (0) - Passenger processes (0) So it appears that nginx and passenger are not running, and I can't figure out how to get them running (I'm not looking to run as a standalone server).

    Here is my nginx.conf file (/nginx/conf/nginx.conf): #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5; passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name development.thegraduate.hk; root ~/rails_apps/thegraduate/public; passenger_enabled on; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / 
{ # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I don't get any errors, just the directory listing. I've tried to be as detailed as possible. Any help on this issue would be greatly appreciated as I've been stumped for the past 3 days. Scouring the web has not helped as my issue seems to be specific to me. Thanks so much. If there are any potential details I forgot to specify, just ask. ** ADDITIONAL INFORMATION ** Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
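    Two things worth checking here. First, passenger-install-nginx-module compiles nginx from source into the given prefix but does not create an init script, so nothing starts it at boot; a quick sketch (paths assume the /nginx prefix used above):

        /nginx/sbin/nginx -t              # validate the config first
        /nginx/sbin/nginx                 # start nginx with /nginx/conf/nginx.conf
        ps aux | grep -E 'nginx|Passenger'

    Second, nginx does not expand "~" in the root directive, so "root ~/rails_apps/thegraduate/public;" should be an absolute path such as /home/thegrad4/rails_apps/thegraduate/public (the username is inferred from the passenger_root path above, so treat it as an assumption).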


  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it; it apparently works.

    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following:

    [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
    [ 0.462388] Please append a correct "root=" boot option; here are the available partitions:
    [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
    [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
    [ 0.466526] Call Trace:
    [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194
    [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
    [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59
    [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
    [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145
    [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
    [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
    [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image both /etc/fstab and /boot/grub/grub.cfg refer to the root filesystem by that label. Everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came of it. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks

    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, and replaced them with the -virtual ones. Then I extracted the new kernel (and replaced the original -generic one that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

    mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

    LABEL=cloudimg-rootfs ext4 defaults 0 1

    Damn, where's my mount point? Note that it is missing /proc as well. Then, just when you think it is over, you will notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo
    iface lo inet loopback

    Oh my! It is missing eth0... and here I stopped. I couldn't take any more. It looks like Canonical simply forgot to set this image up properly.
    At first I thought: "have I downloaded a server image by mistake?", but no, I double-checked. It is really the cloud image; it even has "cloud-init" installed (which is not there by default on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon! EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...
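    For anyone wanting to reproduce the fix, here is a rough sketch of the chroot surgery described above (mount points are illustrative, the kernel version comes from the panic trace, and the fstab/interfaces lines are the standard minimal entries the image was missing):

        # mount -o loop oneiric-server-cloudimg-amd64.img /mnt/img
        # mount --bind /dev /mnt/img/dev && mount --bind /proc /mnt/img/proc
        # chroot /mnt/img
        (chroot)# apt-get update && apt-get install linux-virtual
        (chroot)# apt-get purge linux-image-3.0.0-13-generic linux-image-generic
        # then, still inside the image:
        #   fix /etc/fstab:              LABEL=cloudimg-rootfs  /  ext4  defaults  0 1
        #   fix /etc/network/interfaces: add "auto eth0" and "iface eth0 inet dhcp"
        # finally copy /boot/vmlinuz-*-virtual and initrd.img-*-virtual out of the
        # image for uec-publish-tarball before unmounting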


  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

    1. Boot the workstation from the NIC.
    2. The workstation sends a DHCP request.
    3. The DHCP server responds with an IP address and the location of the PXE server.
    4. The workstation downloads a WinPE image file from the PXE server via TFTP.
    5. The workstation stores the WinPE image file in memory and executes it.

    Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages.

    This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP ProLiant DL360 G6, DL380 G5 and DL380 G6. They're running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP ProLiant servers using the SmartStart CD provided. SmartStart does three useful things for us:

    1. Sets up RAID with the HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP ProLiant Integrated Management Log Viewer, etc).

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following:

    1. As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access the Option ROM Configuration for Arrays (ORCA).
    2. Installation of SNMP and the HP tools will have to be done once the Windows installation is complete, using the ProLiant Support Pack.

    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12
    I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array, and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE, as I'm happy to just press F8 and use ORCA. Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. For example:
    Configure one server to your requirements, capture its configuration to a file (using the appropriate tool), then use the tool on other servers, passing the input file with the captured configuration. Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them into a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. two physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of HP tools for Windows for you. I've had a look today, and if you run the SmartStart CD from within Windows, all the tools can be installed; therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers, and I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different from most drivers: I believe you have to provide the driver during the very early stages of Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...
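    As a sketch of that capture-and-replay workflow (the tool names come from the SmartStart Scripting Toolkit, but the exact switches vary by toolkit version, so verify them against the documentation before relying on them):

        conrep -s -f bios-reference.dat        REM capture BIOS settings from a reference server
        conrep -l -f bios-reference.dat        REM load them onto a new server from WinPE
        hpacuscripting -c array-reference.ini  REM capture the array configuration
        hpacuscripting -i array-reference.ini  REM replay it on the target server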


  • Apache Tomcat load balancing and clustering on Ubuntu

    - by user740010
    I am facing a problem clustering Tomcat behind Apache as a load balancer using mod_jk on Ubuntu. I have installed apache2 on Ubuntu 11.04, downloaded tomcat7, created two copies, and kept them at two different locations: the 1st is at /home/net4u/vishal/test/tomcatA, the 2nd at /home/net4u/vishal/test1/tomcatB. I have made the following changes to the server.xml file in the /conf folder: 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3. <Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> I have modified the other Tomcat (i.e. tomcatA) similarly. The content of the server.xml is as follows:

    <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine>

    I have installed libapache2-mod-jk. Step 1: I created a jk.load file at /etc/apache2/mods-enabled/jk.load; its content is as follows:

    LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so

    Then I created /etc/apache2/mods-enabled/jk.conf:

    JkWorkersFile /etc/apache2/workers.properties
    JkLogFile /var/log/apache2/jk.log
    JkMount /ecommerce/* worker1
    JkMount /images/* worker1
    JkMount /content/* worker1

    Step 2: I created a workers.properties file at /etc/apache2/workers.properties; its content is as follows:

    workers.tomcat_home=/home/vishal/Desktop/test/tomcatA
    workers.java_home=/usr/lib/jvm/default-java
    ps=/
    worker.list=tomcatA,tomcatB,loadbalancer
    worker.tomcatA.port=8109
    worker.tomcatA.host=localhost
    worker.tomcatA.type=ajp13
    worker.tomcatA.lbfactor=1
    worker.tomcatB.port=8209
    worker.tomcatB.host=localhost
    worker.tomcatB.type=ajp13
    worker.tomcatB.lbfactor=1
    worker.loadbalancer.type=lb
    worker.loadbalancer.balanced_workers=tomcatA,tomcatB
    worker.loadbalancer.sticky_session=1

    I tried the same thing on a Windows machine and there it works.
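    One inconsistency worth noting in the files above: jk.conf mounts /ecommerce/*, /images/* and /content/* to a worker named "worker1", but workers.properties defines only tomcatA, tomcatB and loadbalancer, so those requests have no matching worker. A minimal consistent pairing (a sketch; the URL pattern is just an example) would be:

        # /etc/apache2/mods-enabled/jk.conf
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/jk.log
        JkMount /examples/* loadbalancer

        # /etc/apache2/workers.properties (excerpt)
        worker.list=loadbalancer
        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=tomcatA,tomcatB

    (balance_workers is the current spelling; balanced_workers is the older, deprecated form.)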


  • NMap route determination on Windows 7 x64

    - by user30772
    C:\Windows\system32>nmap --iflist

    Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 06:51 Central Daylight Time

    ************************INTERFACES************************
    DEV  (SHORT) IP/MASK                      TYPE        UP   MTU  MAC
    eth0 (eth0)  fe80::797f:b9b6:3ee0:27b8/64 ethernet    down 1500 5C:AC:4C:E9:2D:46
    eth0 (eth0)  169.254.39.184/4             ethernet    down 1500 5C:AC:4C:E9:2D:46
    eth1 (eth1)  fe80::5c02:7e48:8fbe:c7c9/64 ethernet    down 1500 00:FF:3F:7C:7C:2B
    eth1 (eth1)  169.254.199.201/4            ethernet    down 1500 00:FF:3F:7C:7C:2B
    eth2 (eth2)  fe80::74e4:1ab7:1b7d:a0d0/64 ethernet    up   1500 14:FE:B5:BA:8A:C3
    eth2 (eth2)  10.0.0.0.253/24              ethernet    up   1500 14:FE:B5:BA:8A:C3
    eth3 (eth3)  fe80::b03e:ddf5:bb5c:5f76/64 ethernet    up   1500 00:50:56:C0:00:01
    eth3 (eth3)  169.254.95.118/16            ethernet    up   1500 00:50:56:C0:00:01
    eth4 (eth4)  fe80::b175:831d:e60:27b/64   ethernet    up   1500 00:50:56:C0:00:08
    eth4 (eth4)  192.168.153.1/24             ethernet    up   1500 00:50:56:C0:00:08
    lo0  (lo0)   ::1/128                      loopback    up   -1
    lo0  (lo0)   127.0.0.1/8                  loopback    up   -1
    tun0 (tun0)  fe80::100:7f:fffe/64         point2point down 1280
    tun1 (tun1)  (null)/0                     point2point down 1280
    tun2 (tun2)  fe80::5efe:a9fe:5f76/128     point2point down 1280
    tun3 (tun3)  (null)/0                     point2point down 1280
    tun4 (tun4)  fe80::5efe:c0a8:9901/128     point2point down 1280
    tun5 (tun5)  fe80::5efe:ac14:fd/128       point2point down 1280

    DEV  WINDEVICE
    eth0 \Device\NPF_{0024872A-5A41-42DF-B484-FB3D3ED3FCE9}
    eth0 \Device\NPF_{0024872A-5A41-42DF-B484-FB3D3ED3FCE9}
    eth1 \Device\NPF_{3F7C7C2B-9AF3-45BB-B96E-2F00143CC2F7}
    eth1 \Device\NPF_{3F7C7C2B-9AF3-45BB-B96E-2F00143CC2F7}
    eth2 \Device\NPF_{08116FE5-F0FF-498A-9BF1-515528C57C13}
    eth2 \Device\NPF_{08116FE5-F0FF-498A-9BF1-515528C57C13}
    eth3 \Device\NPF_{AA83C6CE-AB2E-4764-92D1-CDEAFBA7AD21}
    eth3 \Device\NPF_{AA83C6CE-AB2E-4764-92D1-CDEAFBA7AD21}
    eth4 \Device\NPF_{D0679889-E9D4-411D-BDC5-F4DDB758E151}
    eth4 \Device\NPF_{D0679889-E9D4-411D-BDC5-F4DDB758E151}
    lo0  <none>
    lo0  <none>
    tun0 <none>
    tun1 <none>
    tun2 <none>
    tun3 <none>
    tun4 <none>
    tun5 <none>

    **************************ROUTES**************************
    DST/MASK            DEV  GATEWAY
    192.168.153.255/32  eth0
    255.255.255.255/32  eth0
    255.255.255.255/32  eth0
    127.0.0.1/32        eth0
    127.255.255.255/32  eth0
    255.255.255.255/32  eth0
    169.254.95.118/32   eth0
    169.254.255.255/32  eth0
    10.0.0.0.253/32     eth0
    255.255.255.255/32  eth0
    10.0.0.0.255/32     eth0
    255.255.255.255/32  eth0
    192.168.153.1/32    eth0
    255.255.255.255/32  eth0
    10.0.0.0.0/24       eth0
    192.168.153.0/24    eth0
    10.10.10.0/24       eth0 10.0.0.0.4
    169.254.0.0/16      eth0
    127.0.0.0/8         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    0.0.0.0/0           eth0 10.0.0.0.1

    JMeterX - I worded it that way in hopes of raising answer efficiency, but that probably wasn't the smartest choice. IMHO the problem (it could be a symptom) is that nmap stubbornly chooses eth0 as the gateway interface for any and all networks. Here's the result:

    C:\Windows\system32>nmap 10.0.0.55

    Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 07:43 Central Daylight Time
    Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
    Nmap done: 1 IP address (0 hosts up) scanned in 0.95 seconds

    C:\Windows\system32>nmap -e eth2 10.0.0.55

    Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 07:44 Central Daylight Time
    Nmap scan report for esxy5.dionne.net (10.0.0.55)
    Host is up (0.00070s latency).
    Not shown: 991 filtered ports
    PORT     STATE  SERVICE
    22/tcp   open   ssh
    80/tcp   open   http
    427/tcp  open   svrloc
    443/tcp  open   https
    902/tcp  open   iss-realsecure
    5988/tcp closed wbem-http
    5989/tcp open   wbem-https
    8000/tcp open   http-alt
    8100/tcp open   xprint-server
    MAC Address: 00:1F:29:59:C7:03 (Hewlett-Packard Company)
    Nmap done: 1 IP address (1 host up) scanned in 5.29 seconds

    Just to be clear, this is what makes absolutely no sense to me whatsoever. For reference, I've included similar info from an Ubuntu VM (that works normally) on the affected host below.

    Jacked Windows 7:
    **************************ROUTES**************************
    DST/MASK            DEV  GATEWAY
    192.168.153.255/32  eth0
    255.255.255.255/32  eth0
    255.255.255.255/32  eth0
    127.0.0.1/32        eth0
    127.255.255.255/32  eth0
    255.255.255.255/32  eth0
    169.254.95.118/32   eth0
    169.254.255.255/32  eth0
    10.0.0.0.253/32     eth0
    255.255.255.255/32  eth0
    10.0.0.0.255/32     eth0
    255.255.255.255/32  eth0
    192.168.153.1/32    eth0
    255.255.255.255/32  eth0
    10.0.0.0.0/24       eth0
    192.168.153.0/24    eth0
    10.10.10.0/24       eth0 10.0.0.0.4
    169.254.0.0/16      eth0
    127.0.0.0/8         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    224.0.0.0/4         eth0
    0.0.0.0/0           eth0 10.0.0.0.1

    Working Ubuntu VM:
    root@ubuntu:~# nmap --iflist

    Starting Nmap 5.21 ( http://nmap.org ) at 2012-08-31 07:44 PDT

    ************************INTERFACES************************
    DEV  (SHORT) IP/MASK            TYPE     UP MAC
    lo   (lo)    127.0.0.1/8        loopback up
    eth0 (eth0)  172.20.0.89/24     ethernet up 00:0C:29:0A:C9:35
    eth1 (eth1)  192.168.225.128/24 ethernet up 00:0C:29:0A:C9:3F
    eth2 (eth2)  192.168.150.128/24 ethernet up 00:0C:29:0A:C9:49

    **************************ROUTES**************************
    DST/MASK         DEV  GATEWAY
    192.168.225.0/0  eth1
    192.168.150.0/0  eth2
    172.20.0.0/0     eth0
    169.254.0.0/0    eth0
    0.0.0.0/0        eth0 172.20.0.1

    root@ubuntu:~# nmap esxy2

    Starting Nmap 5.21 ( http://nmap.org ) at 2012-08-31 07:44 PDT
    Nmap scan report for esxy2 (172.20.0.52)
    Host is up (0.00036s latency).
    rDNS record for 172.20.0.52: esxy2.dionne.net
    Not shown: 994 filtered ports
    PORT     STATE  SERVICE
    80/tcp   open   http
    427/tcp  closed svrloc
    443/tcp  open   https
    902/tcp  closed iss-realsecure
    8000/tcp open   http-alt
    8100/tcp open   unknown
    MAC Address: 00:04:23:B1:FA:6A (Intel)
    Nmap done: 1 IP address (1 host up) scanned in 4.76 seconds
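    Until the interface selection is sorted out, the workaround hinted at above can be made explicit by pinning both the sending interface and the source address (a sketch; the -S value must be eth2's actual address, which appears garbled in the listing above, so treat it as a placeholder):

        nmap -e eth2 -S <eth2-address> -Pn 10.0.0.55

    -e forces the sending interface and -S the source IP; some nmap builds insist on having both when the routing table cannot supply a matching source address on its own.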

    Read the article

  • Setup routing and iptables for new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using openvpn) to allow me to route around some ISP restrictions. Whilst it is working fine, it is taking all the traffic over the vpn. This is causing me issues for downloading (my internet connection is a lot faster than the vpn allows), and for remote access. I run an ssh server, and have a daemon running that allows me to schedule downloads via my phone. I have my existing ethernet connection on eth0, and the new VPN connection on tun0. I believe I need to set up the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it! My current routing table is:

    Kernel IP routing table
    Destination   Gateway       Genmask         Flags Metric Ref Use Iface MSS Window irtt
    0.0.0.0       10.51.0.169   0.0.0.0         UG    0      0   0   tun0  0   0      0
    10.51.0.1     10.51.0.169   255.255.255.255 UGH   0      0   0   tun0  0   0      0
    10.51.0.169   0.0.0.0       255.255.255.255 UH    0      0   0   tun0  0   0      0
    85.25.147.49  192.168.0.1   255.255.255.255 UGH   0      0   0   eth0  0   0      0
    169.254.0.0   0.0.0.0       255.255.0.0     U     1000   0   0   eth0  0   0      0
    192.168.0.0   0.0.0.0       255.255.255.0   U     1      0   0   eth0  0   0      0

    After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything for destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated.

    UPDATE: So far, from the various sources, I've cobbled together the following:

    #!/bin/sh
    DEV1=eth0
    IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.`
    GW1=192.168.0.1
    TABLE1=internet
    TABLE2=vpn
    DEV2=tun0
    IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
    GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'`
    ip route flush table $TABLE1
    ip route flush table $TABLE2
    ip route show table main | grep -Ev ^default | while read ROUTE ; do
        ip route add table $TABLE1 $ROUTE
        ip route add table $TABLE2 $ROUTE
    done
    ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1
    ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2
    ip route add table $TABLE1 default via $GW1
    ip route add table $TABLE2 default via $GW2
    echo "1" > /proc/sys/net/ipv4/ip_forward
    echo "1" > /proc/sys/net/ipv4/ip_dynaddr
    ip rule add from $IP1 lookup $TABLE1
    ip rule add from $IP2 lookup $TABLE2
    ip rule add fwmark 1 lookup $TABLE1
    ip rule add fwmark 2 lookup $TABLE2
    iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1
    iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2
    iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
    iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1
    iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2
    iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1
    iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2
    iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark
    iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2
    iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2
    route del default
    route add default gw 192.168.0.1 eth0

    Now this seems to be working.
    Except it isn't! Connections to the blocked websites are going through, and connections not on ports 80 and 443 are using the non-VPN connection. However, port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas? For reference, I now have 3 routing tables: main, internet, and vpn. The listing of them is as follows...

    Main:
    default via 192.168.0.1 dev eth0
    10.38.0.1 via 10.38.0.205 dev tun0
    10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
    85.removed via 192.168.0.1 dev eth0
    169.254.0.0/16 dev eth0 scope link metric 1000
    192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Internet:
    default via 192.168.0.1 dev eth0
    10.38.0.1 via 10.38.0.205 dev tun0
    10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
    85.removed via 192.168.0.1 dev eth0
    169.254.0.0/16 dev eth0 scope link metric 1000
    192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
    192.168.0.1 dev eth0 scope link src 192.168.0.73

    VPN:
    default via 10.38.0.205 dev tun0
    10.38.0.1 via 10.38.0.205 dev tun0
    10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
    85.removed via 192.168.0.1 dev eth0
    169.254.0.0/16 dev eth0 scope link metric 1000
    192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
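    A likely culprit, for anyone landing here later: packets generated on the box itself never traverse PREROUTING, so the CONNMARK rules above never see locally-originated port 80/443 connections. A minimal sketch of the missing piece, reusing the vpn table and mark 2 from the script above (assumption: the goal is to steer local outbound web traffic only):

        # locally generated traffic goes through mangle OUTPUT, not PREROUTING
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 2
        # make the kernel re-evaluate cached routing decisions
        ip route flush cache

    With the mark set in mangle OUTPUT, the existing "ip rule add fwmark 2 lookup vpn" rule routes just web traffic via tun0, while ssh and everything else keeps the eth0 default.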

    Read the article

  • Resolve Wrong IP from Domain Name only on certain networks

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. There are several strange things about this problem, so I will break it down into steps. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I use the automatic update client, but I logged in and checked, and my no-ip domain has the proper IP tied to it. Here is a link to the homepage through the no-ip domain for reference. Also, I do a ping and a traceroute on the no-ip domain and get:

    [eckertzs@localhost ~]$ ping -c 1 endradil.noip.me
    PING endradil.noip.me (65.24.215.99) 56(84) bytes of data.
    64 bytes from endradil.noip.me (65.24.215.99): icmp_seq=1 ttl=64 time=2.23 ms
    --- endradil.noip.me ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 104ms
    rtt min/avg/max/mdev = 2.233/2.233/2.233/0.000 ms

    [eckertzs@localhost ~]$ traceroute endradil.noip.me
    traceroute to endradil.noip.me (65.24.215.99), 30 hops max, 60 byte packets
     1 . (192.168.2.1) 1.755 ms 5.409 ms 5.380 ms
     2 endradil.noip.me (65.24.215.99) 6.297 ms 9.543 ms 10.324 ms

    Using this domain, I can connect to my webserver without issue or interruption (the https is required to avoid a redirect serverside, but it works). I also have a domain I bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain:

    CNAME Record
    Host: www
    Points to: endradil.noip.me
    TTL: 1 hour

    For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of the past few days, however, the GoDaddy domain has only worked intermittently, for a few minutes at a time, and then goes down for hours at a time. I get "server not found" errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have run online tests of the DNS and have seen that the website is visible to external servers and resolves to the correct IP. I also contacted GoDaddy support, but they had no issues connecting to the website and therefore did not see any issues. My personal computers (Windows desktop, linux laptop, Android phone) all fail to connect when on my personal wifi. If I disconnect my phone from the wifi and use my AT&T wireless data, it can connect with both domains without issue. When I attempt to use Google webmaster tools to crawl the site using the GoDaddy domain, Google can not find the site. From my linux laptop, I have found some interesting results when I ping or traceroute the domain:

    [eckertzs@localhost ~]$ ping -c 1 www.endradil.com
    PING www.endradil.com.Belkin (198.105.244.228) 56(84) bytes of data.
    --- www.endradil.com.Belkin ping statistics ---
    1 packets transmitted, 0 received, 100% packet loss, time 10000ms

    [eckertzs@localhost ~]$ traceroute www.endradil.com
    traceroute to www.endradil.com (198.105.244.228), 30 hops max, 60 byte packets
     1 . (192.168.2.1) 1.918 ms 2.806 ms 2.772 ms
     2 cpe-65-24-208-1.insight.res.rr.com (65.24.208.1) 29.247 ms 29.654 ms 30.094 ms
     3 cpe-69-23-24-117.new.res.rr.com (69.23.24.117) 15.597 ms 23.218 ms 23.581 ms
     4 agg24.clmcohib01r.midwest.rr.com (65.29.1.52) 30.581 ms 30.556 ms 31.192 ms
     5 be27.clevohek01r.midwest.rr.com (65.29.1.38) 30.580 ms 31.062 ms 31.038 ms
     6 bu-ether25.atlngamq47w-bcr01.tbone.rr.com (107.14.19.38) 37.863 ms 68.844 ms 43.773 ms
     7 107.14.17.178 (107.14.17.178) 51.866 ms 51.019 ms 50.989 ms
     8 ae0.pr1.dca10.tbone.rr.com (107.14.17.200) 48.467 ms ae-4-0.a0.lax91.tbone.rr.com (66.109.1.113) 49.912 ms *
     9 v413.core1.ash1.he.net (209.51.175.33) 60.270 ms 50.842 ms 50.819 ms
    10 100ge5-1.core1.nyc4.he.net (184.105.223.166) 55.597 ms 56.045 ms 56.020 ms
    11 xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242) 56.001 ms 55.969 ms 55.992 ms
    12 * * *

    Both show the incorrect IP, and the traceroute times out on hops 12 through 255 (output truncated above). A traceroute using site24x7 works and shows reasonable results when run from their California server. From another linux box on a different network but in the same city as me (10 miles away), I still get a timeout for the traceroute; however, the IP resolves correctly for the domain. From this I believe that the DNS result is incorrectly cached in either my router/modem or perhaps even at my ISP level. My question is, first, how do I find out exactly what is wrong, and second, how do I resolve it?
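    One quick way to localise the fault (a diagnostic sketch; 8.8.8.8 is simply a public resolver, substitute any you trust): the ".Belkin" suffix in the ping output suggests the router is appending its own search domain and answering the lookup itself, so compare its answer with an outside one:

        dig www.endradil.com @192.168.2.1 +short
        dig www.endradil.com @8.8.8.8 +short

    If only the router's answer is 198.105.244.228, the bad record lives in the router (or whatever it forwards to); pointing /etc/resolv.conf, or the DNS servers the router hands out via DHCP, at an external resolver and retesting confirms it.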

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around Windows Deployment Workbench. Unfortunately this was set up by someone who is no longer with the company, and no-one has ever dared/needed to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some stuff in it, clicked something, and now it no longer works (he is saying now that he right-clicked on the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through, apparently). The job has fallen on me to resolve, and frankly I'm not sure what I'm doing. I was wondering if someone with more experience than me can provide some pointers for troubleshooting, because I'm feeling quite a lot in the dark here. On the server I have Deployment Workbench up and running (MMC snap-in) version 3.0. There is a WDS service that appears to be running OK, as does the tFTPd service. There is nothing specific to this in the event logs. From the client side: PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that it's loading Win PE from the network). WPEINIT runs and asks for domain credentials; here the team simply put user/pass/domain in the boxes and click OK. Normally the build would kick off. Instead they get an error message saying that the \\NATBLU01\Distribution$ share isn't available. Checking \\NATBLU01\Distribution$ shows that it's there and accessible over the network. Security/permissions seem OK; even 'ANONYMOUS LOGON' has read access to that share, so I don't see that being a problem. Digging the trace files from C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build, I can see an error saying much the same:

    <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">
    <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">

    BDD.LOG shows much the same. Full copies of both .LOG files can be found here: BDD.LOG LITETOUCH.LOG. I can get to a command prompt from the Win PE that boots from PXE; however, there isn't any network stuff there. IPCONFIG returns nothing, so none of the tests I would usually run resolve anything. I'm at a loss, frankly. I did wonder if I could perhaps start a new build process, but if the change to the Deployment Workbench has knocked it offline, I don't think I'm going to be able to create a new deployment. Failing that, we do have a deployment point labeled type 'Media' which appears to be a DVD ISO image of one of the builds, but it's dated 2008; is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 roll-out), so a temporary workaround isn't going to be too much of a problem. All assistance is appreciated! EDIT: OK, got it working again. The solution was close to Newmanth's idea.
    The problem was that our PE image didn't appear to be connecting to the network. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection; it showed a correct internal IP, could ping out, etc. However, I was still getting the same errors in all the logs and while wpeinit was running. What I did separately was to update the PE image that Deployment Workbench was pushing out to display a different background; I wanted to prove that I was working in the correct place. Turns out that I wasn't. I went and looked at the other deployment stuff we had on this machine: Windows Deployment Services was installed, and although all the install images were offline, the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. And fixed. Working. Yay! For anyone stumbling across this in the future, you may find that although your deployment images are located in the Deployment Workbench, the Win PE boot image you are launching from is located in the associated Windows Deployment Services images.
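    For reference, replacing a stale WDS boot image from the command line looks roughly like the sketch below. This is from memory rather than the exact command used here; the image name and WIM path are assumptions, so check the names shown in the WDS console and wdsutil /? on your server before running it:

        wdsutil /Replace-Image /Image:"Lite Touch Windows PE" /ImageType:Boot /Architecture:x86 /ReplacementImage /ImageFile:"D:\Boot\LiteTouchPE_x86.wim"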

    Read the article

  • Linux Kernel crash mutex_lock_slowpath "blocked for more than 120 seconds". What to do?

    - by Roddick
    I have an out-of-the-box Debian Lenny with the non-custom kernel 2.6.26-2-amd64. It's a brand new server that is used to 5% of its potential, CPU- and disk-wise, meaning it's probably not crashing because of overload. Every few days it freezes with hundreds of these messages in the console log:

    : [284847.828428] INFO: task apache2:12473 blocked for more than 120 seconds.
    : [284847.868468] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    : [284847.912759] apache2 D ffff8101bc6b7ab0 0 12473 14358
    : [284847.912763] ffff810160d5bc50 0000000000000082 ffff8101c0002e40 0000000000000000
    : [284847.912766] ffff8101a7c42950 ffff810327d92810 ffff8101a7c42bd8 0000000400000044
    : [284847.912770] ffff8101c0002e40 00000000000612d0 0000000000000000 00000040000612d0
    : [284847.912773] Call Trace:
    : [284847.912786] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
    : [284847.912790] [<ffffffff80429972>] mutex_lock+0xa/0xb
    : [284847.912794] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
    : [284847.912800] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
    : [284847.912805] [<ffffffff80295844>] kmem_getpages+0x96/0x15f
    : [284847.912808] [<ffffffff80295fb7>] ____cache_alloc_node+0x6d/0x106
    : [284847.912814] [<ffffffff802a4756>] path_walk+0x46/0x8b
    : [284847.912819] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
    : [284847.912822] [<ffffffff802a3879>] getname+0x140/0x1a7
    : [284847.912827] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
    : [284847.912831] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
    : [284847.912840] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
    : [284847.912848] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

    Almost all traces have __mutex_lock_slowpath at the top level. Only some have a different trace:

    : [284847.737386] INFO: task apache2:12472 blocked for more than 120 seconds.
    : [284847.777551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    : [284847.824881] apache2 D ffff8101bc6b7ab0 0 12472 14358
    : [284847.824886] ffff8101b9cc1c50 0000000000000086 ffffffffa0131e0a 0000000000000002
    : [284847.824889] ffff8102e7454300 ffff810324c6cad0 ffff8102e7454588 0000000000000000
    : [284847.824893] 0000000000000001 0000000000000296 0000000000000003 ffff8101b9cc1c58
    : [284847.824896] Call Trace:
    : [284847.828403] [<ffffffffa0131e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
    : [284847.828412] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
    : [284847.828418] [<ffffffff80429972>] mutex_lock+0xa/0xb
    : [284847.828421] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
    : [284847.828427] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
    : [284847.828428] [<ffffffff80271296>] find_lock_page+0x1f/0x8a
    : [284847.828428] [<ffffffff80273182>] filemap_fault+0x1c2/0x33c
    : [284847.828428] [<ffffffff802a4756>] path_walk+0x46/0x8b
    : [284847.828428] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
    : [284847.828428] [<ffffffff802a3879>] getname+0x140/0x1a7
    : [284847.828428] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
    : [284847.828428] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
    : [284847.828428] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
    : [284847.828428] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

    kernel: [1912668.466347] INFO: task apache2:17984 blocked for more than 120 seconds.
    [1912668.507035] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    : [1912668.555165] apache2 D ffff8101c5637ba0 0 17984 17282
    : [1912668.596752] ffff810166a7dd30 0000000000000086 0000000000000000 ffff810166a7dcd8
    : [1912668.643341] ffff8101c563c880 ffff81024505f000 0000000000000002 ffff810166a7dd68
    : [1912668.699566] 0000000000000086 00000000000cb1a0 0000000000000000 ffff81017f344d60
    : [1912668.744773] Call Trace:
    : [1912668.761754] [<ffffffff8022a3ed>] pick_next_task_fair+0x6e/0x7a
    : [1912668.829311] [<ffffffff802be0e2>] bio_alloc_bioset+0x89/0xd9
    : [1912668.861930] [<ffffffff8024ac3a>] getnstimeofday+0x39/0x98
    : [1912668.897005] [<ffffffff802710f6>] sync_page+0x0/0x41
    : [1912668.927868] [<ffffffff80429487>] io_schedule+0x5c/0x9e
    : [1912668.960286] [<ffffffff80271132>] sync_page+0x3c/0x41
    : [1912668.991756] [<ffffffff804295fa>] __wait_on_bit_lock+0x36/0x66
    : [1912669.031757] [<ffffffff802710e3>] __lock_page+0x5e/0x64
    : [1912669.064191] [<ffffffff802461d3>] wake_bit_function+0x0/0x23
    : [1912669.100100] [<ffffffff80281bc5>] handle_mm_fault+0x5e4/0x8de
    : [1912669.134531] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
    : [1912669.174623] [<ffffffff802aa108>] fcntl_setlk+0x1cf/0x291
    : [1912669.210623] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
    : [1912669.246923] [<ffffffff802a677f>] sys_fcntl+0x280/0x2f7

    After googling for "mutex_lock_slowpath" I can only find kernel mailing list discussions saying this issue was introduced in some commit, without reference to a version; the discussions are as recent as Jan 25, 2011. The kernel I am using is from Debian Lenny, a year old. What should I do? Is this bug even fixed in the kernel? If it's such an obvious bug, why does it happen so rarely? Should I download the latest kernel from kernel.org and upgrade? Should I use Debian backports to install a newer "approved" kernel? Am I missing something? What should I do?
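    Before chasing kernel patches, it is worth ruling out writeback stalls, since the second trace is sleeping in io_schedule/sync_page. A diagnostic sketch (these sysctls exist on 2.6.26; the values are illustrative assumptions, not tuned recommendations):

        # how long the watchdog waits before printing the warning
        cat /proc/sys/kernel/hung_task_timeout_secs
        # current writeback thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio
        # flush dirty pages earlier so writeback happens in smaller bursts
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10

    If the 120-second warnings stop correlating with heavy disk writes after this, the box is stalling on I/O rather than hitting a genuine mutex bug, and a kernel upgrade is unlikely to change anything.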

    Read the article

  • Need help configuring my Tomcat server

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
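    One thing worth checking before anything else: the active <Host> above has no appBase attribute, so after the reinstall Tomcat falls back to its default (webapps, relative to CATALINA_BASE), and if that directory is missing or empty, every request can 404. A minimal sketch of the Host block with the default made explicit (the paths are the ones from the question; the debug attribute was dropped in later Tomcat versions and can be removed):

        <Host name="www.rebootradio.nu" appBase="webapps" unpackWARs="true" autoDeploy="true">
          <Alias>rebootradio.nu</Alias>
          <Context path="" docBase="D:/services/http/rebootradio.nu" reloadable="true"/>
        </Host>

    Newer Tomcats also prefer per-host context files (conf/Catalina/www.rebootradio.nu/ROOT.xml) over Context elements in server.xml, which is another upgrade-related difference to rule out.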

    Read the article

  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago, I updated Nginx to the latest version (the admin told me that it is safe to do). But after that, my site serves only html content: no CSS, images, or JS. If I try to open some image I get the message "Welcome to Nginx" (same thing if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory as they used to be before setting up static files. I was googling for some solutions and tried to change something in /etc/nginx/, but no luck. I feel that this is some minor configuration problem. Any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content:

    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;

    events {
        worker_connections 768;
        # multi_accept on;
    }

    http {
        ##
        # Basic Settings
        ##
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        ##
        # Logging Settings
        ##
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        ##
        # Gzip Settings
        ##
        gzip on;
        gzip_disable "msie6";
        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##
        #include /etc/nginx/naxsi_core.rules;
        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##
        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;
        ##
        # Virtual Host Configs
        ##
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

    Here is the /etc/nginx/sites-enabled/default file content:

    server {
        #listen 80; ## listen for ipv4; this line is default and implied
        #listen [::]:80 default ipv6only=on; ## listen for ipv6
        root /usr/share/nginx/www;
        index index.html index.htm;
        # Make site accessible from http://localhost/
        server_name localhost;
        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            try_files $uri $uri/ /index.html;
            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }
        location /doc/ {
            alias /usr/share/doc/;
            autoindex on;
            allow 127.0.0.1;
            deny all;
        }
        # Only for nginx-naxsi : process denied requests
        #location /RequestDenied {
        #    # For example, return an error code
        #    #return 418;
        #}
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root /usr/share/nginx/www;
        #}
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        #    #
        #    # With php5-cgi alone:
        #    fastcgi_pass 127.0.0.1:9000;
        #    # With php5-fpm:
        #    fastcgi_pass unix:/var/run/php5-fpm.sock;
        #    fastcgi_index index.php;
        #    include fastcgi_params;
        #}
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;
    #    root html;
    #    index index.html index.htm;
    #
    #    location / {
    #        try_files $uri $uri/ /index.html;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen 443;
    #    server_name localhost;
    #
    #    root html;
    #    index index.html index.htm;
    #
    #    ssl on;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #
    #    ssl_session_timeout 5m;
    #
    #    ssl_protocols SSLv3 TLSv1;
    #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    #    ssl_prefer_server_ciphers on;
    #
    #    location / {
    #        try_files $uri $uri/ /index.html;
    #    }
    #}
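    Since no server block above matches the real host names, every request (including ones for the static. subdomain) falls through to this catch-all and is served from /usr/share/nginx/www, which is exactly the "Welcome to nginx" page being described. A minimal sketch of the missing vhost; the names and root here are placeholders, so point root at the directory that actually holds the site:

        server {
            listen 80;
            server_name mysitedomain.com www.mysitedomain.com static.mysitedomain.com;
            root /var/www/mysitedomain;   # assumption: the docroot that worked before the update
            index index.html index.htm;
        }

    Drop it into /etc/nginx/sites-enabled/, then run nginx -t and reload.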

    Read the article

  • nginx returning authentication in IE

    - by James MacLeod
    I am having a few issues with an nginx server. I have a site set up that keeps requesting authentication when accessed from IE, but in Firefox and Safari the site is fine, with no request for authentication. Reading around the web I can see that gzip might be causing errors, but the other sites are working without issue. Here is the config:

    user sysadmin sysadmin;
    worker_processes 8;
    error_log logs/error.log debug;

    events {
        worker_connections 1024;
    }

    http {
        passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.9;
        passenger_ruby /usr/bin/ruby1.8;
        include mime.types;
        default_type application/octet-stream;
        client_header_timeout 3m;
        client_body_timeout 3m;
        client_max_body_size 5m;
        send_timeout 3m;
        client_header_buffer_size 1k;
        large_client_header_buffers 4 4k;
        gzip on;
        gzip_min_length 1100;
        gzip_buffers 4 8k;
        gzip_types text/plain;
        output_buffers 1 32k;
        postpone_output 1460;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 75 20;

        server {
            listen 80;
            server_name .reg-power.com .reg-power.co.uk .reg-power.eu .reg-power.eu.com .reg-power.net .reg-power.org .reg-power.org.uk .reg-power.uk.com .regegen.eu .regpower.co.uk .regpower.eu .regpower.eu.com .regpower.net .regpower.org .regpower.org.uk .regpower.uk.com .renegen.com .renegen.eu .renewableenergygeneration.co.uk .renewableenergygeneration.com reg.rails1.flowhost.co.uk;
            root /home/sysadmin/reg/current/public;
            passenger_enabled on;
            rails_env production;
            index index.html;
        }

        server {
            listen 80;
            server_name media.reg-power.com;
            root /home/sysadmin/admin/current/public;
            index index.html;
        }

        server {
            listen 80;
            server_name admin.reg-power.com admin.rails1.flowhost.co.uk;
            root /home/sysadmin/admin/current/public;
            passenger_enabled on;
            rails_env production;
            index index.html;
        }

        server {
            listen 80;
            server_name .livingfuels.co.uk livingfuels.rails1.flowhost.co.uk;
            root /home/sysadmin/livingfuels/current/public;
            passenger_enabled on;
            rails_env production;
            index index.html;
        }

        server {
            listen 80;
            server_name .regbiopower.com .regbiopower.co.uk regbiopower.rails1.flowhost.co.uk;
            root /home/sysadmin/regbiopower/current/public;
            passenger_enabled on;
            rails_env production;
            index index.html;
        }

        server {
            listen 80;
            server_name .clpwindprojects.co.uk clp.rails1.flowhost.co.uk;
            access_log /home/sysadmin/clp/logs/access.log;
            location / {
                root /home/sysadmin/clp;
                index index.php;
                if (-f $request_filename) {
                    expires 30d;
                    break;
                }
                if (!-e $request_filename) {
                    rewrite ^(.+)$ /index.php?q=$1 last;
                }
            }
            location ~ .php$ {
                fastcgi_pass 127.0.0.1:49232; # this must point to the socket spawn_fcgi is running on.
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/sysadmin/clp$fastcgi_script_name; # same path as above
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                fastcgi_param REQUEST_URI $request_uri;
                fastcgi_param DOCUMENT_URI $document_uri;
                fastcgi_param DOCUMENT_ROOT /home/sysadmin/clp;
                fastcgi_param SERVER_PROTOCOL $server_protocol;
                fastcgi_param GATEWAY_INTERFACE CGI/1.1;
                fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
                fastcgi_param REMOTE_ADDR $remote_addr;
                fastcgi_param REMOTE_PORT $remote_port;
                fastcgi_param SERVER_ADDR $server_addr;
                fastcgi_param SERVER_PORT $server_port;
                fastcgi_param SERVER_NAME $server_name;
            }
        }
    }

    As you can see, there is no reference to HTTP authentication anywhere in the config.
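    A quick way to see what IE is actually being challenged with (a diagnostic sketch; the host is one of the vhosts above and the User-Agent string merely impersonates IE):

        curl -v -A "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)" http://www.reg-power.com/ 2>&1 | grep -i -E 'HTTP/|www-authenticate'

    If a 401 with a WWW-Authenticate header appears only with the IE User-Agent, the challenge is coming from the Rails application behind Passenger rather than from nginx, since the config above never enables auth_basic.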

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service, for which I receive the following error message:

    root@service1 ~ # -> /etc/init.d/puppetmaster start
    Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]

    Apparently there was a known issue with this scenario, as outlined in this bug report; however, the bug report states the issue was resolved in selinux-policy-3.9.16-29.fc15, while the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...

    ----- Update -----

    Here is the puppet.conf:

    [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet
        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet
        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl

    [master]
        certname=puppetmaster.ownij.lan
        dns_alt_names=puppetmaster.ownij.lan

    [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt
        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig
        server=puppetmaster.ownij.lan

    And here are the denials per the audit log:

    type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)

    And the audit log if I pass it through audit2allow:

    root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd

    module puppetmasterd 1.0;

    require {
        type home_root_t;
        type puppetmaster_t;
        type puppet_etc_t;
        type puppet_var_run_t;
        type httpd_sys_content_t;
        class lnk_file { relabelfrom relabelto };
        class file { relabelfrom read getattr open };
        class dir { write read search getattr setattr };
    }

    #============= puppetmaster_t ==============
    allow puppetmaster_t home_root_t:dir { search getattr };
    allow puppetmaster_t httpd_sys_content_t:dir read;
    allow puppetmaster_t httpd_sys_content_t:file { read getattr open };
    #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types:
    # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t
    allow puppetmaster_t puppet_etc_t:dir { write setattr };
    allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto };
    allow puppetmaster_t puppet_var_run_t:file relabelfrom;
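    For completeness, the standard workflow for turning those denials into a loadable local module, keeping SELinux enforcing (restorecon first, because a mislabelled /etc/puppet is the cheaper explanation):

        # restore the expected SELinux contexts on the puppet tree
        restorecon -Rv /etc/puppet
        # if denials persist, compile and load a local policy module
        grep puppetmasterd /var/log/audit/audit.log | audit2allow -M puppetmasterd
        semodule -i puppetmasterd.pp

    audit2allow -M writes puppetmasterd.te and packages puppetmasterd.pp; semodule -i loads it, granting only the access the audit log actually showed.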

    Read the article

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal Commons 6.x-2.4 from a local LAMP stack to a production webserver. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver admin. For the long history of looking for fixes and solutions, see for example this thread, and, even more interesting, this very long and impressive thread here. All the troubles I run into have to do with file ownership and permissions. This is my current setup (note: this was just a quick hacked installation, quick and dirty); my real interest is in the general options I have for porting a Drupal site from one Linux box to another.

    linux-vi17:/srv/www/htdocs/com624 # ls -l
    insgesamt 224
    -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt
    -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt
    -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php
    drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes
    -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php
    -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt
    -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt
    -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php
    -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt
    -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt
    -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt
    drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc
    drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules
    drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles
    -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt
    drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts
    drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites
    drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes
    -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php
    -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt
    -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php
    linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/):

    core-rsync: Rsync the Drupal tree to/from another server using ssh.

    Examples:
      drush rsync @dev @stage             Rsync Drupal root from dev to stage (one of which must be local).
      drush rsync ./ @stage:%files/img    Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.

    Arguments:
      source         May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
      destination    May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

    Options:
      --mode                   The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
      --RSYNC-FLAG             Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
      --exclude-conf           Excludes settings.php from being rsynced. Default.
      --include-conf           Allow settings.php to be rsynced.
      --exclude-files          Exclude the files directory.
      --exclude-sites          Exclude all directories in "sites/" except for "sites/all".
      --exclude-other-sites    Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
      --exclude-paths          List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
      --include-paths          List of paths to include, separated by : (Unix-based systems) or ; (Windows).
    Topics:
      docs-aliases    Site aliases overview with examples

    Aliases: rsync
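    On the ownership question itself, a minimal sketch of a one-shot copy that keeps permissions but maps everything to the web user on the target. The host name and user/group names are assumptions, and --chown needs rsync 3.1+ on both ends:

        rsync -az --chown=wwwrun:www /srv/www/htdocs/com624/ prod:/srv/www/htdocs/com624/
        # on older rsync, fix ownership after the copy instead:
        # ssh prod 'chown -R wwwrun:www /srv/www/htdocs/com624'

    A common compromise with strict admins is to keep the code owned by your deploy user and hand only sites/default/files (plus settings.php where required) to the web server account.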

    Read the article

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing that is seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb & 1 x 256Mb installed as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors - it has a NEC uPD720102 chipset. It has a DVD+/-RW running as master on IDE1 and a 1.44Mb 3.5" floppy drive connected to the floppy connector. It has an 80Gb Western Digital hard drive running as master on IDE0. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if this proves to be helpful. The motherboard is running a version of Award BIOS for which I don't have the version number to hand but can again post this later if it would be helpful. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover. Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. through the BIOS in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. 
    It didn't have any noticeable effect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors; is that right? I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work?

    Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?

    Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Edit: Updated the USB 2.0 info with reference to the actual card - http://www.xpcgear.com/lpnec4u.html

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
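    For the narrower goal of sending only ports 80 and 443 through the tunnel, the usual recipe is to stop OpenVPN from installing the pushed default route and then policy-route web traffic by firewall mark. A sketch under those assumptions (route-nopull is a standard OpenVPN client directive; the table number 100 is arbitrary, and the gateway-discovery line assumes a tun interface with a peer address):

        # in client.cfg, ignore routes pushed by the HMA server:
        #   route-nopull
        # after the tunnel comes up:
        GW=$(ip -4 addr show tun0 | awk '/peer/ {print $4}' | cut -d/ -f1)
        ip route add default via "$GW" dev tun0 table 100
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        ip rule add fwmark 1 table 100
        ip route flush cache

    SSH, MySQL, and everything else then keeps using the normal default route over eth0, which also stops the instance from dropping its SSH sessions when the VPN connects.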

    Read the article

  • Windows 2012 - WDS unattend Partition

    - by joe
    I'm trying to install Windows 2012 via Windows 2012 WDS. The installer displays the following error message: the partition selected for the installation (1) does not exist on disk 0. Make sure the unattend answer file's imageselection \installimage setting references a valid partition on this computer, and then restart the installation. the unattend file (created by the "Create Client Unattend" dialog) <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="windowsPE"> <component name="Microsoft-Windows-Setup" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="x86"> <WindowsDeploymentServices> <Login> <WillShowUI>OnError</WillShowUI> <Credentials> <Username>administrator</Username> <Domain>test</Domain> <Password>xxxx</Password> </Credentials> </Login> <WillWipeDisk>true</WillWipeDisk> <DiskConfiguration> <WillShowUI>OnError</WillShowUI> <Disk> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> <CreatePartitions> <CreatePartition> <Order>1</Order> <Type>Primary</Type> <Extend>true</Extend> </CreatePartition> </CreatePartitions> </Disk> </DiskConfiguration> <ImageSelection> <WillShowUI>OnError</WillShowUI> <InstallImage> <ImageGroup>ImageGroup1</ImageGroup> <ImageName>Windows Server 2012 SERVERDATACENTER</ImageName> <Filename>install-(4).wim</Filename> </InstallImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>1</PartitionID> </InstallTo> </ImageSelection> </WindowsDeploymentServices> </component> <component name="Microsoft-Windows-International-Core-WinPE" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="x86"> <SetupUILanguage> <UILanguage>en-US</UILanguage> </SetupUILanguage> <InputLocale>en-US</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UserLocale>en-US</UserLocale> </component> <component name="Microsoft-Windows-Setup" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64"> <WindowsDeploymentServices> <Login> <WillShowUI>OnError</WillShowUI> <Credentials> <Username>administrator</Username> <Domain>test</Domain> <Password>xxxxx</Password> </Credentials> </Login> <ImageSelection> <WillShowUI>OnError</WillShowUI> <InstallImage> <ImageGroup>ImageGroup1</ImageGroup> <ImageName>Windows Server 2012 SERVERDATACENTER</ImageName> <Filename>install-(4).wim</Filename> </InstallImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>1</PartitionID> </InstallTo> </ImageSelection> </WindowsDeploymentServices> </component> <component name="Microsoft-Windows-International-Core-WinPE" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64"> <SetupUILanguage> <UILanguage>en-US</UILanguage> </SetupUILanguage> <InputLocale>en-US</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UserLocale>en-US</UserLocale> </component> </settings> </unattend> Any idea why it is not working? thanks
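    One thing that stands out: the DiskConfiguration creates partition 1 but never formats it, and setup can refuse to validate an InstallTo target that points at an unformatted partition. A sketch of the missing piece per the Windows Setup unattend schema (merge it into the existing <Disk> element alongside CreatePartitions; the label is an arbitrary choice):

        <ModifyPartitions>
          <ModifyPartition>
            <Order>1</Order>
            <PartitionID>1</PartitionID>
            <Format>NTFS</Format>
            <Label>System</Label>
            <Active>true</Active>
          </ModifyPartition>
        </ModifyPartitions>

    Note also that the x86 settings pass carries a full DiskConfiguration while the amd64 pass has none; whichever architecture the client actually boots needs the disk section.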

    Read the article

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My Nginx conf has three different redirect loops, haven't been able to get any of the three to work right. The three problem areas are: Redirecting memcache directory to SSL Redirecting accounts directory to SSL Redirecting SSL to www if non-www nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log notice; sendfile on; #tcp_nopush on; keepalive_timeout 65; proxy_set_header X-Url-Scheme $scheme; #gzip on; rewrite_log on; include /etc/nginx/conf.d/*.conf; } conf.d/default.conf: server { listen 80; server_name <redacted>.net; rewrite ^(.*) http://www.<redacted>.net$1; } server { listen 80; server_name www.<redacted>.net; set_real_ip_from 192.168.30.4; set_real_ip_from 192.168.30.5; set_real_ip_from 192.168.30.10; real_ip_header X-Forwarded-For; #charset koi8-r; access_log /var/log/nginx/host.access.log main; root /var/www/html; index index.php index.html index.htm; location =/memcache { rewrite ^/(.*)$ https://$server_name$request_uri? permanent; } location /accounts { rewrite ^/(.*)$ https://$server_name$request_uri? permanent; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; try_files $uri = 404; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } conf.d/ssl.conf: # HTTPS server # server { listen 443; server_name <redacted>.net; rewrite ^(.*) https://www.<redacted>.net$1; } server { listen 443 default_server ssl; server_name www.<redacted>.net; set_real_ip_from 192.168.30.4; set_real_ip_from 192.168.30.5; set_real_ip_from 192.168.30.10; real_ip_header X-Forwarded-For; proxy_set_header X-Forwarded_Proto https; proxy_set_header Host $host; proxy_redirect off; proxy_max_temp_file_size 0; proxy_set_header X-Forwarded-Ssl on; set $https_enabled on; ssl_certificate <redacted>.crt; ssl_certificate_key <redacted>.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; root /var/www/html; index index.php index.html index.htm; location /memcache { auth_basic "Restricted"; auth_basic_user_file $document_root/memcache/.htpasswd; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS on; include /etc/nginx/fastcgi_params; try_files $uri = 404; } }
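    Given the set_real_ip_from lines, there is presumably a proxy or load balancer in front of these vhosts; if that device terminates SSL and forwards everything to port 80, requests for /accounts keep arriving on the plain-HTTP server block even after the redirect, which produces exactly this kind of loop. A sketch of a loop-proof redirect keyed on the forwarded scheme instead of the local port (assumption: the upstream sets X-Forwarded-Proto):

        location /accounts {
            if ($http_x_forwarded_proto != "https") {
                return 301 https://$host$request_uri;
            }
            # normal handling for /accounts goes here
        }

    The same pattern applies to the /memcache location and to the SSL-to-www redirect.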

    Read the article

  • How to decrypt an encrypted Apple iTunes iPhone backup?

    - by afit
    I've been asked by a number of unfortunate iPhone users to help them restore data from their iTunes backups. This is easy when they are unencrypted, but not when they are encrypted, whether or not the password is known. As such, I'm trying to figure out the encryption scheme used on mddata and mdinfo files when encrypted. I have no problems reading these files otherwise, and have built some robust C# libraries for doing so. (If you're able to help, I don't care which language you use. It's the principle I'm after here!)

    The Apple "iPhone OS Enterprise Deployment Guide" states that "Device backups can be stored in encrypted format by selecting the Encrypt iPhone Backup option in the device summary pane of iTunes. Files are encrypted using AES128 with a 256-bit key. The key is stored securely in the iPhone keychain." That's a pretty good clue, and there's some good info here on Stack Overflow on iPhone AES/Rijndael interoperability suggesting a key size of 128 bits and CBC mode may be used.

    Aside from any other obfuscation, a key and initialisation vector (IV)/salt are required. One might assume that the key is a manipulation of the "backup password" that users are prompted to enter by iTunes and passed to "AppleMobileBackup.exe", padded in a fashion dictated by CBC. However, given the reference to the iPhone keychain, I wonder whether the "backup password" might not be used as a password on an X509 certificate or symmetric private key, and that the certificate or private key itself might be used as the key. (AES and the iTunes encrypt/decrypt process is symmetric.)

    The IV is another matter, and it could be a few things. Perhaps it's one of the keys hard-coded into iTunes, or into the devices themselves. Although Apple's comment above suggests the key is present on the device's keychain, I think this isn't that important. One can restore an encrypted backup to a different device, which suggests all information relevant to the decryption is present in the backup and iTunes configuration, and that anything solely on the device is irrelevant and replaceable in this context.

    So where might the key be? I've listed paths below from a Windows machine, but it's much of a muchness whichever OS we use. The "\appdata\Roaming\Apple Computer\iTunes\itunesprefs.xml" file contains a PList with a "Keychain" dict entry in it. The "\programdata\apple\Lockdown\09037027da8f4bdefdea97d706703ca034c88bab.plist" file contains a PList with "DeviceCertificate", "HostCertificate", and "RootCertificate", all of which appear to be valid X509 certs. The same file also appears to contain asymmetric keys "RootPrivateKey" and "HostPrivateKey" (my reading suggests these might be PKCS #7-enveloped). Also, within each backup there are "AuthSignature" and "AuthData" values in the Manifest.plist file, although these appear to be rotated as each file gets incrementally backed up, suggesting they're not that useful as a key, unless something really quite involved is being done.

    There's a lot of misleading stuff out there suggesting getting data from encrypted backups is easy. It's not, and to my knowledge it hasn't been done. Bypassing or disabling the backup encryption is another matter entirely, and is not what I'm looking to do. This isn't about hacking apart the iPhone or anything like that. All I'm after here is a means to extract data (photos, contacts, etc.) from encrypted iTunes backups as I can from unencrypted ones.

    I've tried all sorts of permutations with the information I've put down above but got nowhere. I'd appreciate any thoughts or techniques I might have missed.
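    For anyone wanting to experiment, below is roughly the shape of the candidate-key testing implied above, sketched in C# (the asker's language) under the question's own guesses: AES in CBC mode, key size unknown (128 or 256 bits), IV unknown. It decrypts a block-aligned prefix of an mddata file with a guessed key/IV pair so the output can be sniffed for known magic numbers (JPEG's FF D8, SQLite's "SQLite format 3", and so on). Nothing here is a confirmed part of Apple's scheme; it is only a harness for testing guesses quickly.

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class BackupKeyProbe
        {
            // Decrypt the first block-aligned chunk of an encrypted mddata file
            // with a candidate key/IV pair and return the raw plaintext so the
            // caller can check it for plausible file headers.
            public static byte[] TryDecrypt(string mddataPath, byte[] candidateKey, byte[] candidateIv)
            {
                byte[] cipherText = File.ReadAllBytes(mddataPath);
                int alignedLength = cipherText.Length - (cipherText.Length % 16); // AES block size

                using (var aes = Aes.Create())
                {
                    aes.Mode = CipherMode.CBC;       // guessed from the AES/Rijndael interop posts
                    aes.Padding = PaddingMode.None;  // padding scheme unknown, so inspect raw bytes
                    aes.Key = candidateKey;          // 16 bytes for AES-128, 32 for AES-256
                    aes.IV = candidateIv;            // 16-byte IV; its origin is the open question

                    using (var decryptor = aes.CreateDecryptor())
                    {
                        return decryptor.TransformFinalBlock(cipherText, 0, alignedLength);
                    }
                }
            }
        }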

    Read the article

< Previous Page | 807 808 809 810 811 812 813 814 815 816 817 818  | Next Page >