Search Results

Search found 6069 results on 243 pages for 'admin'.

Page 43/243

  • Installing Glassfish 3.1 on Ubuntu 10.10 Server

    - by andand
    I've used the directions here to successfully install Glassfish 3.0.1 on a virtualized (VirtualBox and VMware) Ubuntu 10.10 Server instance without any real difficulty that wasn't resolved by more closely following the directions. However, when I try applying them to Glassfish 3.1, I seem to keep getting stuck at section 6, "Security configuration before first startup". In particular, there are some differences I noted:

    1) There are two keys in the default keystore. The 's1as' key is still there, but another named 'glassfish-instance' is also there. When I saw this, I deleted and recreated them both, along with a 'myAlias' key which I was going to use where needed.

    2) When turning security on, it seems like part of the server thinks it's on, but other parts don't. For instance:

        $ /home/glassfish/bin/asadmin set server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command set executed successfully.

        $ /home/glassfish/bin/asadmin get server-config.network-config.protocols.protocol.admin-listener.security-enabled
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command get executed successfully.

        $ /home/glassfish/bin/asadmin --secure list-jvm-options
        It appears that server [localhost:4848] does not accept secure connections. Retry with --secure=false.
        javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Command list-jvm-options failed.

        $ /home/glassfish/bin/asadmin --secure=false list-jvm-options
        -XX:MaxPermSize=192m
        -client
        -Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder
        -XX:+UnlockDiagnosticVMOptions
        -Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed
        -Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
        -Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
        -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as
        -Xmx512m
        -Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
        -Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
        -Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext
        -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
        -DANTLR_USE_DIRECT_CLASS_LOADING=true
        -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
        -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command
        -Dosgi.shell.telnet.port=6666
        -Dosgi.shell.telnet.maxconn=1
        -Dosgi.shell.telnet.ip=127.0.0.1
        -Dgosh.args=--nointeractive
        -Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/
        -Dfelix.fileinstall.poll=5000
        -Dfelix.fileinstall.log.level=2
        -Dfelix.fileinstall.bundles.new.start=true
        -Dfelix.fileinstall.bundles.startTransient=true
        -Dfelix.fileinstall.disableConfigSave=false
        -XX:NewRatio=2
        Command list-jvm-options executed successfully.

    Also, the admin console responds only to http (not https) requests. Thoughts?
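
    A hedged aside (the subcommands below are GlassFish 3.1's own asadmin commands; the domain name domain1 is the default and an assumption, not from the post): 3.1 changed how the DAS is switched over to SSL, and the extra 'glassfish-instance' key appears to be expected there, so flipping the listener's security-enabled flag by hand as in the 3.0.1 directions is typically no longer enough on its own. Something along these lines is usually the missing step:

        /home/glassfish/bin/asadmin enable-secure-admin
        /home/glassfish/bin/asadmin restart-domain domain1
        # the admin console should then answer on https://host:4848/ and refuse plain http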

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is called, it goes straight to the normal WordPress login. I installed everything from the command line in the following way:

        sudo apt-get install apache2-utils
        sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
        New password:
        Re-type new password:
        Adding password for user exampleuser

    My Nginx configuration file looks like this:

        server {
            listen 80;
            root /var/www;
            index index.php index.html index.htm;
            server_name example.com;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /bitmall/wp-admin/ {
                auth_basic "Restricted Section";
                auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
            }

            location ~ /\.ht {
                deny all;
            }

            error_page 404 /404.html;
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would appreciate your advice on this.
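
    A likely cause, with a hedged sketch (the directives are standard nginx, but the fix itself is an assumption, not from the original post): the regex location ~ \.php$ is selected ahead of the prefix location /bitmall/wp-admin/, so every PHP request under wp-admin bypasses auth_basic entirely. Nesting the PHP handler inside the protected prefix makes both the authentication and the FastCGI pass apply. Note also that wp-login.php lives in /bitmall/, not /bitmall/wp-admin/, so it needs its own protected location.

        location /bitmall/wp-admin/ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }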

    Read the article

  • rkhunter not using root's external email on Ubuntu

    - by Zen
    I have installed rkhunter on Ubuntu 10.04 LTS. When I test the rkhunter report, it doesn't send email to my external root email recipient. I can only get the email delivered by editing /etc/default/rkhunter and replacing the row REPORT_EMAIL="root" with the desired recipient, REPORT_EMAIL="[email protected]". These are my config file settings: /root/.forward contains [email protected], and /etc/aliases contains root: [email protected]. But it doesn't work with the root recipient. Any suggestion?
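
    A hedged check (the commands are standard, but whether this is the actual cause is an assumption): edits to /etc/aliases only take effect after the alias database is rebuilt, so it is worth rebuilding it and confirming that plain mail addressed to root really reaches the external address before re-running rkhunter with REPORT_EMAIL="root".

        sudo newaliases                                  # rebuild the alias database after editing /etc/aliases
        echo "alias test" | mail -s "alias test" root    # if mail(1) is installed, this should arrive externally
        sudo rkhunter --check --rwo                      # then re-run the report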

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of JRE while I cannot with another ?
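
    A couple of hedged checks from outside the Java code (the paths are examples, not from the post), to narrow down whether this is a trust-store difference between the two JREs or a handshake problem on the server side. It may also be worth noting that the slapd.conf above pins TLSCipherSuite to a single AES-256 suite; a stock JRE without the unlimited-strength JCE policy files does not offer AES-256 cipher suites, which alone could make a handshake fail this way.

        # compare what each JRE actually trusts (adjust the two paths to your installs):
        keytool -list -keystore $JAVA_6U17_HOME/jre/lib/security/cacerts -storepass changeit | grep -i natraj
        keytool -list -keystore $JAVA_6U14_HOME/jre/lib/security/cacerts -storepass changeit | grep -i natraj
        # and confirm slapd completes a plain TLSv1 handshake at all, independent of Java:
        openssl s_client -connect ldap.natraj.com:636 -tls1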

    Read the article

  • /etc/crontab or any user crontab is not being executed

    - by ian
    My server is CentOS 5. When I edit /etc/crontab, or edit any user's crontab (including root's) via the "crontab -e" command, it just adds "(system) RELOAD (/etc/crontab)" or "(admin) RELOAD (cron/admin)" to the log. There is no CMD entry in /var/log/cron. Sample entries in /var/log/cron:

        Aug 10 10:21:33 localhost crontab[31688]: (root) BEGIN EDIT (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) REPLACE (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) END EDIT (root)
        Aug 10 10:22:01 localhost crond[2688]: (root) RELOAD (cron/root)

    Result of "service crond status": crond (pid 1345) is running... The command "cat /var/log/messages | grep cron" does not give anything. Contents of /etc/cron.allow:

        admin
        root

    Contents of /etc/crontab:

        SHELL=/bin/bash
        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        MAILTO=root
        HOME=/
        # run-parts
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly
        * * * * * root run-parts /bin/date >> /data/date.txt

    Result of ps aux | grep cron:

        root 1345 0.0 0.1 5268 1204 ? Ss 11:43 0:00 crond

    Contents of admin's crontab:

        * * * * * /bin/date >> /data/date.txt

    Note that it's not only admin's crontab that isn't running. No cron jobs are running at all. Any ideas why?
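
    One visible problem, as a hedged aside (it does not by itself explain why every other job is dead): run-parts takes a directory, so the last /etc/crontab line as written can never produce a CMD entry. A sketch of the corrected form, assuming /data exists and is writable by root:

        # /etc/crontab takes a user field and a command; run-parts is only for directories:
        * * * * * root /bin/date >> /data/date.txt
        # after editing, make sure crond re-reads it and the system clock is what you expect:
        service crond restart
        date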

    Read the article

  • 500 error when deploying Rails application via Apache2 + Passenger

    - by user1633983
    I finally completed my own app, so the only work left is deploying it. I'm using Ubuntu 10.04 and apache2 (installed by apt-get), so I'm trying to deploy through Passenger. I installed the passenger gem like this:

        sudo gem install passenger
        rvmsudo passenger-install-apache2-module

    and I configured the Apache settings as the installation message says. I added the lines below in the middle of the /etc/apache2/apache2.conf file:

        LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
        PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
        PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby

    and I appended the lines below in the /etc/apache2/sites-available/default file:

        <VirtualHost *:80>
            ServerName localhost
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /home/admin/homepage/public
            <Directory /home/admin/homepage/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>

    But when I restart the Apache service and hit the address, a 500 error occurs. At first it was the same 500 error but served from Apache's error page; after I reinstalled libapache2-mod-passenger, the 500 error page changed to the one from Rails. Because it is Rails' 500 error page (which is located at public/500.html), I think the Passenger module is properly connected with Apache. What should I do to fix this problem? Do I need to configure something inside my app before deployment?
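
    A hedged diagnostic sketch (the paths are the stock Ubuntu/Rails locations and may differ). One thing worth ruling out first: installing the libapache2-mod-passenger package alongside the RVM-built module can leave Apache configured for two different Passenger builds, so check which module is actually enabled. Beyond that, the Rails 500 page means the request reaches the app, and the real exception will be in the logs rather than in the Apache config.

        grep -Ri passenger /etc/apache2/apache2.conf /etc/apache2/mods-enabled/   # is the module configured twice?
        tail -n 50 /var/log/apache2/error.log                                     # Passenger prints boot errors here
        tail -n 50 /home/admin/homepage/log/production.log                        # or here, once the app boots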

    Read the article

  • Caused by: java.net.SocketException: Software caused connection abort: socket write error

    - by jrishere
    I am running JSP on Oracle 11g, WebLogic 10.3.4. I have 2 managed servers and an Oracle admin server installed. I am encountering an error where, intermittently, the log files of the 2 managed servers and the admin server show java.net.SocketException: Software caused connection abort: socket write error. The application can run for 2 days without showing this error, or it can show up a few times in a day. The server load is similar every day. When this error is encountered, the server just stops accepting connections and the application cannot be accessed. Even if I try to access the application through localhost, I cannot reach the JSP pages and a 503 HTTP status is shown, but I am still able to access the static HTML pages. I also cannot access the Oracle 11g WebLogic admin console page. When I look at the admin server log, it shows that the managed servers are disconnected from the admin server and vice versa. Magically the application recovers on its own and becomes accessible again, or I need to restart the server, as restarting the application's service does not work. The FTP connections that the application holds are closed as well. I am able to ping and telnet to the server port. The event log doesn't seem to be leaving any information. We did run Wireshark to look at the packet traffic, and it seems that the application port is sending a RST, ACK packet to the load balancer. Any kind of help will be greatly appreciated. Should you need more info, feel free to ask me. Thanks in advance.

    Read the article

  • Email not working on testing site in Plesk before DNS switch

    - by Dilip Rajkumar
    I have to test my website before my DNS is switched to the new server. My new server runs Plesk. I changed my hosts file to point to the new server and tested the site, and the site is working fine. However, when I try to register, 2 emails are sent: one to the user and one to the admin. The admin email address is on the same domain as the site; for example, if my site name is test.com, the admin email is [email protected]. That email is not delivered to the admin. I believe the email is not sent because Plesk is consulting its own DNS instead of the global public DNS. Does anyone know how to make my site use public DNS when sending email in Plesk? If I set Google Public DNS (8.8.8.8) for the MX lookup, will it work? Please guide me. Thanks in advance.
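
    A hedged check (the domain test.com is the example from the question): the usual cause is that Plesk hosts a local DNS zone and a local mailbox for the domain, so mail for the admin address is looked up, and delivered, on the new box itself instead of going out to the real mail server. Comparing the local resolver's answer with a public one shows whether that is happening; if the answers differ, switching off the DNS zone or the local mail service for that domain in Plesk is the usual remedy.

        cat /etc/resolv.conf                 # which resolver the server itself asks
        dig +short MX test.com               # answer via the local resolver / Plesk's DNS
        dig +short MX test.com @8.8.8.8      # answer via Google Public DNS, for comparison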

    Read the article

  • Using Exim and Google Apps email as smarthost

    - by pferrel
    I have a server set up to use exim4 with Google Apps as my smarthost. But I get errors when the "to" address is not the one I use to authenticate to Google, and it seems to drop all return addresses that are not the one it uses to authenticate. Example: on the contact form of my server, a user sets [email protected] as their return address and uses the form to send a message. I get an email sent to the admin's address [email protected], but the return address is also now [email protected], so I have no idea of the return address the user set on the form. I get around this by putting a bad email address in the form's default so exim4 sends an error message to [email protected] with the user's email in the debug info. Clearly I either have it set up wrong or do not understand how smarthosts work (probably both).

    Read the article

  • How to make a "ghost" of an existing user in Lab Manager?

    - by clevi
    We have an automation process which undeploys specific configurations on Lab Manager every night; the process also redeploys them in the morning. The problem is that this process runs as admin, so all these configs are deployed by admin, which means they are sometimes deployed onto the "reserved" resource pool. I want to impersonate a user on Lab Manager so that the configuration is deployed by the user who owns the machine, and not by admin. Does anyone have any idea how to do so?

    Read the article

  • Rsync when run in cron doesn't work. Rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it and off it goes, runs and completes. Now I know that's working, I set out to fully automate it via cron. First off I create an SSH key pair by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the passphrase and then confirming it. I then copy the rsync-key.pub file across to the Linux server, place it in the rsync user's .ssh folder and rename it to authorized_keys: /home/philosophy/.ssh/authorized_keys. I then make sure that the authorized_keys file is chmod 600 and the .ssh folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    Then I give the shell file execute permissions and add the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    I check my crontab log file after the above command should have run, and I get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should. But when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time it's through the Mac Keychain rather than typing into the console window. With the Keychain I can set it to save the password so that it doesn't ask again, but I'm sure that when run from cron this password isn't picked up. This is where I assume rsync in cron is failing, because it needs a password to connect, but I thought the whole idea of making the SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks, Scott
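
    A hedged sketch of the likely missing step (the paths and hostnames are the ones from the question): the prompt is ssh asking for the key's passphrase, which the Keychain can supply in an interactive session but cron cannot. Stripping the passphrase from the key (or generating the backup key without one) and then testing the connection non-interactively usually settles it.

        # remove the passphrase from the existing key (it will prompt for the old one):
        ssh-keygen -p -N '' -f /Users/admin/Documents/Backup/rsync-key
        # or generate the backup key without a passphrase in the first place:
        ssh-keygen -t dsa -b 1024 -N '' -f /Users/admin/Documents/Backup/rsync-key
        # then prove it works with no prompt at all, the way cron will run it:
        ssh -i /Users/admin/Documents/Backup/rsync-key -o BatchMode=yes philosophy@example.com true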

    Read the article

  • Can't set ACL for AFS even though I'm in the right group

    - by Torandi
    I've got a dir that I've lost control over in an AFS system. According to the system administrators, my admin subgroup (dsekt:admin) has rlidwka on the dir. I'm a member of this group (and I can list the members of the group and see my nick there), yet I can't set the ACLs for the dir. The pts output:

        $ pts membership dsekt:admin
        Members of dsekt:admin (id: -6813) are:
        /.../
        taran

    And my klist:

        $ klist
        Credentials cache: FILE:/tmp/krb5cc_56782
        Principal: [email protected]

    Both dsekt:admin and the dir are on the NADA.KTH.SE node.
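
    A hedged sketch of the usual first check (the cell name nada.kth.se is an assumption inferred from the realm, and the directory path is a placeholder): a Kerberos ticket by itself is not an AFS token, so klist can look fine while fs still treats you as system:anyuser. Checking for a token, fetching one, and then retrying the ACL commands would rule that out.

        tokens                                          # should list an AFS token for the cell
        aklog nada.kth.se                               # obtain one from the Kerberos ticket if it is missing
        fs listacl /afs/nada.kth.se/path/to/the/dir     # confirm dsekt:admin really has rlidwka here
        fs setacl -dir /afs/nada.kth.se/path/to/the/dir -acl dsekt:admin all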

    Read the article

  • Can I split a spreadsheet into multiple files based on a column in Excel 2007?

    - by geofftnz
    Is there a way in Excel to split a large file into a series of smaller ones, based on the contents of a single column? E.g.: I have a file of sales data for all sales reps. I need to send them a file to make corrections and send back, but I don't want to send each of them the whole file (because I don't want them changing each other's data). The file looks something like this:

        salesdata.xls
        RepName  Customer  ContactEmail
        Adam     Cust1     [email protected]
        Adam     Cust2     [email protected]
        Bob      Cust3     [email protected]
        etc...

    Out of this I need:

        salesdata_Adam.xls
        RepName  Customer  ContactEmail
        Adam     Cust1     [email protected]
        Adam     Cust2     [email protected]

    and

        salesdata_Bob.xls
        Bob      Cust3     [email protected]

    Is there anything built into Excel 2007 to do this automatically, or should I break out the VBA?

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to setup a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards directory structure for virtual mail users, the author of the tutorial writes: Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. [email protected] will have a mail directory of /var/vmail/example.com/me. But when he gives instructions about configuring Postfix Admin, he suggests this to be contained by Postfix Admin's config.inc.php: // Mailboxes // If you want to store the mailboxes per domain set this to 'YES'. // Examples: // YES: /usr/local/virtual/domain.tld/[email protected] // NO: /usr/local/virtual/[email protected] $CONF['domain_path'] = 'NO'; Is there an inconsistency?
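
    For what it's worth, a hedged reading (the option names are Postfix Admin's own, from its config.inc.php; whether the tutorial intends them this way is an assumption): the prose describes a per-domain layout like /var/vmail/example.com/me, which corresponds to storing mailboxes per domain while leaving the domain out of the mailbox name, so the quoted 'NO' does look inconsistent with the earlier description. The matching settings would read roughly:

        // in Postfix Admin's config.inc.php, to get /var/vmail/example.com/me:
        $CONF['domain_path'] = 'YES';       // one folder per domain
        $CONF['domain_in_mailbox'] = 'NO';  // maildir named after the local part only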

    Read the article

  • Sometimes my URLs get masked with the IP address instead of the domain

    - by user64631
    I have a server with one A record that points to my IP address. I have nginx in front of gunicorn, which serves my Django application. For most of my pages, the URL in the address bar is always my domain name. However, if I go to mydomain.com/admin, the URL magically transforms into x.x.x.x/admin in my browser's address bar. I thought that was weird but ignored it, figuring it only happened for admin so it wasn't that big of a deal. Then I installed django-registration. When I go to mydomain.com/accounts/register, the URL is still mydomain.com/accounts/register in the address bar, but when I submit a form, the POST request goes to x.x.x.x/accounts/register, which creates a cross-domain error. So I decided it isn't isolated to the admin and I really need to fix what is going on. I have no idea what is happening and am completely lost.
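
    A hedged sketch (the upstream address 127.0.0.1:8000 is a placeholder, not from the post): this pattern typically shows up when the proxy block does not pass the original Host header through, so nginx hands the upstream's own address to gunicorn and every redirect Django issues (the admin's login redirect, the redirect after a form POST) is built from the IP. The usual shape of the fix in the server block is:

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }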

    Read the article

  • Unable to commit file through svn, server sent truncated HTTP response body

    - by Rocket3G
    I have my own VPS, on which I want to run a simple SVN + chiliproject setup. I have re-installed SVN, CHILI and the OS several times, and it always works for a couple of hours/days and then it just stops working. Well, everything works, except I can't upload any files. Committing directories seems to work just fine, but when I try to commit a file it breaks. I have an error log file, which gives me the following text when I try to commit something x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "OPTIONS /project HTTP/1.1" 200 149 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 346 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 401 345 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 201 262 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 236 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/vcc/default HTTP/1.1" 201 271 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PROPPATCH /project/!svn/wbl/c11d45ac-86b6-184a-ac5a-9a1105d64563/1 HTTP/1.1" 207 267 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/ver/1 HTTP/1.1" 201 271 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "HEAD /project/index.html HTTP/1.1" 404 - x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PUT /project/!svn/wrk/c11d45ac-86b6-184a-ac5a-9a1105d64563/index.html HTTP/1.1" 201 269 x.x.x.x - admin [19/Oct/2013:00:02:04 +0200] "DELETE /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 204 - So it seems that it PUTs the file (test.html) correctly, and somehow somewhere something is wrong (file permissions are alright, when I purposely stated that they are wrong, it gave me errors, which is expected, and they were about the file permissions being incorrect. The odd thing is that files won't get added, but directories are fine. I also have enough storage left on my machine. What I should note, perhaps, is that I use Ubuntu 12.04.3 with ruby 1.9.3, mysql 14.14 and I have it set up that Chiliproject handles the authentication and authorization for the project. It works, because I can commit directories and read it all correctly, though I can't upload files. Help would really be appreciated, as I don't know what on earth is going on with this 'truncated http response body'. I tried to read them with wireshark, but it basically gave me the same information. With regards, Ps. I have no clue what the delay between put and delete is, as it's a file of a mere 500 bytes, so it's uploaded in approximately a second. Pps. I copied this question from StackOverflow to this site, as I didn't know the existence of this site and another user suggested that I'd get more answers here, as it's basically a server fault.

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi All, there are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear: it can't read the file.

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx  6 me admin  204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me admin  204B Dec 31 20:49 .
        drwxr-xr-x  71 me me     2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me me     6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me admin  2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me admin  170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me admin  170B Jan  2 18:58 script

    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
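
    A hedged check (the paths are the ones from the log): read permission on the file is not enough; the worker's user also needs execute (search) permission on every directory in the path, and on OS X the per-user Documents folder is commonly mode 700. Testing the path as the worker's user, and loosening only the traversal bit on the parents, usually resolves this.

        sudo -u nobody stat /Users/me/Documents/workspace/mobile/index.html     # reproduce the permission error outside nginx
        chmod o+x /Users/me /Users/me/Documents /Users/me/Documents/workspace   # allow traversal only, not listing or reading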

    Read the article

  • Apache ProxyPass to Webmin

    - by Ricardo
    I have a problem with an apache2-to-Webmin redirect. My ProxyPass configuration is:

        ProxyRequests Off
        ProxyPreserveHost On
        SSLProxyEngine On
        ProxyPass /admin/webmin/ https://localhost:10000/
        ProxyHTMLURLMap https://localhost:10000 /admin/webmin
        <Location /admin/webmin/>
            ProxyHTMLExtended On
            SetOutputFilter proxy-html
            ProxyPassReverse https://localhost:10000/
            ProxyPassReverse https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/
            Order allow,deny
            Allow from all
        </Location>

    When I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/ there is no problem. But when I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com/admin/webmin, the page loses its CSS and after login it shows me the error: The requested URL /session_login.cgi was not found on this server. I think it is an error with my ProxyPass, but I don't know what it is.
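
    A hedged sketch (the option names come from Webmin's own /etc/webmin/config; treat them as something to verify for your Webmin version): the missing CSS and the /session_login.cgi 404 both come from Webmin generating links and redirects relative to its own root rather than the /admin/webmin prefix, and Webmin has settings intended for exactly this reverse-proxy situation.

        # in /etc/webmin/config:
        webprefix=/admin/webmin
        webprefixnoredir=1
        referers=xxxxxxxxxxxxxxxxxxxx.amazonaws.com
        # then restart Webmin:
        /etc/init.d/webmin restart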

    Read the article

  • UAC prompt won't allow non-administrator to access IIS 7 Manager.

    - by Triynko
    The IIS 7 Manager seems to throw up a UAC prompt requiring administrative elevation as soon as the shortcut is clicked. This makes it impossible for a non-admin to use it. When I first created the user account, I could open IIS 7 Manager, but could not connect. I temporarily made the user account an Admin, then opened IIS 7 Manager again. Now that I've removed user from Admin, every time I open IIS 7 Manager it throws up the UAC prompt, making it impossible to do anything with it as a non-Admin. Is this a bug? Is there a way to reset the settings for this user so that IIS 7 Manager is back in some kind of first-run state?

    Read the article

  • How to let mod_wsgi only handle certain URLs under Apache?

    - by Frederik
    I have a Django app that handles "/admin/" and "/myapp/". All the other requests should be handled by Apache. I've tried using LocationMatch but then I'd have to write a negative regex. I've tried WSGIScriptAlias with the /admin/ prefix but then the wsgi_handler receives the request with the /admin/ part cut off. Is there a cleaner way to make mod_wsgi only handle certain requests?
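
    A hedged sketch of one common pattern (the paths are placeholders, and it only works if the paths Apache should serve itself can be enumerated): instead of telling mod_wsgi which URLs to take, mount the application at the root and carve out the Apache-served locations with Alias plus SetHandler None, so those requests never reach the WSGI handler.

        WSGIScriptAlias / /srv/myproject/django.wsgi
        Alias /static/ /srv/www/static/
        <Location /static/>
            SetHandler None
        </Location>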

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like some clarification about the correct way to create limited users for access to my VPS, which is used as a web server with Nginx. I'm used to NOT installing FTP and allowing access via SFTP only. Is that OK for every setup? This is what I usually do to create a limited user called "admin" that should have SFTP access to the folder with the website data:

        mkdir -p /var/www/mysite.com/
        adduser admin
        adduser admin www-data
        chown -R root:root /var/www
        chmod -R 755 /var/www
        chmod -R 755 /var/www/mysite.com
        chown -R admin:www-data /var/www/mysite.com/

    It seems not to be the correct way; I always have permission problems when I upload files (for example with WordPress in general). I would like to create a user that works exactly like the one that providers give to their clients when they buy a hosting service (that is FTP, though I would prefer SFTP access). It is for personal use, but I think a limited user is a lot safer to use than "root" via SFTP.
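
    A hedged sketch of one common arrangement (the paths and names are the ones above; the exact modes depend on how PHP runs): keep the site owned by the SFTP user with the web server's group, make directories setgid so new uploads stay in that group, and grant group write access only where the application actually needs to write.

        chown -R admin:www-data /var/www/mysite.com
        find /var/www/mysite.com -type d -exec chmod 2755 {} \;   # setgid dirs: new files inherit the www-data group
        find /var/www/mysite.com -type f -exec chmod 0644 {} \;
        # let the web server write only where it must, e.g. WordPress uploads:
        chmod -R g+w /var/www/mysite.com/wp-content/uploads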

    Read the article

  • Openfire on Mac OS X: can't log in after setup

    - by Tom
    Hey all, I'm trying to set up Openfire (http://www.igniterealtime.org/projects/openfire/) on Mac OS X. The install goes well, and I can start the server and enter the admin console via its System Preferences pane. I run the setup, including specifying the password for the admin user. However, when I try to log into the admin console, I get the message "Login failed: make sure your username and password are correct and that you're an admin or moderator." What gives? I've tried to RTFM, but the documentation seems to be really sketchy. Nowhere is the setup process mentioned in the install docs.

    Read the article

  • IE Sessions in Terminal Server

    - by Jacob
    Currently we are using WADE middleware for our order processing operation. We have about 40 operators that use 1 terminal server to open IE 8 to access the WADE middleware. To me, it's random, but every now and then someone will come to me and tell me that IE has a "Page cannot be displayed" or "HTTP Error 500" error. I did a bit of testing on my local machine and I never get this error while doing normal operations. Although, when I open one session with username "test" and then login to the wade admin console as admin, I run into problems. I do not run into problems until I logout of the wade admin. Once I logout of the Wade admin, my "test" session says "page cannot be displayed". This makes me think the IE user sessions on the terminal service are cross talking. Does anyone have any possible settings I can change in IE or do you think this is an issue with the middleware? The Terminal Server is Windows 2003, btw.

    Read the article

  • Changing Administrator password on a Windows 2008 web server

    - by Nick
    I've just taken over the administration of a Windows 2008 web server from a previous employee on a temporary basis. I need to change the Admin password as soon as I can but I've noticed that quite a few of the services also run under this account. So: Is there a quick way to find out which services will be affected by me changing the password or is it a question of going down the list? It doesn't seem right to me that the Admin account is used in this manner; should I create a different account for these services, or is using the Admin a/c standard practice? I realize everyone's servers / networks are set up differently, but are there any other items I should be aware of when changing the Admin password? Thanks
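
    A hedged one-liner for the first question (wmic and findstr ship with Windows Server 2008; the filter string is an assumption about how the account name appears): listing every service together with its log-on account shows exactly which ones will break when the password changes. Services running as LocalSystem, LocalService or NetworkService are unaffected by an Administrator password change.

        wmic service get Name,StartName | findstr /i "administrator"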

    Read the article

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directives to enable some sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop, and that was working well (note that stuff such as myurl.com/css/mycss.css does not get redirected):

        RewriteEngine on
        RewriteCond ${REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to HTTPS, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work: h*tps://myurl.com/shop does not get redirected to h*tp://myurl.com/index.php/shop, and h*tp://myurl.com/admin.php does not get redirected to h*tps://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]

        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
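
    A hedged sketch of a corrected version (the domain is the placeholder from the question; verify the behaviour before relying on it): two details defeat the posted rules: %{REQUEST_URI} always starts with a leading slash, so a pattern anchored at ^admin\.php can never match it, and one substitution uses ${REQUEST_URI} where mod_rewrite expects %{REQUEST_URI}. Matching the filenames in the RewriteRule pattern itself, which in .htaccess context sees the path without the leading slash, sidesteps both.

        RewriteEngine on

        # send admin.php and login.php to HTTPS
        RewriteCond %{HTTPS} off
        RewriteRule ^(admin\.php|login\.php)$ https://myurl.com/$1 [R=301,L]

        # send everything else back to HTTP
        RewriteCond %{HTTPS} on
        RewriteRule !^(admin\.php|login\.php)$ http://myurl.com%{REQUEST_URI} [R=301,L]

        # pretty URLs as before
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]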

    Read the article
