Search Results

Search found 17845 results on 714 pages for 'python social auth'.


  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances, where one runs on 8080 and the other on 8081. We need to have HTTPS enabled for the 8081 instance. First we tried enabling HTTPS on the 8080 instance by generating the keystore and editing server.xml, and it worked. However, when we tried the same thing for 8081 it did not; note that we removed HTTPS from the 8080 instance before enabling it for 8081. The server.xml below was used for both the 8080 and the 8081 instance; the only difference was that the port was changed from 8080 to 8081 when enabling HTTPS for the 8081 instance. What am I doing wrong and what needs to be changed? NOTE: by "enabled for 8080" I mean that when you visit https:// URL:8484 you are actually visiting the 8080 port instance. However, when SSL is enabled for 8081 and I visit https:// URL:8484, I get that the web page is unavailable.

    COMMENTLESS VERSION

        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- https -->
            <Connector port="8080" address="${jboss.bind.address}" maxThreads="350"
                       maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" />
            <!-- https1 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    WITH COMMENTS VERSION

        <Server>
          <!-- APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!-- Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Use a custom version of StandardService that allows the connectors to be started
               independent of the normal lifecycle start to allow web apps to be deployed before
               starting the connectors. -->
          <Service name="jboss.web">
            <!-- A "Connector" represents an endpoint by which requests are received and
                 responses are returned. Documentation at:
                 Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                 Java AJP Connector: /docs/config/ajp.html
                 APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" address="${jboss.bind.address}" maxThreads="350"
                       maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
                 configuration; when using APR, the connector should be using the OpenSSL style
                 configuration described in the APR documentation -->
            <!--
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true"
                       keystoreFile="${jboss.server.home.dir}/conf/zara.keystore" keystorePass="zara2010"
                       clientAuth="false" sslProtocol="TLS" compression="on" />
            -->
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" />
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <!-- The JAAS based authentication and authorization realm implementation that is
                   compatible with the jboss 3.2.x realm implementation.
                   - certificatePrincipal : the class name of the
                     org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping
                     X509[] cert chains to a Princpal.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*, one of
                     strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires that the
                       user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles -->
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of
                   JBossSecurityMgrRealm, but overrides the authorization checks to use JACC
                   permissions with the current java.security.Policy to determine authorized access.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*, one of
                     strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires that the
                       user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles
              <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              -->
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <!-- Uncomment to enable request dumper. This Valve "logs interesting contents from
                     the specified Request (before processing) and the corresponding Response
                     (after processing). It is especially useful in debugging problems related to
                     headers and cookies." -->
                <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> -->
                <!-- Access logger -->
                <!-- <Valve className="org.apache.catalina.valves.AccessLogValve"
                           prefix="localhost_access_log." suffix=".log" pattern="common"
                           directory="${jboss.server.log.dir}" resolveHosts="false" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host.
                     Does not provide SSO across a cluster. If this valve is used, do not use the
                     JBoss ClusteredSingleSignOn valve shown below. A new configuration attribute
                     is available beginning with release 4.0.4: cookieDomain configures the domain
                     to which the SSO cookie will be scoped (i.e. the set of hosts to which the
                     cookie will be presented). By default the cookie is scoped to "/", meaning the
                     host that presented it. Set cookieDomain to a wider domain (e.g. "xyz.com")
                     to allow an SSO to span more than one hostname. -->
                <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host AND
                     to all other hosts in the cluster. If this valve is used, do not use the
                     standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache
                     instance to support SSO credential caching and replication across the
                     cluster. The JBossCache instance must be configured separately. By default,
                     the valve shares a JBossCache with the service that supports HttpSession
                     replication. See the "jboss-web-cluster-service.xml" file in the
                     server/all/deploy directory for cache configuration details. Besides the
                     attributes supported by the standard Tomcat SingleSignOn valve (see the
                     Tomcat docs), this version also supports the following attributes:
                     cookieDomain   see above
                     treeCacheName  JMX ObjectName of the JBossCache MBean used to support
                                    credential caching and replication across the cluster. If not
                                    set, the default value is
                                    "jboss.cache:service=TomcatClusteringCache", the standard
                                    ObjectName of the JBossCache MBean used to support session
                                    replication. -->
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <!-- Check for unclosed connections and transaction terminated checks in
                     servlets/jsps. Important: The dependency on the CachedConnectionManager in
                     META-INF/jboss-service.xml must be uncommented, too -->
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>
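    One thing worth checking before digging further into server.xml: two JBoss instances on the same host cannot both bind the SSL port configured here (8443), so if the 8080 instance, or anything else, still holds that port, the 8081 instance's HTTPS connector will quietly fail to start. A rough way to confirm is sketched below; the log path and the alternative port are assumptions about the layout, not taken from the question:

        # is anything actually listening on the SSL port (and on the port you browse to)?
        netstat -tlnp | grep -E ':8443|:8484'

        # look in the 8081 instance's own log for "Address already in use" or keystore errors
        grep -iE 'ssl|8443|keystore|LifecycleException' /path/to/jboss/server/<instance2>/log/server.log | tail -20

        # if both instances keep 8443 in server.xml, give the second instance its own SSL port
        # (for example 8543) and point the redirectPort of its HTTP and AJP connectors at it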

    Read the article

  • Are there other application layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days of ISA, and now TMG, there have been several great features that I often want to deploy to my customers because of the enhanced functionality and security, but often the cost of an additional server's hardware, a Windows Server license, and a TMG license is too much to justify when compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application-layer features: pre-authenticate incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or a basic-credentials popup)? Read the host headers of incoming HTTP traffic (even on HTTPS) to a single public IP and route packets to different internal servers based on that host header?

    Read the article

  • cannot at all find sql instance (while installing an asp.net app on IIS)

    - by giddy
    So I'm really not a DBA, I'm an app dev. I had to install my ASP.NET MVC 3 app on my client's (a large company) IIS6 + Win2k3 machine, with absolutely no help from their sysadmins. The final problem now is SQL Server 2008 R2: after figuring out how to create a login from Windows, my app and sqlcmd.exe always complain that they cannot find a SQL Server instance! I have all the SQL services (in services.msc) set to log on as the local system. I can log in fine with SQL Server Management Studio using Windows auth. I created my database; my ASP.NET app needs/uses Windows auth. But for the love of God, whatever I do, my app always complains it cannot find the instance. (I also tried running sqlcmd and it complains of the same thing!) My database connection string looks like this:

        Data Source=machinename\username;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True

    Machinename\user is the same thing that shows up on the SQL Server Management Studio login if I choose Windows authentication, right?
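    One detail in that connection string deserves a second look: Data Source is host\instance-name (for example MACHINENAME\SQLEXPRESS), not host\windows-user; the user name only appears in Management Studio's login dialog, never in the server name. A quick way to see what the instance is actually called and whether it is reachable, sketched with placeholder names (SQLEXPRESS is only an example, not something from the question):

        rem list SQL Server instances visible on the network
        sqlcmd -L
        rem default (unnamed) instance, Windows auth
        sqlcmd -S machinename -E
        rem a named instance
        sqlcmd -S machinename\SQLEXPRESS -E
        rem if a named instance only fails remotely, make sure the SQL Server Browser service is
        rem running and TCP/IP is enabled in SQL Server Configuration Manager; the connection
        rem string then uses the instance name, not the user name:
        rem Data Source=machinename\SQLEXPRESS;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True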

    Read the article

  • Migrating to LDAP

    - by Frank Brenner
    Hi Folks, I've started a new job at a house where they've got an amazingly unruly patchwork of Linux, xBSD, and OpenSolaris boxes. Every box does its own user auth using local /etc/passwd, etc. Users and groups have differing uids/gids on each machine, and each machine has its own /home/ tree (no central NAS for /home). My job is to get everything into an LDAP directory and use that for login auth. How do I get LDAP to deal with the differing uids/gids? Thanks.
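    The directory can only hold one uid/gid per user, so in practice the work is: pick a canonical uid/gid for each user, load those into LDAP once, then re-own files on every machine that used different numbers. A rough sketch of that flow, assuming OpenLDAP with the migrationtools package (script paths vary by distro) and one box chosen as the reference:

        # 1. generate LDIF from the reference host's local accounts and load it
        /usr/share/migrationtools/migrate_passwd.pl /etc/passwd > users.ldif
        /usr/share/migrationtools/migrate_group.pl  /etc/group  > groups.ldif
        ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f users.ldif
        ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f groups.ldif

        # 2. on every other box, remap files from the old local uid to the canonical one
        #    (repeat per user; OLDUID/NEWUID and the group equivalent differ per machine)
        find / -xdev -uid OLDUID -exec chown -h NEWUID {} +

        # 3. point nsswitch/PAM at the directory (nss_ldap / pam_ldap, or sssd where available)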

    Read the article

  • How to setup Wordpress High Availability

    - by Ketam
    I have installed Galera Cluster on 3 cluster nodes + 1 management node. I want to set it up like this: Server1: Home (www.domain.com); Server2: bbPress/forum (the Forum tab forwards to forum.domain.com); Server3: BuddyPress activity (the Social tab forwards to social.domain.com). I am doing this to distribute my resources and load-balance at the same time. However, I am having difficulty setting up Apache load balancing/mod_proxy/clustering or anything else suitable for a highly available WordPress. Any suggestions or solutions for making WordPress highly available? Or how to do it? Another question: I tried copying the whole WordPress file tree to Server2 and connecting it to the local database (the same data, since it is already in the Galera Cluster), but the page comes up blank. Any advice? OS: CentOS 6.2. Thanks in advance.

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
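    For what it's worth, the stock HMA config replaces the default route, which is exactly what kills the SSH session; sending only ports 80 and 443 down the tunnel needs policy routing, because ordinary routes cannot select by port. One way to sketch it, assuming the tunnel comes up as tun0 (the table number and mark value are arbitrary, and the RETURN rule should be repeated for each "remote" address so the VPN's own TCP/443 control connection is not looped into the tunnel):

        # in client.cfg, ignore the routes the server pushes:
        #   route-nopull

        # once the tunnel is up:
        ip route add default dev tun0 table 100
        ip rule add fwmark 0x1 table 100

        # don't mark the OpenVPN control connection itself (it also uses tcp/443)
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -d 173.242.116.200 -j RETURN
        # mark outbound web traffic so the rule above routes it via tun0
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 1
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 1
        # rewrite the source address of traffic leaving through the tunnel
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        # relax reverse-path filtering on the tunnel interface
        sysctl -w net.ipv4.conf.tun0.rp_filter=0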

    Read the article

  • Google Chrome and kerberos authentication against Apache

    - by Lars
    I've managed to get Kerberos authentication working with Apache and Likewise Open, but so far Google Chrome doesn't seem to play fair. Unless I start it with chrome.exe --auth-server-whitelist="*company.com" it only pops up a login window and will not accept any credentials at all. As far as I know, the --auth-server-whitelist option should only be needed when trying to get single sign-on (SSO) to work; if you are fine with a login window, it should work directly out of the box, but so far it doesn't. This is the error I get in the Apache logs:

        [Tue Dec 13 08:49:04 2011] [error] [client 192.168.1.15] failed to verify krb5 credentials: Unknown code krb5 7

    Read the article

  • Puppet exported resource naming

    - by Tim Brigham
    I am working on setting up a collection of Splunk nodes to be deployed by Puppet. One of the steps in this process is importing the trusts that allow these nodes to find each other automatically. I've looked over several options and it appears that exported resources are the only ready way to make this work. The files I need to create are under /opt/splunk/etc/auth/distServerKeys/<node name>/trusted.pem. The source for each of these files should be that node's own /opt/splunk/etc/auth/distServerKeys/trusted.pem, one per node. What syntax do I need to make this work? The samples I've looked at all appear to have the same source and destination file name.
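    Exported resource titles have to be unique across the whole site, which is why the usual samples look wrong for this case: the exporting node's $fqdn needs to end up in the title (and, conveniently, that also gives the per-node destination directory). A minimal sketch of the pattern; the fact name splunk_trusted_pem is hypothetical and stands for a small custom fact that returns the contents of that node's own trusted.pem:

        # exported by every Splunk node
        @@file { "/opt/splunk/etc/auth/distServerKeys/${::fqdn}/trusted.pem":
          ensure  => file,
          content => $::splunk_trusted_pem,
          tag     => 'splunk_distserverkeys',
        }

        # collected by every Splunk node, so each box ends up with every peer's key
        File <<| tag == 'splunk_distserverkeys' |>>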

    Read the article

  • Granting access to authzTo attribute

    - by bemace
    I'm trying to grant certain accounts auth access to their authzTo attribute in order to allow proxied authorization. I tried adding this LDIF:

        dn: olcDatabase={-1}frontend,cn=config
        changetype: modify
        add: olcAccess
        olcAccess: {1}to authzTo by dn.children="ou=Special Accounts,dc=example,dc=com" auth
        -

    using the command ldapadd -f perm.ldif -D "cn=admin,cn=config" -W, but got this error:

        modifying entry "olcDatabase={-1}frontend,cn=config"
        ldap_modify: Other (e.g., implementation specific) error (80)
                additional info: <olcAccess> handler exited with 1

    Using verbose output and turning up the debug level haven't given me any more clues. Can anyone see what I'm doing wrong?
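    One guess at the root cause: in an olcAccess value, the <what> clause has to say attrs= when it names an attribute, so "to authzTo" is not parsed as an attribute selector and the ACL handler bails out with that exit-code-1 error. Something along these lines (same DN and ou as in the question) may get further:

        ldapmodify -D "cn=admin,cn=config" -W <<'EOF'
        dn: olcDatabase={-1}frontend,cn=config
        changetype: modify
        add: olcAccess
        olcAccess: {1}to attrs=authzTo by dn.children="ou=Special Accounts,dc=example,dc=com" auth
        EOF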

    Read the article

  • Apache equivalent of vsftpd's local_enable

    - by Reinderien
    vsftpd has an option local_enable that allows FTP users to be directly mapped to local users. It even works without any extra effort with our Likewise Active Directory configuration. I've been looking all over, and I can't seem to find an equivalent for Apache .htaccess. The auth providers seem to be file, DBM, LDAP and DBD. None of these seem to allow for HTTP auth user mapping to local user accounts. Is there any way to do this? If not, why not? Thanks.

    Read the article

  • SharePoint doesn't support this authentication scheme.

    - by EtherDragon
    I have a new Windows Phone 7 phone, and I'm trying to investigate how to connect the Office application to our SharePoint site(s). In the Office application on the phone, I flip to the SharePoint page, go to open a URL, and enter the URL for one of my sites, which uses default authentication (Windows auth). I get a message: "Can't open. SharePoint doesn't support this authentication scheme. For assistance, contact the person who manages this SharePoint site (that would be me). You can try opening the content in your web browser instead." When opening it in my browser, I can access the content without any problem (Windows auth passes). Does anyone have any source material on what I should do to my SharePoint site to "support this authentication scheme"? Note: I am the administrator of our SharePoint server farm(s).

    Read the article

  • postfix smtps issue

    - by DavidC
    I'm currently experiencing the following issue with Postfix over SSL (smtps):

        Apr 7 13:43:55 server88-208-248-147 postfix/smtpd[5777]: connect from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]
        Apr 7 13:45:09 server88-208-248-147 postfix/smtpd[5777]: lost connection after UNKNOWN from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]
        Apr 7 13:45:09 server88-208-248-147 postfix/smtpd[5777]: disconnect from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]

    My main.cf is as follows:

        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_use_tls = yes
        smtp_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_CAfile = /etc/postfix/caroot.crt
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_tls_loglevel = 1

    When connecting to SMTP and running STARTTLS I get the following:

        # telnet xxxxxxxxxxxxxxx 25
        Trying xxxxxxxxxxxxxxx...
        Connected to xxxxxxxxxxxxxxx.
        Escape character is '^]'.
        220 xxxxxxxxxxxxxxx ESMTP Postfix
        ehlo localhost
        250-xxxxxxxxxxxxxxx
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH PLAIN LOGIN
        250-AUTH=PLAIN LOGIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        STARTTLS
        220 2.0.0 Ready to start TLS

    Please help, as I've run out of places to look. The OS is Ubuntu 10.04 and the certificate is a wildcard SSL certificate; IMAP/POP and Apache work flawlessly with the same certificate.
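    "lost connection after UNKNOWN" immediately after the connect is the classic symptom of a client speaking TLS-on-connect (smtps, port 465) to a listener that expects plaintext-then-STARTTLS; the STARTTLS path on port 25 above is clearly fine, so the piece that is probably missing is the dedicated smtps service in master.cf rather than anything in main.cf. A sketch, assuming stock Ubuntu 10.04 Postfix (if the smtps lines are already present but commented out, uncomment them instead of adding duplicates) and a made-up hostname for the test:

        # /etc/postfix/master.cf
        smtps     inet  n       -       -       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
          -o smtpd_sasl_auth_enable=yes

        # then reload and verify that wrapper-mode TLS answers on 465
        postfix reload
        openssl s_client -connect mail.example.com:465 -quiet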

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that is less than 2.6.20 and iotop requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if these processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall it with yum, it doesn't look so pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read somewhere online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one and have the rpm install for iotop use that. Well, I didn't choose that route. I figured, what the heck, let's write iotop (not copying it, but reverse engineering it without actually looking at its code or at it in use) in bash. I thought it would just grab the /proc/pid#/io file and parse the stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes, and write_bytes by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless. Does anybody know of any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and work out what the heck they are doing with I/O on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high 'kB_read/s', so that makes me think it's mostly reading. However, if I watch the updates it gives me every second, it's actually only showing values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
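    One note before fighting the interpreter: the counters in /proc/<pid>/io are totals since each process started, so a single snapshot sorted by rchar or write_bytes mostly surfaces long-lived processes rather than whatever is hammering the disk right now; the useful number is the delta over a short interval. A rough Python 2.4-compatible sketch of that idea (it only assumes /proc/<pid>/io is present, which the bash script described above already relies on):

        #!/usr/bin/env python
        # Poor man's iotop: sample /proc/<pid>/io twice and print the biggest movers.
        import os, time

        INTERVAL = 5

        def snapshot():
            stats = {}
            for pid in os.listdir('/proc'):
                if not pid.isdigit():
                    continue
                try:
                    f = open('/proc/%s/io' % pid)
                except (IOError, OSError):
                    continue
                counters = {}
                for line in f:
                    name, value = line.split(':')
                    counters[name.strip()] = int(value.strip())
                f.close()
                stats[pid] = counters
            return stats

        before = snapshot()
        time.sleep(INTERVAL)
        after = snapshot()

        rows = []
        for pid, counters in after.items():
            old = before.get(pid)
            if old is None:
                continue          # process started during the interval
            rd = counters.get('read_bytes', 0) - old.get('read_bytes', 0)
            wr = counters.get('write_bytes', 0) - old.get('write_bytes', 0)
            rows.append((rd + wr, rd, wr, pid))
        rows.sort()
        rows.reverse()

        for total, rd, wr, pid in rows[:10]:
            try:
                cmd = open('/proc/%s/cmdline' % pid).read().replace('\0', ' ').strip() or pid
            except (IOError, OSError):
                cmd = pid
            print '%6s  %-40.40s  read %10d B/s  write %10d B/s' % (pid, cmd, rd / INTERVAL, wr / INTERVAL)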

    Read the article

  • Restarting or stopping apache results in waiting forever

    - by steko
    I have two simple WSGI apps running on top of mod_wsgi and apache2 on a test development server. There is no mod_python on this machine. The WSGI configuration is as follows:

        WSGIDaemonProcess tops stack-size=524288 maximum-requests=5
        WSGIScriptAlias /tops /home/ubuntu/tops-cloud/tops.wsgi
        <Directory /home/ubuntu/tops-cloud>
          WSGIProcessGroup tops
          WSGIApplicationGroup %{GLOBAL}
          Order deny,allow
          Allow from all
        </Directory>

        WSGIDaemonProcess flaskal maximum-requests=5
        WSGIScriptAlias /c14 /home/ubuntu/c14/flaskal/flaskal.wsgi
        <Directory /home/ubuntu/c14/flaskal>
          WSGIProcessGroup flaskal
          WSGIApplicationGroup %{GLOBAL}
          Order deny,allow
          Allow from all
        </Directory>

    If I make changes to the app, I need to restart the web server, so I would expect that a simple sudo service apache2 restart does what I need. The same goes for any changes to the config (e.g. number of maximum requests, etc.). Instead, it never ends "waiting", like this:

        $ sudo service apache2 restart
         * Restarting web server apache2 ... waiting ..................................................

    until I just do CTRL-C. At that point, the only way to resume a working server is to kill the process and restart it, which is not very convenient. The same happens with the stop command. The error logs at the "debug" level show the following lines after a failed restart:

        [Wed Nov 14 21:55:19 2012] [notice] caught SIGTERM, shutting down
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Shutdown requested 'tops'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Stopping process 'tops'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Destroying interpreters.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Cleanup interpreter ''.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Terminating Python.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Shutdown requested 'flaskal'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Stopping process 'flaskal'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Destroying interpreters.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Cleanup interpreter ''.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Terminating Python.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Python has shutdown.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Python has shutdown.

    If I then try to restart again (with the process still running), I get the following error:

         * Restarting web server apache2
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        Action 'start' failed.
        The Apache error log may have more information.

    Unfortunately the Apache error log doesn't have anything. When apache2 is running properly, both apps work without any problem.
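    For what it's worth, a restart that sits at "waiting ..." forever usually means the old Apache parent never exited (often because a mod_wsgi daemon process is stuck in a request), which is also why the next start fails with the "Address already in use" error shown above. Two things that tend to help, sketched with the paths from the question; the pid-file location is an assumption about the Ubuntu layout:

        # see who still owns port 80 and whether old daemons are lingering
        sudo netstat -tlnp | grep ':80 '
        ps -ef | grep -E '[a]pache2|[w]sgi'

        # terminate the old parent directly, then start cleanly
        sudo kill $(cat /var/run/apache2.pid); sleep 5; sudo service apache2 start

        # for pure code changes a full restart is unnecessary: in daemon mode,
        # mod_wsgi reloads the application whenever the .wsgi file is touched
        touch /home/ubuntu/tops-cloud/tops.wsgi
        touch /home/ubuntu/c14/flaskal/flaskal.wsgi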

    Read the article

  • ldap samba user access issue

    - by ancillary
    I have a Samba share on the LAN. It is auth'd via LDAP. Users access the file system via AD Windows shares. There are shortcuts in directories that point to directories on the Samba share. Typically a user will click the shortcut to the SMB directory and be met with a permission-denied error. Upon closing Explorer and reopening it, it will work. DNS is handled by the domain controller, and that is the only DNS server any of the machines use. Nothing in Event Viewer. I only see successful auth entries in the Samba log. Any ideas?

    Read the article

  • MongoDB on EC2 - Creating a replicaset across DCs

    - by ankitb
    We are trying to get a MongoDB setup in EC2 going. I had a few questions: Should we turn on auth, since the MongoDB endpoint will have a public VIP? Is there a big hit on performance with auth enabled? What is the best way to deploy a replica set in EC2? Do I have to deploy all 3 nodes individually and configure them, or can I use a tool to automate the deployment? We would like one of the secondaries to be located in a different DC than the primary. Ubuntu or RHEL? And what version? Thanks!
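    On the auth point: with a public VIP it is generally worth enabling keyFile authentication between the members (which also turns on client auth), and the overhead is small next to EC2 network latency. The members do have to be brought up individually, but the replica set itself is initiated once from any one of them. A minimal sketch with made-up hostnames, the cross-DC member given priority 0 so it can never become primary:

        # same key file on all three members
        openssl rand -base64 96 > /etc/mongodb-keyfile && chmod 600 /etc/mongodb-keyfile
        mongod --replSet rs0 --keyFile /etc/mongodb-keyfile --fork --logpath /var/log/mongod.log

        # run once, on any member
        mongo --eval 'rs.initiate({_id: "rs0", members: [
            {_id: 0, host: "mongo1.dc1.example.com:27017"},
            {_id: 1, host: "mongo2.dc1.example.com:27017"},
            {_id: 2, host: "mongo3.dc2.example.com:27017", priority: 0}]})'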

    Read the article

  • Rsyslog : copy with change the facility

    - by Dom
    I have saslauthd saving its logs to LOG_AUTH on our rsyslogd server. That can't be changed without recompiling, and I don't want to do that. I would like to see everything from LOG_AUTH in LOG_MAIL, because I export the logs to an external machine and I would like to see all the saslauthd logs under LOG_MAIL on the distant server. Of course, locally I can add "auth.*" to the mail.log file section, but the export will not end up in the right file because I filter the export by syslog facility/priority. How can I export all the AUTH logs into the MAIL logs? Thanks
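    Rather than rewriting the facility (which classic selector rules can't really do), it may be simpler to ship auth.* to the log host as well and, on that host, file it together with mail.*: a selector can name several facilities, so both end up in the same destination file. A sketch for both ends, with the log-server name made up:

        # on the saslauthd box, e.g. /etc/rsyslog.d/30-saslauthd.conf
        auth,authpriv.*            @logserver.example.com:514

        # on the central log server: store mail and the forwarded auth messages together
        mail.*;auth,authpriv.*     -/var/log/mail.log

        # then restart rsyslog on both:  service rsyslog restart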

    Read the article

  • subversion issue on mac os x

    - by user32942
    This exists in my httpd.conf file:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          Allow from all
          #AuthType Basic
          #AuthName "Subversion repository"
          #AuthUserFile /Users/iirp/Sites/svn-auth-file
          #Require valid-user
        </Location>

    This works fine. When I change it to:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          #Allow from all
          AuthType Basic
          AuthName "Subversion repository"
          AuthUserFile /Users/iirp/Sites/svn-auth-file
          Require valid-user
        </Location>

    and access my repository through the URL, it gives me the authentication screen, but after that screen my SVN repository does not show up correctly. The message it gives me is:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete
        your request. Please contact the server administrator, [email protected] and inform them
        of the time the error occurred, and anything you might have done that may have caused
        the error. More information about this error may be available in the server error log.
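    The 500 right after the credentials prompt is very often just Apache failing to read AuthUserFile: either the file doesn't exist yet or the Apache user can't read it, and the detail ends up in the error log rather than in the browser. A quick sketch of creating and checking it (paths from the question; the log location is the usual OS X one):

        # create the password file with a first user (-c only the first time)
        sudo htpasswd -c /Users/iirp/Sites/svn-auth-file iirp

        # make sure the server can read it
        sudo chmod 644 /Users/iirp/Sites/svn-auth-file

        # and see what Apache itself complained about
        tail -n 20 /var/log/apache2/error_log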

    Read the article

  • getting Error while set up the connection pool in jboss

    - by Yashwant Chavan
    Hi, I am facing an issue with the following connection pool configuration. The steps I followed: place a copy of mysql-connector-java-[version]-bin.jar in $JBOSS_HOME/server/all/lib. Then follow the example configuration file named mysql-ds.xml in the $JBOSS_HOME/docs/examples/jca directory that comes with a JBoss binary installation. To activate the DataSource, place an XML file that follows the format of mysql-ds.xml in the deploy subdirectory of either $JBOSS_HOME/server/all, $JBOSS_HOME/server/default, or $JBOSS_HOME/server/[yourconfig], as appropriate. I am getting the following error:

        resource-ref: jdbc/buinessCaliberDb has no valid JNDI binding. Check the jboss-web/resource-ref.

    This is my mysql-ds.xml:

        <datasources>
          <local-tx-datasource>
            <jndi-name>jdbc/buinessCaliberDb</jndi-name>
            <connection-url>jdbc:mysql:///BUSINESS</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>root</user-name>
            <password>password</password>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            -->
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql>
            -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool -
                 MySQLValidConnectionChecker is preferred for newer drivers
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
            -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>

    and this is my web.xml entry:

        <resource-ref>
          <description>DB Connection</description>
          <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    and this is my jboss-web.xml entry:

        <jboss-web>
          <resource-ref>
            <description>DB Connection</description>
            <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
          </resource-ref>
        </jboss-web>

    Please help
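    The "has no valid JNDI binding. Check the jboss-web/resource-ref" message usually means exactly what it says: jboss-web.xml is expected to map the res-ref-name to the datasource's real JNDI name with a <jndi-name> element, rather than repeating res-type/res-auth. A local-tx-datasource declared as jdbc/buinessCaliberDb is normally bound under the java: namespace, so a sketch of the mapping would look like the following (worth double-checking the exact binding in the JBoss JMX console under JNDIView):

        <!-- jboss-web.xml -->
        <jboss-web>
          <resource-ref>
            <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
            <jndi-name>java:/jdbc/buinessCaliberDb</jndi-name>
          </resource-ref>
        </jboss-web>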

    Read the article

  • PHP setting cookies in a child class

    - by steve
    I am writing a custom session handler and for the life of me I cannot get a cookie to set in it. I'm not outputting anything to the browser before I set the cookie but it still doesn't work. Its killing me. The cookie will set if I set it in the script I define and call on the session handler with. If necessary I will post code. Any ideas people? <?php /* require the needed classes comment out what is not needed */ require_once("classes/sessionmanager.php"); require_once("classes/template.php"); require_once("classes/database.php"); $title=" "; //titlebar of the web browser $description=" "; $keywords=" "; //meta keywords $menutype="default"; //default or customer, customer is elevated $pagetitle="dflsfsf "; //title of the webpage $pagebody=" "; //body of the webpage $template=template::def_instance(); $database=database::def_instance(); $session=sessionmanager::def_instance(); $session->sessions(); session_start(); ?> and this is the one that actually sets the cookie for the session function write($session_id,$session_data) { $session_id = mysql_real_escape_string($session_id); $session_data = mysql_real_escape_string(serialize($session_data)); $expires = time() + 3600; $user_ip = $_SERVER['REMOTE_ADDR']; $bol = FALSE; $time = time(); $newsession = FALSE; $auth = FALSE; $query = "SELECT * FROM 'sessions' WHERE 'expires' > '$time'"; $sessions_result = $this->query($query); $newsession = $this->newsession_check($session_id,$sessions_result); while($sessions_array = mysql_fetch_array($sessions_result) AND $auth = FALSE) { $session_array = $this->strip($session_array); $auth = $this->auth_check($session_array,$session_id); } /* this is an authentic session. build queries and update it */ if($auth = TRUE AND $newsession = FALSE) { $session_data = mysql_real_escape_string($session_data); $update_query1 = "UPDATE 'sessions' SET 'user_ip' = '$user_ip' WHERE 'session_id' = '$session_id'"; $update_query2 = "UPDATE 'sessions' SET 'data' = '$session_data' WHERE 'session_id = '$session_id'"; $update_query3 = "UPDATE 'sessions' SET 'expires' = '$expires' WHERE 'session_id' = '$session_id'"; $this->query($update_query1); $this->query($update_query2); $this->query($update_query3); $bol = TRUE; } elseif($newsession = TRUE) { /* this is a new session, build and create it */ $random_number = $this->obtain_random(); $cookieval = hash("sha512",$random_number); setcookie("rndn",$cookieval); $query = "INSERT INTO sessions VALUES('$session_id','0','$user_ip','$random_number','$session_data','$expires')"; $this->query($query); //echo $cookieval."this is the cookie <<"; $bol = TRUE; } return $bol; }

    Read the article

  • Whitepaper: The Socially Enabled Enterprise

    - by Richard Lefebvre
    Sharing the results of our new executive study, which explored the opportunities and challenges global organizations are facing in the transition to becoming socially enabled enterprises. Oracle, Leader Networks, and Social Media Today recently conducted an online survey of over 900 Marketing and IT executives to understand how companies are leveraging social technologies and practices throughout their organizations. Read Now!

    Read the article

  • Need a solution to store images (1 billion, 1000,000,000) which users will upload to a website via php or javascript upload [on hold]

    - by wish_you_all_peace
    I need a solution for storing images (1 billion) that users will upload to a website via PHP or JavaScript upload. The website will have 1 billion page views a month and runs on Linux (Debian distros). Assume a maximum of 20 photos per user: 10 thumbnails of 90px by 90px and 10 larger, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (user-uploaded images in this case) inside a single directory, on NFS, etc. is bad, please explain the architecture and configuration of the storage setup you would recommend for storing 1 billion images (no third-party cloud storage like S3; it has to be within our private data center using our own hardware and resources). The solution has to cover both the storage itself and organizing the images uploaded by users. How will we organize the users' images if a single user will have no more than 20 images (10 thumbs and 10 large with either width or height 500px)? Please consider that this has to be organized in a structured way so we can fetch a single user's images programmatically, via PHP/JavaScript or an API, through some kind of unique user identifier(s).
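    On the "how do we organize them" half: one widely used pattern (a sketch, not a prescription) is to make every path computable from the user ID alone and to never let a single directory grow huge, by deriving a couple of directory levels from a hash of the ID. In PHP that could look like the following, with the base path and layout purely hypothetical:

        <?php
        // Hypothetical layout: /srv/images/<ab>/<cd>/<user_id>/{thumb,large}/<photo_id>.jpg
        // "ab" and "cd" are the first two byte pairs of md5(user id), which keeps the number
        // of entries in any one directory manageable even at a billion images.
        function user_image_dir($baseDir, $userId)
        {
            $hash = md5((string) $userId);
            return sprintf('%s/%s/%s/%d', $baseDir, substr($hash, 0, 2), substr($hash, 2, 2), $userId);
        }

        $dir = user_image_dir('/srv/images', 123456789);
        // store thumbnails under "$dir/thumb" and the 500px versions under "$dir/large";
        // the same function runs at upload time and at read time, so no lookup table is needed.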

    Read the article

  • Frederick .NET User Group April 2010 Meeting

    - by John Blumenauer
    FredNUG is pleased to announce that we have an excellent speaker lined up for April.  On April 20th, we’ll start with pizza and social networking at 6:30 PM.  Then, starting at 7 PM, Dane Morgridge will present “Getting Started with Entity Framework 4” The scheduled agenda is:   6:30 PM - 7:00 PM - Pizza/Social Networking/Announcements 7:00 PM - 8:30 PM - Main Topic: Getting Started with Entity Framework 4 with Dane Morgridge  Main Topic Description:  Getting Started with Entity Framework 4 With .Net 3.5 Microsoft release Linq to Sql and with .Net 3.5 SP1 came the Entity Framework, both powerful ORM tools leveraging Linq technology.   Entity Framework v1, while usable, was definitely lacking some important features and the Entity Framework team delivered with version 4 coming with Visual Studio 2010.  In this session we will look at Entity Framework 4 from the ground level and you will get a solid understanding of it basic principles.  We will also go through all of the new features in Entity Framework 4 and see how far it’s come since the initial release.  If you’ve never taken a look at Entity Framework, now is the time as version 4 is the real deal. Speaker Bio: Dane Morgridge has been a developer for 9+ years and has worked with .Net & C# since the first public beta. His current passions are Entity Framework, WPF, WCF, Silverlight and LINQ. He works mostly with C#, but is also a big fan of whatever new technology he happens to come across. In addition to software development, he is the host of the Community Megaphone Podcast and also enjoys dabbling in graphic design, video special effects and hockey. When not with his family he is usually learning some new technology or working on some side projects. He is currently working as the Development Manager & Architect at Roska Direct in Montgomeryville, PA.  He can be reached through is blog http://geekswithblogs.net/danemorgridge or on Twitter @danemorgridge.  8:30 PM - 8:45 PM – RAFFLE! Please join us and get involved in our .NET developers community!

    Read the article

  • HTML5 Form Validation

    - by Stephen.Walther
    The latest versions of Google Chrome (16+), Mozilla Firefox (8+), and Internet Explorer (10+) all support HTML5 client-side validation. It is time to take HTML5 validation seriously. The purpose of the blog post is to describe how you can take advantage of HTML5 client-side validation regardless of the type of application that you are building. You learn how to use the HTML5 validation attributes, how to perform custom validation using the JavaScript validation constraint API, and how to simulate HTML5 validation on older browsers by taking advantage of a jQuery plugin. Finally, we discuss the security issues related to using client-side validation. Using Client-Side Validation Attributes The HTML5 specification discusses several attributes which you can use with INPUT elements to perform client-side validation including the required, pattern, min, max, step, and maxlength attributes. For example, you use the required attribute to require a user to enter a value for an INPUT element. The following form demonstrates how you can make the firstName and lastName form fields required: <!DOCTYPE html> <html > <head> <title>Required Demo</title> </head> <body> <form> <label> First Name: <input required title="First Name is Required!" /> </label> <label> Last Name: <input required title="Last Name is Required!" /> </label> <button>Register</button> </form> </body> </html> If you attempt to submit this form without entering a value for firstName or lastName then you get the validation error message: Notice that the value of the title attribute is used to display the validation error message “First Name is Required!”. The title attribute does not work this way with the current version of Firefox. If you want to display a custom validation error message with Firefox then you need to include an x-moz-errormessage attribute like this: <input required title="First Name is Required!" x-moz-errormessage="First Name is Required!" /> The pattern attribute enables you to validate the value of an INPUT element against a regular expression. For example, the following form includes a social security number field which includes a pattern attribute: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Pattern</title> </head> <body> <form> <label> Social Security Number: <input required pattern="^d{3}-d{2}-d{4}$" title="###-##-####" /> </label> <button>Register</button> </form> </body> </html> The regular expression in the form above requires the social security number to match the pattern ###-##-####: Notice that the input field includes both a pattern and a required validation attribute. If you don’t enter a value then the regular expression is never triggered. You need to include the required attribute to force a user to enter a value and cause the value to be validated against the regular expression. Custom Validation You can take advantage of the HTML5 constraint validation API to perform custom validation. You can perform any custom validation that you need. The only requirement is that you write a JavaScript function. 
For example, when booking a hotel room, you might want to validate that the Arrival Date is in the future instead of the past: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Constraint Validation API</title> </head> <body> <form> <label> Arrival Date: <input id="arrivalDate" type="date" required /> </label> <button>Submit Reservation</button> </form> <script type="text/javascript"> var arrivalDate = document.getElementById("arrivalDate"); arrivalDate.addEventListener("input", function() { var value = new Date(arrivalDate.value); if (value < new Date()) { arrivalDate.setCustomValidity("Arrival date must be after now!"); } else { arrivalDate.setCustomValidity(""); } }); </script> </body> </html> The form above contains an input field named arrivalDate. Entering a value into the arrivalDate field triggers the input event. The JavaScript code adds an event listener for the input event and checks whether the date entered is greater than the current date. If validation fails then the validation error message “Arrival date must be after now!” is assigned to the arrivalDate input field by calling the setCustomValidity() method of the validation constraint API. Otherwise, the validation error message is cleared by calling setCustomValidity() with an empty string. HTML5 Validation and Older Browsers But what about older browsers? For example, what about Apple Safari and versions of Microsoft Internet Explorer older than Internet Explorer 10? What the world really needs is a jQuery plugin which provides backwards compatibility for the HTML5 validation attributes. If a browser supports the HTML5 validation attributes then the plugin would do nothing. Otherwise, the plugin would add support for the attributes. Unfortunately, as far as I know, this plugin does not exist. I have not been able to find any plugin which supports both the required and pattern attributes for older browsers, but does not get in the way of these attributes in the case of newer browsers. There are several jQuery plugins which provide partial support for the HTML5 validation attributes including: · jQuery Validation — http://docs.jquery.com/Plugins/Validation · html5Form — http://www.matiasmancini.com.ar/jquery-plugin-ajax-form-validation-html5.html · h5Validate — http://ericleads.com/h5validate/ The jQuery Validation plugin – the most popular JavaScript validation library – supports the HTML5 required attribute, but it does not support the HTML5 pattern attribute. Likewise, the html5Form plugin does not support the pattern attribute. The h5Validate plugin provides the best support for the HTML5 validation attributes. 
The following page illustrates how this plugin supports both the required and pattern attributes: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>h5Validate</title> <style type="text/css"> .validationError { border: solid 2px red; } .validationValid { border: solid 2px green; } </style> </head> <body> <form id="customerForm"> <label> First Name: <input id="firstName" required /> </label> <label> Social Security Number: <input id="ssn" required pattern="^d{3}-d{2}-d{4}$" title="Expected pattern is ###-##-####" /> </label> <input type="submit" /> </form> <script type="text/javascript" src="Scripts/jquery-1.4.4.min.js"></script> <script type="text/javascript" src="Scripts/jquery.h5validate.js"></script> <script type="text/javascript"> // Enable h5Validate plugin $("#customerForm").h5Validate({ errorClass: "validationError", validClass: "validationValid" }); // Prevent form submission when errors $("#customerForm").submit(function (evt) { if ($("#customerForm").h5Validate("allValid") === false) { evt.preventDefault(); } }); </script> </body> </html> When an input field fails validation, the validationError CSS class is applied to the field and the field appears with a red border. When an input field passes validation, the validationValid CSS class is applied to the field and the field appears with a green border. From the perspective of HTML5 validation, the h5Validate plugin is the best of the plugins. It adds support for the required and pattern attributes to browsers which do not natively support these attributes such as IE9. However, this plugin does not include everything in my wish list for a perfect HTML5 validation plugin. Here’s my wish list for the perfect back compat HTML5 validation plugin: 1. The plugin would disable itself when used with a browser which natively supports HTML5 validation attributes. The plugin should not be too greedy – it should not handle validation when a browser could do the work itself. 2. The plugin should simulate the same user interface for displaying validation error messages as the user interface displayed by browsers which natively support HTML5 validation. Chrome, Firefox, and Internet Explorer all display validation errors in a popup. The perfect plugin would also display a popup. 3. Finally, the plugin would add support for the setCustomValidity() method and the other methods of the HTML5 validation constraint API. That way, you could implement custom validation in a standards compatible way and you would know that it worked across all browsers both old and new. Security It would be irresponsible of me to end this blog post without mentioning the issue of security. It is important to remember that any client-side validation — including HTML5 validation — can be bypassed. You should use client-side validation with the intention to create a better user experience. Client validation is great for providing a user with immediate feedback when the user is in the process of completing a form. However, client-side validation cannot prevent an evil hacker from submitting unexpected form data to your web server. You should always enforce your validation rules on the server. The only way to ensure that a required field has a value is to verify that the required field has a value on the server. The HTML5 required attribute does not guarantee anything. Summary The goal of this blog post was to describe the support for validation contained in the HTML5 standard. 
You learned how to use both the required and the pattern attributes in an HTML5 form. We also discussed how you can implement custom validation by taking advantage of the setCustomValidity() method. Finally, I discussed the available jQuery plugins for adding support for the HTM5 validation attributes to older browsers. Unfortunately, I am unaware of any jQuery plugin which provides a perfect solution to the problem of backwards compatibility.

    Read the article

  • Is CRM keeping up with the times?

    - by antonella.buonagurio(at)oracle.com
    Social Customer Relationship Management was born out of the revolution brought by Web 2.0, an epochal change in the way we communicate that has added incredible richness to the conversations between companies and consumers. Companies now have unprecedented tools for understanding their market; consumers, in turn, have the power to use new channels to express their needs and to communicate and share comments and experiences. But Web 2.0 is not the only factor affecting CRM strategy that every company must consider in order to sustain this new relationship with its consumers. Do you want to find out which forces (or factors) companies must take into account so that their customer relationship management processes keep pace with changed social and economic conditions? To learn more: the white paper produced by Oracle, Paul Gillin, and IT Business Edge outlines a few of them: 1. The business. How has it changed as a result of the multichannel experience now possible, of customer centricity, and of the social networks that dominate online relationships? 2. The technology. To gain competitive advantage, companies today must adopt the most innovative technologies to add value to their business and to minimize infrastructure costs. Which are they, and what are the actual benefits? And more still... by reading the white paper "Is your CRM solution keeping up with the times?"

    Read the article
