Search Results

Search found 3366 results on 135 pages for 'openvpn auth ldap'.


  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So I have set up an OpenLDAP server using this guide here. It worked fine. But since I want to use sssd, I also need TLS working for LDAP, so I looked into it and followed the TLS part of the guide. I never got any errors, and slapd started fine again. BUT: it does not seem to work when I try to use LDAP over TLS.

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation

    Cranking up the debug level some notches returns some more information:

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5
        ldap_url_parse_ext(ldap://83.209.243.253)
        ldap_create
        ldap_url_parse_ext(ldap://83.209.243.253:389/??base)
        ldap_extended_operation_s
        ldap_extended_operation
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP 83.209.243.253:389
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 83.209.243.253:389
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({) ber:
        ber_flush2: 31 bytes to sd 3
        ldap_result ld 0x7f25df51e220 msgid 1
        wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f25df51e220 msgid 1 all 1
        ** ld 0x7f25df51e220 Connections:
        * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected
          last used: Fri Jun 6 08:52:16 2014
        ** ld 0x7f25df51e220 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
          outstanding referrals 0, parent count 0
        ld 0x7f25df51e220 request count 1 (abandoned 0)
        ** ld 0x7f25df51e220 Response Queue: Empty
        ld 0x7f25df51e220 response count 0
        ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f25df51e220 NULL
        ldap_int_select
        read1msg: ld 0x7f25df51e220 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 42 contents:
        read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f25df51e220 0 new referrals
        read1msg: mark request completed, ld 0x7f25df51e220 msgid 1
        request done: ld 0x7f25df51e220 msgid 1
        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_extended_result ber_scanf fmt ({eAA) ber:
        ldap_parse_result ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_err2string
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 3
        ldap_free_connection: actually freed

    So no good information there either. In /var/log/syslog I get:

        Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389)
        Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation
        Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND
        Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed

    If I portscan the host I get the following:

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST
        Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253)
        Host is up (0.0072s latency).
        Not shown: 996 closed ports
        PORT    STATE SERVICE
        22/tcp  open  ssh
        80/tcp  open  http
        389/tcp open  ldap
        636/tcp open  ldapssl

    But when I check certs:

        root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state
        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:unknown state
        140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 317 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do, or on how to get better debug logging...

    EDIT: This is my config, dumped with slapcat from cn=config, and it does not mention anything about TLS at all. I have inserted my certinfo.ldif:

        root@master:~# cat certinfo.ldif
        dn: cn=config
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
        -
        add: olcTLSCertificateFile
        olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem
        -
        add: olcTLSCertificateKeyFile
        olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem

    and when doing that I only got this as an answer:

        root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
        SASL/EXTERNAL authentication started
        SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
        SASL SSF: 0
        modifying entry "cn=config"

    So still no wiser.
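
    A hedged note: OID 1.3.6.1.4.1.1466.20037 in the syslog is the StartTLS extended operation itself, so slapd is refusing StartTLS outright, which usually means the TLS configuration never actually loaded. A diagnostic sketch (the paths come from the post; the openldap user is Ubuntu's default for slapd and is an assumption here):

        # Verify the olcTLS* attributes really landed in cn=config:
        ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base \
            olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile

        # slapd must be able to read the private key; if it cannot, it can
        # silently come up without offering StartTLS at all:
        sudo -u openldap cat /etc/ssl/private/daladevelop_slapd_key.pem > /dev/null \
            && echo "key readable"

    If the attributes are missing, re-applying the LDIF and restarting slapd while watching the log for TLS errors would be the next step.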

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed OpenLDAP with yum, and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema
        pidfile /var/lib/ldap/slapd.pid
        loglevel trace

        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824
        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL: it aims to let the owner write the userPassword attribute, let a specific administrators group write it, let anonymous users access it for authentication only, and deny access to everyone else. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I get a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
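
    A hedged note on slapd ACL semantics: once any access directive is present, anything not matched by some rule falls through to an implicit deny, and this configuration only grants rights on userpassword, so the bound user cannot even see the base entry; from the client that looks exactly like err=32 (No such object). A sketch of the usual fix, adding a catch-all rule after the userpassword one:

        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none
        access to *
            by users read
            by * none

    Separately, -H expects an LDAP URI, so ldap://ldap.domain.tld rather than the bare hostname.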

    Read the article

  • Certificate revocation check fails for non-domain guest in spite of accessible CRL

    - by 0xFE
    When we try to use certificates on computers that are not part of the domain, Windows complains that "The revocation function was unable to check revocation because the revocation server was offline." However, if I manually open the certificate and check the CRL Distribution Point property, I see an ldap:/// URL and an http:// URL that points to an externally accessible IIS site that hosts the CRLs. Of course, the non-domain-joined client cannot access the ldap:/// URL, but it can download the CRL from the http:// link (at least in a browser).

    I enabled CAPI logging and I see the event that corresponds to this failed revocation check. The RevocationInfo section is:

        RevocationInfo
          [ freshnessTime] PT11H27M4S
          RevocationResult: The revocation function was unable to check revocation because the revocation server was offline.
          [ value] 80092013
          CertificateRevocationList
            [ location] UrlCache
            [ url] http://the correct URL
            [ fileRef] 6E463C2583E17C63EF9EAC4EFBF2AEAFA04794EB.crl
            [ issuerName] the name of the CA

    Furthermore, I can see the HTTP request to the correct URL and the server's response (HTTP 304 Not Modified) with Microsoft Network Monitor. I ran certutil -verify -urlfetch, and it seems to show the same thing: the computer recognizes both URLs, tries both, and even though the http:// link succeeds, it returns the same error. Is there a way to have non-domain-joined clients skip the ldap:/// link and only check the http:// one?

    Edit: The ldap:/// URL is ldap:///CN=<name of CA>,CN=<name of server that is running the CA>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=<domain name>?certificateRevocationList?base?objectClass=cRLDistributionPoint

    The non-domain-joined clients may be on the domain network or on an external network. The http:// CDP is accessible from the public internet.
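
    For what it's worth (a hedged suggestion, not from the post): since the HTTP download demonstrably works, one classic culprit is a stale CRL sitting in the client's URL cache, and the 304 Not Modified in the network trace is consistent with that. A quick check on the failing client might look like:

        REM A diagnostic sketch (assumption: run on the non-domain client).
        REM Flush the cached CRLs, then re-run chain validation with live fetching.
        certutil -urlcache * delete
        certutil -verify -urlfetch problem-cert.cer

    On the CA side, the order of the URLs in the CDP extension is baked into each issued certificate, so reordering HTTP ahead of LDAP in the CA's extension settings would only help certificates issued after the change.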

    Read the article

  • What does ldapsearch response mean?

    - by Martijn Burger
    I created an LDAP directory with a number of users and groups. When I query this directory from a remote server with:

        ldapsearch -H ldap://ldap.myserver.net/ -x -vvvvvvv -b dc=myserver,dc=net -D cn=admin,dc=myserver,dc=net -W

    I get all objects in the directory returned. The result finishes with the following:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 85
        # numEntries: 84

    What do these numbers mean exactly?
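
    For reference, these trailers are LDAP protocol bookkeeping rather than anything specific to this server; roughly (my annotations, not from the post):

        # search: 2          - message ID of the search operation (the bind used 1)
        # result: 0 Success  - result code carried by the final SearchResultDone
        # numResponses: 85   - LDAP messages received: 84 entries + 1 result message
        # numEntries: 84     - directory entries actually returned

    So numResponses is always numEntries plus one (plus any referrals), and result 0 simply means the search completed without an LDAP error.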

    Read the article

  • How to change the default domain controller when querying AD in a different site?

    - by Linefeed
    We have two different locations, and at both sites we have multiple domain controllers (Win2008). In our application we use serverless binding to execute our LDAP queries (http://msdn.microsoft.com/en-us/library/ms677945(v=vs.85).aspx). If we look at the DnsHostName of LDAP://RootDSE on site B, we always get the default domain controller of site A, so all LDAP queries run much slower. Is there a way to change the default domain controller per site?
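
    One hedged approach: rather than letting serverless binding take whatever DC the locator returns, ask the locator for a DC in a specific site and bind server-qualified. A C# sketch (the domain and site names are placeholders):

        using System;
        using System.DirectoryServices;
        using System.DirectoryServices.ActiveDirectory;

        // Ask the DC locator for a domain controller in the named site,
        // then bind with a server-qualified path instead of serverless.
        var context = new DirectoryContext(DirectoryContextType.Domain, "corp.example.com");
        using (DomainController dc = DomainController.FindOne(context, "SiteB"))
        using (var root = new DirectoryEntry("LDAP://" + dc.Name + "/RootDSE"))
        {
            Console.WriteLine(root.Properties["dnsHostName"].Value);
        }

    That said, if the locator keeps returning a site-A DC for machines in site B, it is worth checking that site B's subnets are defined and mapped correctly in Active Directory Sites and Services, since the locator's site awareness depends entirely on that mapping.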

    Read the article

  • Read access to Active Directory property (uSNChanged)

    - by Tom Ligda
    I have an issue with read access to the uSNChanged property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNChanged property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNChanged property for some users (UserGroupA) but not for others (UserGroupB).

    When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference on the "Security" tab: the users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked, while the users in UserGroupB have that option checked. I also noticed that the users in UserGroupA were created earlier, and the users in UserGroupB were created recently. It's difficult to quantify, but I estimate the boundary in creation time between the users in UserGroupA and UserGroupB is about 6 months ago.

    What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNChanged property on LDAP searches? It seems correlated, but I'm not sure about causation.

    What I want in the end is for all authenticated users to have read access to the uSNChanged property for all users when doing an LDAP search. I would also be OK with granting read access for that property to an AD group; then I could control access by adding members to the group.
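
    If granting explicit read access turns out to be acceptable, dsacls can grant a single property to an AD group at the container level, which matches the fallback described in the last paragraph. A sketch (the container and group names are placeholders):

        REM Grant "LdapSyncReaders" read access (RP) on the uSNChanged
        REM property of user objects under the Users container, applied
        REM to sub-objects so it inherits down to the users themselves.
        dsacls "CN=Users,DC=domain,DC=tld" /I:S /G "DOMAIN\LdapSyncReaders:RP;uSNChanged;user"

    Users whose "include inheritable permissions" box is unchecked would still not pick this up, since they ignore inherited ACEs by definition; those would need the box re-checked or an explicit ACE.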

    Read the article

  • The Importance of Fully Specifying a Problem

    - by Alan
    I had a customer call this week where we were provided a forced crashdump and asked to determine why the system was hung. Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it's threads spinning on busy-wait type locks). This vmcore showed none of that. In fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced.

    This prompted the question back to the customer: what exactly were you seeing that made you believe the system was hung? It took a few days to get a response, but the response I got back was that they were not able to ssh into the system, and when they tried to log in on the console they got the login prompt, but after typing "root" and hitting return, the console was no longer responsive.

    This description puts a whole new light on the "hang". You immediately start thinking "name services". Looking at the crashdump, yes, the sshds are all in door calls to nscd, and nscd is idle, waiting on responses from the network. Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly I am seeing a few connections over the non-secure LDAP port to a different LDAP server just sitting open. My feeling at this point is that we have either a non-responding LDAP server or one that is responding slowly, and the resolution is to investigate that server.

    Moral: when you log a service ticket for a "system hang", it's great to provide the forced crashdump first up, but it's even better to include a description of what you observed that made you believe the system was hung.

    Read the article

  • [solved] puppet master REST API returns 403 when running under Passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided with the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates, and signed them on the master using the option --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP and not by hostname. Is there any Nginx configuration to change this behavior?
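
    A hedged observation: the update's first log line ("Could not resolve 10.209.47.31: no name") shows the master never received a certname for the agent and fell back to matching ACLs by IP, so rules like `allow $1` can never match. Puppet 3's auth.conf accepts allow_ip for exactly this by-IP fallback; a sketch (the address/subnet choice is an assumption):

        # Workaround sketch, not a confirmed fix: permit the agent subnet
        # by IP alongside the certname-based rule.
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1
        allow_ip 10.209.47.31

    The cleaner fix is usually to make sure the client-certificate headers Nginx sets (X_CLIENT_DN, X_CLIENT_VERIFY) actually reach Puppet's Rack application under the names its config.ru expects, since falling back to IP normally means the master treated the request as carrying no verified client certificate at all.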

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6.

    ADF Security Basics

    ADF Basics

    The Application Development Framework (ADF) is Oracle's preferred technology for developing GUI-based Java applications. It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications. ADF is based on and extends the Java Server Faces (JSF) technology. To get an idea, Oracle provides an online demo to showcase ADF components.

    ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs. However, ADF also has its own data access layer, ADF Business Components (ADF BC), that allows rapid integration of data from databases and web service interfaces into the ADF UI components. In this way ADF helps implement the MVC approach to building applications with UI and data components.

    The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy. One has an application up and running very quickly, with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically, and components like tables, forms, text fields, graphs and so on to be easily added to a page. On top of JSF, Oracle has added drag-and-drop tooling in JDeveloper and declarative binding of the UI to the data layer, be it a database, a web service or Java beans. An important addition is the bounded task flow, which is a reusable set of pages and transitions. ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets, including powerful visualizations. It is worth pointing out that the Oracle WebCenter product (portal, content management and so on) is based on and extends ADF.

    ADF Security

    ADF comes with its own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, and authorization of access to web pages, task flows, components within the pages, and data being returned from the model layer.

    One typically relies on WLS to handle authentication, and because of this, users and groups will also be handled by WLS. Typically in a dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not. In the case where authorization is enabled for ADF, one defines a set of roles in which we place users, and then we grant access to these roles to the different ADF elements (pages, task flows, or elements in a page).

    An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that it is defined in terms of users and LDAP groups from the WLS identity store; "enterprise" in the sense that these are things available for use to all applications that use that store. The other kind of role is an Application Role, and the idea is that a given application will make use of enterprise roles and users to build up a set of roles for its own use. These application roles will be available only to that application. The general idea here is that the enterprise roles are relatively static (for example, an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on. One of the things that OES adds is that we can define these dynamic membership conditions in Role Mapping Policies.

    To make this concrete: at design time in JDeveloper, one assigns these rights, which puts them into a file called jazn-data.xml. When the ADF app is deployed to a WLS, this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles, and to the WLS backing LDAP for the users and enterprise roles. Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file. Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups.

    For production environments (or, in future, to share this data with OES) one would then perform the operation of "reassociating" this security policy and application role data to a DB schema (or an LDAP). This is done in the EM console by reassociating the Security Provider. This blog posting has more explanations and references on this reassociation process. If ADF authentication and authorization are enabled, then the security policies for a deployed application can be managed in EM. Our goal is to manage security policies for the application via OES and its console instead.

    Security Requirements for an ADF Application

    With this quick tour of ADF security, we can see that to secure an ADF application we would expect to take care of at least the following items:

    - Authentication, including a user and user-group store
    - Authorization for page access
    - Authorization for bounded task flow access. A bounded task flow has only one point of entry, so if we protect that entry point by calling out to OES, then all the pages in the flow are protected.
    - Authorization for viewing data coming from the data access layer

    In the next posting we will describe a sample ADF application and the required security policies.
    References

    ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application

    Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2)

    Read the article

  • mod_rewrite in conjunction with "options indexes"

    - by Travis
    I have a directory ("files") where sub-directories and files are going to be created and stored over time. The directories also need to deliver a directory listing, using "Options Indexes", but only if a user is authenticated and authorized. I have that part built and working, by doing the following:

        <Directory /var/www/html/files>
            Options Indexes
            IndexOptions FancyIndexing SuppressHTMLPreamble
            HeaderName /includes/autoindex/auth.php
        </Directory>

    Now I need to take care of file delivery. To force authentication for files, I have built the following:

        RewriteCond %{REQUEST_URI} -f
        RewriteRule /files/(.*) /auth.php

    I also tried:

        RewriteCond %{REQUEST_URI} !-d
        RewriteRule /files/(.*) /auth.php

    Both directives are redirecting to auth.php when I request:

        foo.com/files/bar/
        foo.com/files/bar/baz

    I am outputting the SERVER global in auth.php during testing, and it is showing the requests as I made them (I thought Apache may have been doing something behind the scenes, such as adding something like "index.html" to the end with "Options Indexes" being on). Ideas?
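
    A hedged note: the -f and -d tests check the literal string they are given against the filesystem, and %{REQUEST_URI} is a URL path ("/files/bar"), not a filesystem path, so -f will essentially never match and !-d will almost always match, which would explain both variants misbehaving. The usual pattern tests the mapped path instead; a sketch (assuming the files live under the document root):

        RewriteEngine On
        # send real files under /files/ through the auth gate
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^/files/(.*)$ /auth.php [L]
        # directory requests fall through to mod_autoindex, which the
        # HeaderName auth.php already guards

    In per-directory (.htaccess) context the leading slash in the RewriteRule pattern would be dropped.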

    Read the article

  • Pattern filters in Laravel 4

    - by ali A
    I want to make a pattern route to redirect users to the login page when they are not logged in. I searched but couldn't find a solution; as always, Laravel's documentation is useless! I have this in my filters.php:

        Route::filter('auth', function() {
            if (Auth::guest()) return Redirect::guest('login');
        });

        Route::filter('auth.basic', function() {
            return Auth::basic();
        });

    And this route in my routes.php:

        Route::when('/*', 'auth');

    but it's not working. How can I do that?
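
    A hedged note: Route::when() patterns in Laravel 4 are matched against the request path without a leading slash, so '/*' only ever matches the root path; '*' would match everything, but then the filter also guards the login route and the guest redirect loops. A sketch of both options (the grouped route targets are placeholders):

        <?php
        // Option 1: fix the pattern - no leading slash in when().
        Route::when('*', 'auth');

        // Option 2 (avoids guarding the login page itself): put only the
        // protected routes behind the filter.
        Route::group(array('before' => 'auth'), function() {
            Route::get('/', 'HomeController@index');
            Route::get('dashboard', 'DashboardController@show');
        });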

    Read the article

  • video and file caching with squid lusca?

    - by moon
    Hello all. I have configured Squid Lusca on Ubuntu 11.04 and also configured video caching, but the problem is that Squid cannot cache videos longer than about 2 minutes, or files larger than about 5.xx MB. Here is my config; please guide me on how I can cache long videos and large files with Squid:

        # PORT and Transparent Option
        http_port 8080 transparent
        server_http11 on
        icp_port 0

        # Cache Directory, modify it according to your system.
        # but first create directory in root by mkdir /cache1
        # and then issue this command chown proxy:proxy /cache1
        # [for ubuntu user is proxy, in Fedora user is SQUID]
        # I have set 500 MB reserved just for caching,
        # adjust it according to your need.
        # My recommendation is to have one cache_dir per drive.

        #store_dir_select_algorithm round-robin
        cache_dir aufs /cache1 500 16 256
        cache_replacement_policy heap LFUDA
        memory_replacement_policy heap LFUDA

        # If you want to enable DATE time in SQUID logs, use following
        emulate_httpd_log on
        logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
        log_fqdn off

        # How many days to keep users' access web logs
        # You need to rotate your log files with a cron job. For example:
        # 0 0 * * * /usr/local/squid/bin/squid -k rotate
        logfile_rotate 14
        debug_options ALL,1
        cache_access_log /var/log/squid/access.log
        cache_log /var/log/squid/cache.log
        cache_store_log /var/log/squid/store.log

        # I used the DNSMASQ service for fast dns resolving,
        # so install it by using "apt-get install dnsmasq" first
        dns_nameservers 127.0.0.1 101.11.11.5

        ftp_user anonymous@
        ftp_list_width 32
        ftp_passive on
        ftp_sanitycheck on

        # ACL Section
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443 563      # https, snews
        acl SSL_ports port 873          # rsync
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443 563     # https, snews
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl Safe_ports port 631         # cups
        acl Safe_ports port 873         # rsync
        acl Safe_ports port 901         # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow all
        http_reply_access allow all
        icp_access allow all

        #==========================
        # Administrative Parameters
        #==========================

        # I used UBUNTU so user is proxy, in FEDORA you may use squid
        cache_effective_user proxy
        cache_effective_group proxy
        cache_mgr [email protected]
        visible_hostname proxy.aacable.net
        unique_hostname [email protected]

        #=============
        # ACCELERATOR
        #=============
        half_closed_clients off
        quick_abort_min 0 KB
        quick_abort_max 0 KB
        vary_ignore_expire on
        reload_into_ims on
        log_fqdn off
        memory_pools off

        # If you want to hide your proxy machine from being detected at various sites, use following
        via off

        #============================================
        # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib
        #============================================
        # If you have 4GB memory in the Squid box, we will use a formula of 1/3.
        # You can adjust it according to your need. If squid is taking too much RAM,
        # then decrease it to 128 MB or even less.

        cache_mem 256 MB
        minimum_object_size 512 bytes
        maximum_object_size 500 MB
        maximum_object_size_in_memory 128 KB

        #============================================================
        # SNMP, if you want to generate graphs for SQUID via MRTG
        #============================================================
        #acl snmppublic snmp_community gl
        #snmp_port 3401
        #snmp_access allow snmppublic all
        #snmp_access allow all

        #============================================================
        # ZPH, to enable cache content to be delivered at full lan speed,
        # to bypass the queue at MT.
        #============================================================
        tcp_outgoing_tos 0x30 all
        zph_mode tos
        zph_local 0x30
        zph_parent 0
        zph_option 136

        # Caching Youtube
        acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.com\/videoplayback \.youtube\.com\/videoplay \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay \.youtube\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.com\/videoplayback \.google\.com\/videoplay \.google\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay \.google\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
        acl videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
        acl videocache_allow_url url_regex -i \.youporn\.com\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp)
        acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv
        acl videocache_allow_dom dstdomain .mccont.com .metacafe.com .cdn.dailymotion.com
        acl videocache_deny_dom dstdomain .download.youporn.com .static.blip.tv
        acl dontrewrite url_regex redbot\.org \.php
        acl getmethod method GET

        storeurl_access deny dontrewrite
        storeurl_access deny !getmethod
        storeurl_access deny videocache_deny_dom
        storeurl_access allow videocache_allow_url
        storeurl_access allow videocache_allow_dom
        storeurl_access deny all

        storeurl_rewrite_program /etc/squid/storeurl.pl
        storeurl_rewrite_children 7
        storeurl_rewrite_concurrency 10

        acl store_rewrite_list urlpath_regex -i \/(get_video\?|videodownload\?|videoplayback.*id)
        acl store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$ \
        storeurl_access allow store_rewrite_list
        storeurl_access deny all

        refresh_pattern -i \.flv$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp3$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp4$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.swf$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.gif$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.exe$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth

        # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440
        refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern \.(ico|video-stats) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080
        refresh_pattern \.etology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern galleries\.video(\?|sz) 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern brazzers\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern \.adtology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* 10080 20% 10080 ignore-no-cache ignore-private override-expire ignore-reload ignore-auth negative-ttl=40320 max-stale=10
        refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth negative-ttl=10080
        refresh_pattern ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% 10080 override-expire ignore-reload ignore-private negative-ttl=10080
        refresh_pattern ytimg\.com.*\.jpg 10080 80% 10080 override-expire ignore-reload
        refresh_pattern images\.friendster\.com.*\.(png|gif) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern garena\.com 10080 80% 10080 override-expire reload-into-ims
        refresh_pattern photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% 10080 ignore-no-cache override-expire override-lastmod
        refresh_pattern mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 reload-into-ims override-expire ignore-private
        refresh_pattern ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http:\/\/www.onemanga.com.*\/ 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080

        # images facebook
        refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache

        # All File
        refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern ^ftp: 10080 95% 10080 override-lastmod reload-into-ims
        refresh_pattern . 1440 95% 10080 override-lastmod reload-into-ims
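
    One hedged guess about the roughly-5 MB ceiling: Squid parses its configuration top-down, and in the 2.7/Lusca line a cache_dir can end up with the object-size ceiling in effect at the moment it is parsed; here cache_dir appears near the top while maximum_object_size 500 MB comes much later, so the store may still be using the default 4 MB limit. A sketch of the reordering:

        # Declare the object-size limits BEFORE cache_dir (this ordering
        # fix is an assumption based on Squid's top-down config parsing).
        minimum_object_size 512 bytes
        maximum_object_size 500 MB

        cache_dir aufs /cache1 500 16 256

    As for videos cutting off around two minutes: quick_abort_min 0 KB tells Squid to abandon the upstream fetch as soon as the player disconnects, while quick_abort_min -1 KB (keep fetching to completion) is the knob usually changed when trying to cache whole videos.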

    Read the article

  • Login URL using authentication information in Django

    - by fuSi0N
    I'm working on a platform for online labs registration for my university.

    Login view [project views.py]:

        from django.http import HttpResponse, HttpResponseRedirect, Http404
        from django.shortcuts import render_to_response
        from django.template import RequestContext
        from django.contrib import auth

        def index(request):
            return render_to_response('index.html', {}, context_instance=RequestContext(request))

        def login(request):
            if request.method == "POST":
                post = request.POST.copy()
                if post.has_key('username') and post.has_key('password'):
                    usr = post['username']
                    pwd = post['password']
                    user = auth.authenticate(username=usr, password=pwd)
                    if user is not None and user.is_active:
                        auth.login(request, user)
                        if user.get_profile().is_teacher:
                            return HttpResponseRedirect('/teachers/' + user.username + '/')
                        else:
                            return HttpResponseRedirect('/students/' + user.username + '/')
                    else:
                        return render_to_response('index.html', {'msg': 'You don\'t belong here.'}, context_instance=RequestContext(request))
            return render_to_response('login.html', {}, context_instance=RequestContext(request))

        def logout(request):
            auth.logout(request)
            return render_to_response('index.html', {}, context_instance=RequestContext(request))

    URLs:

        #========== PROJECT URLS ==========#
        urlpatterns = patterns('',
            (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
            (r'^admin/', include(admin.site.urls)),
            (r'^teachers/', include('diogenis.teachers.urls')),
            (r'^students/', include('diogenis.students.urls')),
            (r'^login/', login),
            (r'^logout/', logout),
            (r'^$', index),
        )

        #========== TEACHERS APP URLS ==========#
        urlpatterns = patterns('',
            (r'^(?P<username>\w{0,50})/', labs),
        )

    The login view basically checks whether the logged-in user is_teacher [a UserProfile attribute, via get_profile()] and redirects the user to his profile.

    Labs view [teachers app views.py]:

        from django.http import HttpResponse, HttpResponseRedirect, Http404
        from django.shortcuts import render_to_response
        from django.template import RequestContext
        from django.contrib.auth.decorators import user_passes_test
        from django.contrib.auth.models import User
        from accounts.models import *
        from labs.models import *

        def user_is_teacher(user):
            return user.is_authenticated() and user.get_profile().is_teacher

        @user_passes_test(user_is_teacher, login_url="/login/")
        def labs(request, username):
            q1 = User.objects.get(username=username)
            q2 = u'%s %s' % (q1.last_name, q1.first_name)
            q2 = Teacher.objects.get(name=q2)
            results = TeacherToLab.objects.filter(teacher=q2)
            return render_to_response('teachers/labs.html', {'results': results}, context_instance=RequestContext(request))

    I'm using the @user_passes_test decorator to check whether the authenticated user has permission to use this view [the labs view]. The problem with the current logic is that once Django authenticates a teacher, he has access to all teachers' profiles, simply by typing another teacher's username in the URL. Once a teacher finds a co-worker's username, he has direct access to that co-worker's data. Any suggestions would be much appreciated.
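
    The decorator only verifies that the requester is a teacher, not which teacher's page is being requested, so the usual fix is an ownership check inside the view comparing the URL's username against the session user. A sketch (not the original code):

        from django.http import HttpResponseForbidden

        @user_passes_test(user_is_teacher, login_url="/login/")
        def labs(request, username):
            # a teacher may only view their own page
            if request.user.username != username:
                return HttpResponseForbidden("This is not your profile.")
            # ... the original Teacher/TeacherToLab queries follow unchanged ...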

    Read the article

  • Using pam_python in a script running with mod_python

    - by markys
    Hi! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use PyPAM to query the PAM service. I adapted the example bundled with the module and got this:

        # out is the output stream used to print debug
        def auth(username, password, out):
            def pam_conv(aut, query_list, user_data):
                out.write("Query list: " + str(query_list) + "\n")
                # List to store the responses to the different queries
                resp = []
                for item in query_list:
                    query, qtype = item
                    # If PAM asks for an input, give the password
                    if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF:
                        resp.append((str(password), 0))
                    elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO:
                        resp.append(('', 0))
                out.write("Our response: " + str(resp) + "\n")
                return resp

            # If username or password is undefined, fail
            if username is None or password is None:
                return False

            service = 'login'
            pam_ = PAM.pam()
            pam_.start(service)
            # Set the username
            pam_.set_item(PAM.PAM_USER, str(username))
            # Set the conversation callback
            pam_.set_item(PAM.PAM_CONV, pam_conv)
            try:
                pam_.authenticate()
                pam_.acct_mgmt()
            except PAM.error, resp:
                out.write("Error: " + str(resp) + "\n")
                return False
            except:
                return False
            # If we get here, the authentication worked
            return True

    My problem is that this function does not behave the same whether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple cases:

        my_username = "markys"
        my_good_password = "lalala"
        my_bad_password = "lololo"

        def handler(req):
            req.content_type = "text/plain"
            req.write("1- " + str(auth(my_username, my_good_password, req)) + "\n")
            req.write("2- " + str(auth(my_username, my_bad_password, req)) + "\n")
            return apache.OK

        if __name__ == "__main__":
            print "1- " + str(auth(my_username, my_good_password, sys.__stdout__))
            print "2- " + str(auth(my_username, my_bad_password, sys.__stdout__))

    The result from the script is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        1- True
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    but the result from mod_python is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        Error: ('Authentication failure', 7)
        1- False
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong? Here is the original script, if that could help you. Thanks a lot!
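
    A hedged diagnosis, since the post doesn't say which user Apache runs as: from the command line this script runs as the invoking user (often root), but under mod_python it runs as the Apache worker (www-data or apache). pam_unix verifies passwords against /etc/shadow, and for a non-root process its unix_chkpwd helper will only verify the calling user's own password, so authenticating markys from the Apache user fails with ('Authentication failure', 7) regardless of the password. A quick check sketch:

        import os, pwd

        def whoami(out):
            # Under mod_python this typically reports 'www-data' or 'apache',
            # a user that cannot read /etc/shadow, which is why the same
            # credentials fail there but succeed from a root shell.
            euid = os.geteuid()
            out.write("euid=%d (%s)\n" % (euid, pwd.getpwuid(euid).pw_name))
            out.write("shadow readable: %s\n" % os.access("/etc/shadow", os.R_OK))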

    Read the article

  • 401 Unauthorized returned on GET request (https) with correct credentials

    - by Johnny Grass
    I am trying to log in to my web app using HttpWebRequest, but I keep getting the following error:

        System.Net.WebException: The remote server returned an error: (401) Unauthorized.

    Fiddler has the following output:

        Result  Protocol  Host            URL
        200     HTTP      mysite.com:443  CONNECT
        302     HTTPS     mysite.com      /auth
        401     HTTP      mysite.com      /auth

    This is what I'm doing:

        // to ignore SSL certificate errors
        public bool AcceptAllCertifications(object sender,
            System.Security.Cryptography.X509Certificates.X509Certificate certification,
            System.Security.Cryptography.X509Certificates.X509Chain chain,
            System.Net.Security.SslPolicyErrors sslPolicyErrors)
        {
            return true;
        }

        try
        {
            // request
            Uri uri = new Uri("https://mysite.com/auth");
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri) as HttpWebRequest;
            request.Accept = "application/xml";

            // authentication
            string user = "user";
            string pwd = "secret";
            string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
            request.Headers.Add("Authorization", auth);
            ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);

            // response
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            // Display
            Stream dataStream = response.GetResponseStream();
            StreamReader reader = new StreamReader(dataStream);
            string responseFromServer = reader.ReadToEnd();
            Console.WriteLine(responseFromServer);

            // Cleanup
            reader.Close();
            dataStream.Close();
            response.Close();
        }
        catch (WebException webEx)
        {
            Console.Write(webEx.ToString());
        }

    I am able to log in to the same site with no problem using ASIHTTPRequest in a Mac app, like this:

        NSURL *login_url = [NSURL URLWithString:@"https://mysite.com/auth"];
        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:login_url];
        [request setDelegate:self];
        [request setUsername:name];
        [request setPassword:pwd];
        [request setRequestMethod:@"GET"];
        [request addRequestHeader:@"Accept" value:@"application/xml"];
        [request startAsynchronous];
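
    A hedged reading of the Fiddler trace: the 302 bounces /auth from HTTPS to plain HTTP, and HttpWebRequest does not re-send a hand-rolled Authorization header across a redirect, so the final request arrives unauthenticated; ASIHTTPRequest applies its credentials per request, which would explain why the Mac app works. A sketch of the usual workaround (credentials and URL are placeholders):

        // Let the credential plumbing answer the challenge instead of a
        // hand-built header.
        Uri uri = new Uri("https://mysite.com/auth");
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Accept = "application/xml";

        CredentialCache cache = new CredentialCache();
        cache.Add(uri, "Basic", new NetworkCredential("user", "secret"));
        request.Credentials = cache;
        request.PreAuthenticate = true;

        // Alternatively, stop the automatic redirect and re-issue the request
        // (with the Authorization header) against the Location target yourself:
        // request.AllowAutoRedirect = false;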

    Read the article

  • Basic user authentication with records in AngularFire

    - by ajkochanowicz
    Having spent literally days trying the various recommended ways to do this, I've landed on what I think is the most simple and promising. Also, thanks to the kind gents from this SO question: Get the index ID of an item in Firebase AngularFire.

    Current setup

    Users can log in with email and social networks, so when they create a record, it saves the userId as a sort of foreign key. Good so far. But I want to create a rule so twitter2934392 cannot read facebook63203497's records.

    Off to the security panel: match the IDs on the backend

    Unfortunately, the docs are inconsistent with the method from "is firebase user id unique per provider (facebook, twitter, password)", which suggests appending the social network to the ID. The docs expect you to create a different rule for each login method's IDs. Why anyone using one login method would want to do that is beyond me. (From: https://www.firebase.com/docs/security/rule-expressions/auth.html)

    So I'll try to match the concatenation of auth.provider and auth.id against the userId stored in the respective registry item. According to the API, this should be as easy as the following (in my case using $registry instead of $user, of course):

        {
          "rules": {
            ".read": true,
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "$registry == auth.id"
              }
            }
          }
        }

    But that won't work, because (see the first image above) AngularFire sets each record under an index value. In the image above, it's 0. Here's where things get complicated.

    Also, I can't test anything in the simulator, as I cannot edit {some: 'json'} to even authenticate. The input box rejects any input.

    My best guess is the following:

        {
          "rules": {
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "data.child('userId').val() == (auth.provider + auth.id)"
              }
            }
          }
        }

    This both throws authentication errors and simultaneously grants full read access to all users. I'm losing my mind. What am I supposed to do here?
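
    A hedged sketch of the rule shape that usually matches AngularFire's layout: put the wildcard at the level of the numeric indexes and compare each child's userId. Note also that the first snippet's top-level ".read": true would mask everything beneath it, because Firebase rules grant access top-down and a child rule cannot take a granted permission away. Paths and field names are taken from the post; the rest is an assumption:

        {
          "rules": {
            "registry": {
              "$recordId": {
                ".read":  "data.child('userId').val() == (auth.provider + auth.id)",
                ".write": "newData.child('userId').val() == (auth.provider + auth.id)"
              }
            }
          }
        }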

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic go via the local gw, and everything else via the VPN.

    Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 10.102.1.5. The OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic).

    What I tried was:

    - create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the mangled packet and point it to the 2nd routing table
    - add an ip rule for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - change the ipv4 source validation to none with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails, and I'm not sure what I'm missing:

    - iptables mangle PREROUTING: the packet still goes via the VPN
    - iptables mangle OUTPUT: the packet times out

    Is it not the case that, to achieve what I want, when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the http server be routed back to 192.168.1.6 and wget not see it, as it is still bound to tun0 (the VPN interface)? Can a kind soul please help? What commands would you execute after OpenVPN connects to achieve what I want? Looking forward to hair regrowth ...
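
    For what it's worth, the poster's own suspicion looks right, and the usually-missing piece is the SNAT step: mark the HTTP/SSH traffic, route marked packets out eth0 via a separate table, and rewrite their source to the LAN address so replies return on eth0; conntrack reverses the NAT on the way back, so the application socket never notices. A sketch using the addresses from the post (the table and mark numbers are arbitrary):

        # route marked packets via the LAN gateway using table 100
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule add fwmark 1 table 100

        # mark outbound HTTP and SSH traffic
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 1

        # rewrite the source address so replies come back in via eth0;
        # conntrack un-NATs them before delivery to the socket
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6

        ip route flush cache

    The rp_filter=0 settings already in place are needed for this to work, since the marked packets initially carry the tun0 source address when the reroute check moves them to eth0.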

    Read the article

  • Unable to download torrents when using a VPN

    - by chad
    I am running Ubuntu 13.04 and using OpenVPN with vpnbook. When I am on the VPN I am unable to download torrents. I have tried three different torrent clients (qBittorrent, Deluge, and Transmission): Deluge just says "Checking" and never begins downloading, qBittorrent shows "stalled" next to the torrent, and Transmission reports nothing and simply doesn't download. Is there some network setting I am missing or some OpenVPN configuration I need to change?
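    Two coarse checks that can narrow this down, sketched with placeholder values (the tracker host and the ports are examples only); free VPN exits often drop UDP or block peer-to-peer ports, which leaves clients stuck at "Checking"/"stalled" exactly like this:

      # is any peer/tracker traffic actually leaving through the tunnel?
      sudo tcpdump -ni tun0 'udp or tcp port 51413'

      # does UDP get through the tunnel at all? (placeholder tracker host/port)
      nc -u -z -w 3 tracker.example.org 6969

    If nothing shows up on tun0, the client is not sending via the tunnel; if traffic leaves but nothing returns, the provider is filtering it.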

    Read the article

  • LINQ to Twitter v2.1.09 Released

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/10/15/linq-to-twitter-v2.1.09-released.aspx
    Today, I released LINQ to Twitter v2.1.09. Here are the important new changes.
    Bug Fixes
    This is primarily a bug fix release. Most notably, there were authentication problems in WinRT apps; this is now fixed.
    New Features
    One new feature is the addition of application-only authentication for WinRT. It is fully async. Here's how it works:

      var auth = new WinRtApplicationOnlyAuthorizer
      {
          Credentials = new InMemoryCredentials
          {
              ConsumerKey = "",
              ConsumerSecret = ""
          }
      };

      if (auth == null || !auth.IsAuthorized)
      {
          await auth.AuthorizeAsync();
      }

      var twitterCtx = new TwitterContext(auth);

      (from search in twitterCtx.Search
       where search.Type == SearchType.Search &&
             search.Query == SearchTextBox.Text
       select search)
      .MaterializedAsyncCallback(
          async response =>
              await Dispatcher.RunAsync(
                  CoreDispatcherPriority.Normal,
                  async () =>
                  {
                      Search searchResponse = response.State.Single();
                      string message = string.Format(
                          "Search returned {0} statuses",
                          searchResponse.Statuses.Count);
                      await new MessageDialog(message, "Search Complete").ShowAsync();
                  }));

    It's called the WinRtApplicationOnlyAuthorizer. You only need two tokens, ConsumerKey and ConsumerSecret, which come from your Twitter API application settings page. Note: you need a Twitter application, which you can create at https://dev.twitter.com/.
    The MaterializedAsyncCallback materializes your query and handles the response. I put everything together in a lambda for demonstration purposes, but you can always replace the callback with a handler of type Action<TwitterAsyncResponse<IEnumerable<T>>>, where T is Search for this example.
    On the Horizon
    The next version of LINQ to Twitter is in development. I discussed it at LINQ to Twitter Async. It isn't complete, but you can download the source code at the LINQ to Twitter site on CodePlex. I've completed all the spikes for what I thought would be the hard parts and now have prototypes of queries and commands working. This would be a good time to provide feedback if there are features in the current version that you think could be improved. The current driving forces for the next version will be async and PCL.
    @JoeMayo

    Read the article

  • Apache HttpClient Digest authentication

    - by Milan Jovic
    Hi, basically what I need to do is perform digest authentication. The first thing I tried is the official example available here, but when I try to execute it (with some small changes: Post instead of the Get method) I get:

      org.apache.http.auth.MalformedChallengeException: missing nonce in challange
          at org.apache.http.impl.auth.DigestScheme.processChallenge(DigestScheme.java:132)

    When this failed I tried using:

      DefaultHttpClient client = new DefaultHttpClient();
      client.getCredentialsProvider().setCredentials(
          new AuthScope(null, -1, null),
          new UsernamePasswordCredentials("<username>", "<password>"));

      HttpPost post = new HttpPost(URI.create("http://<someaddress>"));
      List<NameValuePair> nvps = new ArrayList<NameValuePair>();
      nvps.add(new BasicNameValuePair("domain", "<username>"));
      post.setEntity(new UrlEncodedFormEntity(nvps, HTTP.UTF_8));

      DigestScheme digestAuth = new DigestScheme();
      digestAuth.overrideParamter("algorithm", "MD5");
      digestAuth.overrideParamter("realm", "http://<someaddress>");
      digestAuth.overrideParamter("nonce", Long.toString(new Random().nextLong(), 36));
      digestAuth.overrideParamter("qop", "auth");
      digestAuth.overrideParamter("nc", "0");
      digestAuth.overrideParamter("cnonce", DigestScheme.createCnonce());

      Header auth = digestAuth.authenticate(
          new UsernamePasswordCredentials("<username>", "<password>"), post);
      System.out.println(auth.getName());
      System.out.println(auth.getValue());
      post.setHeader(auth);

      HttpResponse ret = client.execute(post);
      ByteArrayOutputStream v2 = new ByteArrayOutputStream();
      ret.getEntity().writeTo(v2);
      System.out.println("----------------------------------------");
      System.out.println(v2.toString());
      System.out.println("----------------------------------------");
      System.out.println(ret.getStatusLine().getReasonPhrase());
      System.out.println(ret.getStatusLine().getStatusCode());

    At first I had only overridden the "realm" and "nonce" DigestScheme parameters, but it turned out that the PHP script running on the server requires all the other params as well; yet whether I specify them or not, DigestScheme doesn't generate them when I call its authenticate() method. I've been struggling with this for two days with no luck. Based on everything, I think the cause of the problem is the PHP script: it looks to me like it doesn't send a challenge when the app tries to access it unauthorized. Any ideas, anyone?
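    For what it's worth, Apache's own preemptive-digest example (HttpClient 4.1 and later) seeds an AuthCache for the host instead of hand-building the header with authenticate(). A sketch with placeholder host and credentials; the realm and nonce below are stand-ins for values that should come from the server's real 401 challenge:

      import java.util.Random;
      import org.apache.http.HttpHost;
      import org.apache.http.HttpResponse;
      import org.apache.http.auth.AuthScope;
      import org.apache.http.auth.UsernamePasswordCredentials;
      import org.apache.http.client.AuthCache;
      import org.apache.http.client.methods.HttpPost;
      import org.apache.http.client.protocol.ClientContext;
      import org.apache.http.impl.auth.DigestScheme;
      import org.apache.http.impl.client.BasicAuthCache;
      import org.apache.http.impl.client.DefaultHttpClient;
      import org.apache.http.protocol.BasicHttpContext;

      public class PreemptiveDigest {
          public static void main(String[] args) throws Exception {
              HttpHost target = new HttpHost("someaddress", 80, "http");

              DefaultHttpClient client = new DefaultHttpClient();
              client.getCredentialsProvider().setCredentials(
                      new AuthScope(target.getHostName(), target.getPort()),
                      new UsernamePasswordCredentials("<username>", "<password>"));

              // Seed the scheme; in a correct exchange these values come
              // from the server's 401 challenge, not from the client.
              DigestScheme digestAuth = new DigestScheme();
              digestAuth.overrideParamter("realm", "<realm from the 401>");
              digestAuth.overrideParamter("nonce", Long.toString(new Random().nextLong(), 36));

              // Cache the scheme for this host so it is applied preemptively.
              AuthCache authCache = new BasicAuthCache();
              authCache.put(target, digestAuth);

              BasicHttpContext localcontext = new BasicHttpContext();
              localcontext.setAttribute(ClientContext.AUTH_CACHE, authCache);

              HttpPost post = new HttpPost("/");
              HttpResponse response = client.execute(target, post, localcontext);
              System.out.println(response.getStatusLine());
          }
      }

    Given the MalformedChallengeException, it is also worth capturing the raw WWW-Authenticate header first (for example with curl -v), since "missing nonce" suggests the PHP side really is emitting a malformed challenge.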

    Read the article

  • Securing an ADF Application using OES11g: Part 2

    - by user12587121
    To validate the integration with OES we need a sample ADF application that is rich enough to let us test securing the various ADF elements. To achieve this we can add some items, including bounded task flows, to the application developed in this tutorial. A sample JDeveloper 11.1.1.6 project is available here. It depends on the Fusion Order Demo (FOD) database schema, which is easily created using the FOD build scripts. In the deployment we have chosen to enable only ADF Authentication, as we will delegate Authorization, mostly, to OES.
    The welcome page of the application with all the links exposed looks as follows: the Welcome, Browse Products, Browse Stock and System Administration links go to pages, while Supplier Registration and Update Stock launch bounded task flows. The Login link goes to a basic login page, and once logged in a link is presented that goes to a logout page. Only the Browse Products and Browse Stock pages are really connected to the database; the other pages and task flows do not perform any operations on the database.
    Required Security Policies
    We make use of a set of test users and roles as described on the welcome page of the application. In order to exercise the different authorization possibilities we would like to enforce the following sample policies:
    1. Anonymous users can see the Login, Welcome and Supplier Registration links. They can also see the Welcome page, the Login page, and follow the Supplier Registration task flow. They can see the icon adjacent to the Login link indicating whether they have logged in or not.
    2. Authenticated users can see the Browse Products page.
    3. Only staff granted the right can see the cost price value that the Browse Products page returns from the database, and then only if the value is below a configurable limit.
    4. Suppliers and staff can see the Browse Stock links and pages; customers cannot.
    5. Suppliers can see the Update Stock link, but only those with the update permission are allowed to follow the task flow that it launches. We could hide the link, but we leave it exposed here so we can easily demonstrate the method call activity protecting the task flow.
    6. Only staff granted the right can see the System Administration link and the System Administration page it accesses.
    Implementing the required policies
    In order to secure the application we will make use of the following techniques:
    EL expressions and Java backing beans: JSF has the notion of EL expressions to reference data from backing Java classes. We use these to control the presentation of links on the navigation page so that they respect the security constraints: a user will not see links that he is not allowed to click into. These Java backing beans can call on to OES for an authorization decision. Important note: naturally we would configure the WLS domain where our ADF application is running as an OES WLS SM, which would allow us to efficiently query OES over the PEP API. However, versioning conflicts between OES 11.1.1.5 and ADF 11.1.1.6 mean that this is not possible; nevertheless, we can make use of the OES RESTful gateway technique from this posting in order to call into OES. You can easily create and manage backing beans in JDeveloper.
    Custom ADF phase listener: ADF extends the JSF page lifecycle and allows one to hook into the flow to intercept page rendering. We use this to put a check in place prior to rendering any protected page, again calling on to OES via the backing bean. Phase listeners are configured in the adf-settings.xml file.
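    A global phase-listener registration in adf-settings.xml typically looks like the sketch below; the listener-id and the package name are assumptions, chosen to match the MyPageListener class discussed next:

      <adf-settings xmlns="http://xmlns.oracle.com/adf/settings">
        <adfc-controller-config xmlns="http://xmlns.oracle.com/adf/controller/config">
          <lifecycle>
            <phase-listener>
              <listener-id>MyPageListener</listener-id>
              <class>view.MyPageListener</class>
            </phase-listener>
          </lifecycle>
        </adfc-controller-config>
      </adf-settings>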
    See the MyPageListener.java class in the project. Here, for example, is the code we use in the listener to check for allowed access to the sysadmin page, navigating back to the welcome page if authorization is not granted:

      if (page != null && (page.equals("/system.jspx") || page.equals("/system"))) {
          System.out.println("MyPageListener: Checking Authorization for /system");
          if (getValue("#{oesBackingBean.UIAccessSysAdmin}").toString().equals("false")) {
              System.out.println("MyPageListener: Forcing navigation away from system to welcome");
              NavigationHandler nh = fc.getApplication().getNavigationHandler();
              nh.handleNavigation(fc, null, "welcome");
          } else {
              System.out.println("MyPageListener: access allowed");
          }
      }

    Method call activity: our app makes use of bounded task flows to implement the sequences of pages that update the stock or allow suppliers to self-register. ADF takes care of ensuring that a bounded task flow can be entered by only one page. So a way to protect all those pages is to make a call to OES in the first activity and then either exit the task flow or continue, depending on the authorization decision. The method call returns a String which contains the name of the transition to effect. This is where we configure the method call activity in JDeveloper.
    We implement each of the policies using the above techniques as follows:
    Policies 1 and 2: as these policies concern the coarse-grained notions of controlling access for anonymous and authenticated users, we can make use of the container's security constraints, which can be defined in the web.xml file. The allPages constraint is added automatically when we configure Authentication for the ADF application. We have added the "anonymousss" constraint to allow access to the required pages, task flows and icons:

      <security-constraint>
        <web-resource-collection>
          <web-resource-name>anonymousss</web-resource-name>
          <url-pattern>/faces/welcome</url-pattern>
          <url-pattern>/afr/*</url-pattern>
          <url-pattern>/adf/*</url-pattern>
          <url-pattern>/key.png</url-pattern>
          <url-pattern>/faces/supplier-reg-btf/*</url-pattern>
          <url-pattern>/faces/supplier_register_complete</url-pattern>
        </web-resource-collection>
      </security-constraint>

    Policy 3: we can place an EL expression on the element representing the cost price on the products.jspx page: #{oesBackingBean.dataAccessCostPrice}. This EL expression references a method in a Java backing bean that will call on to OES for an authorization decision. In OES we model the authorization requirement by requiring the view permission on the resource /MyADFApp/data/costprice and granting it only to the staff application role. We recover any obligations to determine the limit.
    Policy 4: is implemented by putting an EL expression on the Browse Stock link, #{oesBackingBean.UIAccessBrowseStock}, which checks for the view permission on the /MyADFApp/ui/stock resource. The stock.jspx page is protected by checking for the same permission in a custom phase listener; if the required permission is not satisfied then we force navigation back to the welcome page.
    Policy 5: the Update Stock link is protected with the same EL expression as the Browse Stock link: #{oesBackingBean.UIAccessBrowseStock}. However, the Update Stock link launches a bounded task flow, and to protect it the first activity in the flow is a method call activity which executes the EL expression #{oesBackingBean.isUIAccessSupplierUpdateTransition} to check for the update permission on the /MyADFApp/ui/stock resource, and either transitions to the next step in the flow or terminates the flow with an authorization error.
    Policy 6: the System Administration link is protected with an EL expression, #{oesBackingBean.UIAccessSysAdmin}, that checks for view access on the /MyADF/ui/sysadmin resource. The system page is protected in the same way as the stock page: the custom phase listener checks for the same permission that protects the link, and if it is not satisfied we navigate back to the welcome page.
    Testing the Application
    To test the application:
    - deploy the OES11g Admin to a WLS domain
    - deploy the OES gateway in another domain configured to be a WLS SM. You must ensure that the jps-config.xml file therein is configured to allow access to the identity store, otherwise the gateway will not be able to resolve the principals for the requested users. To do this, ensure that the following elements appear in the jps-config.xml file:

      <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider"
                       class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
        <description>LDAP-based IdentityStore Provider</description>
      </serviceProvider>

      <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
        <property name="idstore.config.provider"
                  value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
        <property name="CONNECTION_POOL_CLASS"
                  value="oracle.security.idm.providers.stdldap.JNDIPool"/>
      </serviceInstance>

      <serviceInstanceRef ref="idstore.ldap"/>

    - download the sample application, change the URL in the MyADFApp OESBackingBean code to point to the OES Gateway, and deploy the application to an 11.1.1.6 WLS domain that has been extended with the ADF JRF files. You will need to configure the FOD database connection to point to your database containing the FOD schema.
    - populate the OES Admin and OES Gateway WLS LDAP stores with the sample set of users and groups. If you have configured the WLS domains to point to the same LDAP then this only has to be done once. To help with this there is a directory called ldap_scripts in the sample project with ldif files for the test users and groups.
    - start the OES Admin console, configure the required OES authorization policies for the MyADFApp application, and push them to the WLS SM containing the OES Gateway.
    - log in to MyADFApp as each of the users described on the login page to test that the security policy is correct. You will see informative logging from the OES Gateway and the ADF application in their respective WLS consoles.
    Congratulations, you may now log in to the OES Admin console and change policies that will control the behaviour of your ADF application: change the limit value in the obligation for the cost price, for example, or define Role Mapping policies to determine staff access to the system administration page based on user profile attributes.
    ADF Development Notes
    Some notes on ADF development which are probably typical gotchas:
    - You may need this on WLS startup in order to allow credentials for the database to be overwritten (the signal here is an error trying to access the database): -Djps.app.credential.overwrite.allowed=true
    - It is best to call bounded task flows via a CommandLink (as opposed to a go link), as you cannot seem to start them again from a go link, even having completed the task flow correctly with a return activity.
    - Once a bounded task flow (BTF) is initiated it must complete correctly via a return activity; attempting to click on any other link whilst in the context of a BTF has no effect. See here for example:
    - When using the ADF Authentication only security approach, it seems to be awkward to allow anonymous access to the welcome and registration pages. We can achieve anonymous access using the web.xml security constraint shown above (where no auth-constraint is specified); however it is not clear what needs to be listed in there… for example /afr/* and /adf/* are in there by trial and error, as sometimes the welcome page will not render if we omit those items. I was not able to use the default allPages constraint with, for example, the anonymous-role or the everyone WLS group in order to allow anonymous access to pages.
    - The ADF security best practice advises placing all pages under the public_html/WEB-INF folder, as then ADF will not allow any direct access to the .jspx pages but will only allow access via a link of the form /faces/welcome rather than /faces/welcome.jspx. This seems like a very good practice to follow, as having multiple entry points to data is a source of confusion in a web application (particularly from a security point of view).
    - In Authentication+Authorization mode, only pages with a page definition file are protected. In order to add an empty one, right-click the page and choose Go to Page Definition. This will create an empty page definition, and now the page will require explicit permission to be seen.
    - It is advisable to give the application a unique context root via weblogic.xml, as otherwise it will clash with any other application with the same context root and will not deploy; a minimal weblogic.xml is sketched below.
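    A minimal weblogic.xml that pins the context root looks like this (the root name itself is arbitrary):

      <?xml version="1.0" encoding="UTF-8"?>
      <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
        <context-root>MyADFApp</context-root>
      </weblogic-web-app>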

    Read the article

  • imapsync - Authentication failed

    - by Touff
    I've deployed many Google Apps accounts and have used imapsync a number of times to migrate accounts to Google Apps. This time, however, no matter what I try, imapsync refuses to work, claiming my credentials are incorrect; I've checked them time and time again and they are 100% correct. On Ubuntu 12, built from source, my command is:

      imapsync --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN

    Full output from the command:

      get options: [1] PID is 21316
      $RCSfile: imapsync,v $ $Revision: 1.592 $ $Date:
      With perl 5.14.2 Mail::IMAPClient 3.35
      Command line used: /usr/bin/imapsync --debug --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN
      Temp directory is /tmp
      PID file is /tmp/imapsync.pid
      Modules version list:
      Mail::IMAPClient 3.35
      IO::Socket 1.32
      IO::Socket::IP ?
      IO::Socket::INET 1.31
      IO::Socket::SSL 1.53
      Net::SSLeay 1.42
      Digest::MD5 2.51
      Digest::HMAC_MD5 1.01
      Digest::HMAC_SHA1 1.03
      Term::ReadKey 2.30
      Authen::NTLM 1.09
      File::Spec 3.33
      Time::HiRes 1.972101
      URI::Escape 3.31
      Data::Uniqid 0.12
      IMAPClient 3.35
      Info: turned ON syncinternaldates, will set the internal dates (arrival dates) on host2 same as host1.
      Info: will try to use LOGIN authentication on host1
      Info: will try to use PLAIN authentication on host2
      Info: imap connexions timeout is 120 seconds
      Host1: IMAP server [SERVER1] port [993] user [USER1]
      Host2: IMAP server [imap.gmail.com] port [993] user [USER2]
      Host1: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
      Host1: SERVER1 says it has CAPABILITY for AUTHENTICATE LOGIN
      Host1: success login on [SERVER1] with user [USER1] auth [LOGIN]
      Host2: * OK Gimap ready for requests from MY-VPS
      Host2: imap.gmail.com says it has CAPABILITY for AUTHENTICATE PLAIN
      Failure: error login on [imap.gmail.com] with user [USER2] auth [PLAIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    I have tried -authmech2 LOGIN as well, which returns:

      Host2: imap.gmail.com says it has NO CAPABILITY for AUTHENTICATE LOGIN
      Failure: error login on [imap.gmail.com] with user [[email protected]] auth [LOGIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    If anyone can shed some light on this I would greatly appreciate it.
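    One mundane thing worth ruling out first, since host1 authenticates fine in the same invocation: unquoted passwords containing shell metacharacters ($, !, ;, spaces) reach imapsync mangled. A sketch with placeholder values; --passfile1/--passfile2 are imapsync's options for keeping passwords out of the shell entirely:

      # single-quote the passwords so the shell passes them through verbatim
      imapsync --host1 myserver.com --user1 '[email protected]' --password1 'p@ss$w0rd!' -ssl1 \
               --host2 imap.gmail.com --user2 '[email protected]' --password2 'p@ss$w0rd!' -ssl2 -authmech2 PLAIN

      # or avoid the shell altogether
      echo 'p@ss$w0rd!' > /tmp/pw2; chmod 600 /tmp/pw2
      imapsync ... --passfile2 /tmp/pw2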

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5, with the latest 6.3.0r05 firmware, routing to the internet from a subinterface I created on bgroup0 as vlan2 (bgroup0.1 in the "wifi" zone). When connected on the default vlan I get on the internet just fine; when I switch to vlan2 I'm unable to get to the internet. I am able to get the correct IP address (10.150.0.0/24) from DHCP, get to the Juniper management page, and so on, but nothing past the firewall: I can't ping 4.2.2.2 or the internet gateway. Even with logging enabled on the wifi-to-untrust policy, it does show the attempts (they are all timeouts).
    172.31.16.0/24 is the untrusted lan; it's already nat'ed but works fine for testing. I can ping this IP from the default vlan but not from vlan2.
    192.168.1.0/24 is the trusted main lan.
    10.150.0.0/24 is the wifi isolated lan on vlan2.
    The idea is to set up an AP with lan and guest access (the AP supports multiple ssid's on different vlans). I know I can use different ports on the Juniper for the wifi lan and let their ProCurve switch do the vlan separation, but I have never used vlans on a Juniper firewall and I would like to try it out this way. Here is the complete config file:

      unset key protection enable
      set clock timezone -5
      set vrouter trust-vr sharable
      set vrouter "untrust-vr"
      exit
      set vrouter "trust-vr"
      unset auto-route-export
      exit
      set alg appleichat enable
      unset alg appleichat re-assembly enable
      set alg sctp enable
      set auth-server "Local" id 0
      set auth-server "Local" server-name "Local"
      set auth default auth server "Local"
      set auth radius accounting port 1646
      set admin name "netscreen"
      set admin password "xxxxxxxxxxxxxxxx"
      set admin auth web timeout 10
      set admin auth dial-in timeout 3
      set admin auth server "Local"
      set admin format dos
      set zone "Trust" vrouter "trust-vr"
      set zone "Untrust" vrouter "trust-vr"
      set zone "DMZ" vrouter "trust-vr"
      set zone "VLAN" vrouter "trust-vr"
      set zone id 100 "Wifi"
      set zone "Untrust-Tun" vrouter "trust-vr"
      set zone "Trust" tcp-rst
      set zone "Untrust" block
      unset zone "Untrust" tcp-rst
      set zone "MGT" block
      unset zone "V1-Trust" tcp-rst
      unset zone "V1-Untrust" tcp-rst
      set zone "DMZ" tcp-rst
      unset zone "V1-DMZ" tcp-rst
      unset zone "VLAN" tcp-rst
      unset zone "Wifi" tcp-rst
      set zone "Untrust" screen tear-drop
      set zone "Untrust" screen syn-flood
      set zone "Untrust" screen ping-death
      set zone "Untrust" screen ip-filter-src
      set zone "Untrust" screen land
      set zone "V1-Untrust" screen tear-drop
      set zone "V1-Untrust" screen syn-flood
      set zone "V1-Untrust" screen ping-death
      set zone "V1-Untrust" screen ip-filter-src
      set zone "V1-Untrust" screen land
      set interface "ethernet0/0" zone "Untrust"
      set interface "ethernet0/1" zone "Untrust"
      set interface "bgroup0" zone "Trust"
      set interface "bgroup0.1" tag 2 zone "Wifi"
      set interface "bgroup1" zone "DMZ"
      set interface bgroup0 port ethernet0/2
      set interface bgroup0 port ethernet0/3
      set interface bgroup0 port ethernet0/4
      set interface bgroup0 port ethernet0/5
      set interface bgroup0 port ethernet0/6
      unset interface vlan1 ip
      set interface ethernet0/0 ip 172.31.16.243/24
      set interface ethernet0/0 route
      set interface bgroup0 ip 192.168.1.1/24
      set interface bgroup0 nat
      set interface bgroup0.1 ip 10.150.0.1/24
      set interface bgroup0.1 nat
      set interface bgroup0.1 mtu 1500
      unset interface vlan1 bypass-others-ipsec
      unset interface vlan1 bypass-non-ip
      set interface ethernet0/0 ip manageable
      set interface bgroup0 ip manageable
      set interface bgroup0.1 ip manageable
      set interface ethernet0/0 manage ping
      set interface ethernet0/1 manage ping
      set interface bgroup0.1 manage ping
      set interface bgroup0.1 manage telnet
      set interface bgroup0.1 manage web
      unset interface bgroup1 manage ping
      set interface bgroup0 dhcp server service
      set interface bgroup0.1 dhcp server service
      set interface bgroup0 dhcp server auto
      set interface bgroup0.1 dhcp server enable
      set interface bgroup0 dhcp server option gateway 192.168.1.1
      set interface bgroup0 dhcp server option netmask 255.255.255.0
      set interface bgroup0 dhcp server option dns1 8.8.8.8
      set interface bgroup0.1 dhcp server option lease 1440
      set interface bgroup0.1 dhcp server option gateway 10.150.0.1
      set interface bgroup0.1 dhcp server option netmask 255.255.255.0
      set interface bgroup0.1 dhcp server option dns1 8.8.8.8
      set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126
      set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100
      unset interface bgroup0 dhcp server config next-server-ip
      unset interface bgroup0.1 dhcp server config next-server-ip
      set interface "serial0/0" modem settings "USR" init "AT&F"
      set interface "serial0/0" modem settings "USR" active
      set interface "serial0/0" modem speed 115200
      set interface "serial0/0" modem retry 3
      set interface "serial0/0" modem interval 10
      set interface "serial0/0" modem idle-time 10
      set flow tcp-mss
      unset flow no-tcp-seq-check
      set flow tcp-syn-check
      unset flow tcp-syn-bit-check
      set flow reverse-route clear-text prefer
      set flow reverse-route tunnel always
      set pki authority default scep mode "auto"
      set pki x509 default cert-path partial
      set crypto-policy
      exit
      set ike respond-bad-spi 1
      set ike ikev2 ike-sa-soft-lifetime 60
      unset ike ikeid-enumeration
      unset ike dos-protection
      unset ipsec access-session enable
      set ipsec access-session maximum 5000
      set ipsec access-session upper-threshold 0
      set ipsec access-session lower-threshold 0
      set ipsec access-session dead-p2-sa-timeout 0
      unset ipsec access-session log-error
      unset ipsec access-session info-exch-connected
      unset ipsec access-session use-error-log
      set url protocol websense
      exit
      set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit
      set policy id 1
      exit
      set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log
      set policy id 2
      exit
      set nsmgmt bulkcli reboot-timeout 60
      set ssh version v2
      set config lock timeout 5
      unset license-key auto-update
      set telnet client enable
      set snmp port listen 161
      set snmp port trap 162
      set snmpv3 local-engine id "0162122009006149"
      set vrouter "untrust-vr"
      exit
      set vrouter "trust-vr"
      unset add-default-route
      set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1
      exit
      set vrouter "untrust-vr"
      exit
      set vrouter "trust-vr"
      exit
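    If it helps, the standard ScreenOS way to see where the wifi packets die is the flow debug, sketched here; the client IP is an example from the vlan2 DHCP pool, and ssg-> is just the CLI prompt:

      ssg-> set ffilter src-ip 10.150.0.50      # capture only the vlan2 test client
      ssg-> debug flow basic
            (reproduce the failure: ping 4.2.2.2 from the wifi client)
      ssg-> undebug all
      ssg-> get dbuf stream                     # per-packet route lookup and policy decision
      ssg-> get session src-ip 10.150.0.50      # was a session created at all?
      ssg-> clear dbuf
      ssg-> unset ffilter

    Roughly speaking, if get dbuf stream shows the packet failing its route lookup, the problem is on the vrouter/NAT side; if it matches policy 2 and leaves, the drop is beyond the firewall.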

    Read the article

  • Squid external_acl_type Cannot run process

    - by Alex Rezistorman
    I want to restrict uploading for a group of users via squid, so I've chosen to use external_acl_type, but after a reload squid returns an error:

      WARNING: Cannot run '/usr/local/etc/squid/lists/newupload.sh' process.

    The permissions of newupload.sh and squid are the same, and newupload.sh is executable. How can I solve this problem? Thanks in advance.
    newupload.sh:

      #!/bin/sh
      while read line; do
          set -- $line
          length=$1
          limit=$2
          if [ -z "$length" ] || [ "$length" -le "$2" ]; then
              echo OK
          else
              echo ERR
          fi
      done

    Lines from squid.conf:

      external_acl_type request_body protocol=2.5 %{Content-Lenght} /usr/local/etc/squid/lists/newupload.sh
      acl request_max_size external request_body 5000
      http_access allow users request_max_size

    Squid version:

      squid -v
      Squid Cache: Version 3.2.13
      configure options: '--with-default-user=squid' '--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' '--with-swapdir=/var/squid/cache/squid' '--enable-auth' '--enable-build-info' '--enable-loadable-modules' '--enable-removal-policies=lru heap' '--disable-epoll' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-translation' '--enable-auth-basic=PAM' '--disable-auth-digest' '--enable-external-acl-helpers= kerberos_ldap_group' '--enable-auth-negotiate=kerberos' '--disable-auth-ntlm' '--without-pthreads' '--enable-storeio=diskd ufs' '--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped' '--enable-log-daemon-helpers=file' '--disable-url-rewrite-helpers' '--disable-ipv6' '--disable-snmp' '--disable-htcp' '--disable-forw-via-db' '--disable-cache-digests' '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups' '--disable-eui' '--disable-ipfw-transparent' '--disable-pf-transparent' '--disable-ipf-transparent' '--disable-follow-x-forwarded-for' '--disable-ecap' '--disable-icap-client' '--disable-esi' '--enable-kqueue' '--with-large-files' '--enable-cachemgr-hostname=proxy.adir.vbr.ua' '--with-filedescriptors=131072' '--disable-auto-locale' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'LDFLAGS= -L/usr/local/lib' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'CPP=cpp' --enable-ltdl-convenience

    Related post: Restrict uploading for groups in squid
    http://squid-web-proxy-cache.1019090.n4.nabble.com/flexible-managing-of-request-body-max-size-with-squid-2-5-STABLE12-td1022653.html
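    Two checks that commonly explain this exact warning, sketched for this path (squid's effective user is assumed to be "squid", per the --with-default-user build option): the helper is spawned by the squid user, so every directory along the path needs execute permission for that user, and the shebang interpreter must be resolvable:

      # does the squid user have execute rights along the whole path?
      ls -l  /usr/local/etc/squid/lists/newupload.sh
      ls -ld /usr/local/etc/squid /usr/local/etc/squid/lists

      # run the helper by hand as the squid user, feeding "length limit" pairs;
      # it should answer OK or ERR per input line
      echo "1024 5000" | su -m squid -c /usr/local/etc/squid/lists/newupload.sh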

    Read the article

  • local msmtp and ovh hosting

    - by klez
    I have my personal email hosted on OVH (personal hosting plan) and I'm not able to send mail using msmtp. Here's a typical session:

      ignoring system configuration file /etc/msmtprc: File o directory non esistente (no such file or directory)
      loaded user configuration file /home/klez/.msmtprc
      using account default from /home/klez/.msmtprc
      host = ssl0.ovh.net
      port = 465
      timeout = off
      protocol = smtp
      domain = localhost
      auth = choose
      user = federicoculloca%xxxxxxx
      password = *
      ntlmdomain = (not set)
      tls = on
      tls_starttls = off
      tls_trust_file = (not set)
      tls_crl_file = (not set)
      tls_fingerprint = (not set)
      tls_key_file = (not set)
      tls_cert_file = (not set)
      tls_certcheck = off
      tls_force_sslv3 = off
      tls_min_dh_prime_bits = (not set)
      tls_priorities = (not set)
      auto_from = off
      maildomain = (not set)
      from = federicoculloca@xxxxxxxx
      dsn_notify = (not set)
      dsn_return = (not set)
      keepbcc = off
      logfile = (not set)
      syslog = (not set)
      reading recipients from the command line
      TLS certificate information:
      Owner:
          Common Name: ssl0.ovh.net
          Organizational unit: Domain Control Validated
      Issuer:
          Common Name: OVH Secure Certification Authority
          Organization: OVH SAS
          Organizational unit: Low Assurance
          Country: FR
      Validity:
          Activation time: lun 31 gen 2011 01:00:00 CET
          Expiration time: mer 15 feb 2012 00:59:59 CET
      Fingerprints:
          SHA1: F9:DC:41:F9:A2:38:51:9B:56:E4:98:E6:CD:81:31:42:E6:0E:26:6D
          MD5: FC:EC:F3:8F:28:E4:7E:28:99:89:E6:BB:C9:DF:71:CE
      <-- 220 ns0.ovh.net ssl0.ovh.net. You connect to mail427.ha.ovh.net ESMTP
      --> EHLO localhost
      <-- 250-ssl0.ovh.net. You connect to mail427.ha.ovh.net
      <-- 250-AUTH LOGIN PLAIN
      <-- 250-AUTH=LOGIN PLAIN
      <-- 250-PIPELINING
      <-- 250-8BITMIME
      <-- 250 SIZE 109000000
      --> AUTH PLAIN xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      <-- 235 ok, go ahead (#2.0.0)
      --> MAIL FROM:<federicoculloca@xxxxx>
      --> RCPT TO:<[email protected]>
      --> DATA
      <-- 250 ok
      <-- 250 ok
      <-- 354 go ahead
      --> hello world
      --> .
      <-- 554 mail server permanently rejected message (#5.3.0)

    And my configuration:

      # ~/.msmtp
      # Mostly from Peter Garrett's examples
      # https://lists.ubuntu.com/archives/ubuntu-users/2007-September/122698.html
      # Accounts from Scott Robbins' `A Quick Guide to Mutt'
      # http://home.nyc.rr.com/computertaijutsu/mutt.html
      account xxxxx
      host ssl0.ovh.net
      from federicoculloca@xxxxxx
      auth on
      user federicoculloca%xxxxxx
      password xxxxxx
      tls on
      tls_certcheck off
      tls_starttls off

    Any ideas?

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >