Search Results

Search found 5099 results on 204 pages for 'distribution groups'.

Page 38/204 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • Grant a user access to directories shared by root (mod: 770)

    - by Paul Dinham
    I want to grant a user (username: paul) access to all directories shared by root with mode 770. I do it this way: groups root (this lists the groups the root user belongs to), then usermod -a -G group1 paul, usermod -a -G group2 paul, usermod -a -G group3 paul, and so on. All of 'group1', 'group2', 'group3' appear in root's group list. However, after adding 'paul' to all of the groups above, he still cannot write to directories shared by the root user with mode 770. Did I do something wrong?
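
    A minimal sketch of the usual fix, assuming the shared directories are group-owned by root's supplementary groups (the group names below are the examples from the question): supplementary group changes only apply to sessions started after usermod runs, so paul has to log out and back in (or use newgrp) before the 770 permissions take effect.

        # add paul to every group that owns a shared directory
        sudo usermod -a -G group1,group2,group3 paul
        # verify the membership was recorded
        id paul
        # existing sessions keep the old group list; start a fresh login shell
        su - paul
        # or pick up one new group in the current shell without re-logging in
        newgrp group1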

    Read the article

  • Remote installation and configuration software, preferably open source

    - by Brimstedt
    Hello, I'm looking for some remote-installation software. I've looked briefly at Unattended, opsi and a bunch of other tools, but there is nowhere near enough time to evaluate them all, and they are rather complex to set up, so some insight would be very appreciated. This is foremost for Windows clients, but Linux support would be good. Something like apt-get would have been great. Requirements: simple to set up and use; set up groups of users (developers, management, sales, etc.); choose which software is installed for each group; add new software to a group and have it automatically installed on its clients; dependencies between software. Nice-to-have: Linux client support, OS-unattended installation. Thanks in advance.

    Read the article

  • use of [!NOTFOUND=return] in nsswitch.conf

    - by Chris Phillips
    Has anyone come across the use of this setting for the passwd and group entries in nsswitch.conf? Where I'm working I've been told it has been shown to help situations where a group exists both locally and in LDAP, which was causing issues for group memberships etc. However, this configuration seems to totally mess up nscd, which will be aware of the groups and all their members but will not flip the data around to say the user is a member of all its remote groups. Initially it seems, given a fully available environment, to be exactly the same as [FOUND=return], which is an implicit default between stages anyway. However, apparently a lengthy ticket with Red Hat resulted in the recommended use of that configuration.
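
    For reference, a hedged illustration of the configuration being discussed (the "ldap" source is an assumption; it could equally be sss or nis):

        # /etc/nsswitch.conf -- illustrative only
        passwd: files [!NOTFOUND=return] ldap
        group:  files [!NOTFOUND=return] ldap
        # "!NOTFOUND=return" means: for any status other than NOTFOUND from
        # "files", stop here; fall through to ldap only when the entry is
        # absent locally.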

    Read the article

  • rsync creating thousands of .DS_Store files from mounted volume

    - by daniel Crabbe
    I've been using rsync on OS X to sync all our website admins. It was working fine until the OS X 10.6.3 update! Now it creates thousands of empty (0-kb) folders. It only does it when synching to a mounted network drive (which we need to do) as when I sync to my local drive it works as usual! I've tried excludes which don't seem to be working... also tried a different version of rsync so it's an OS X issue. echo "" echo "~*~*~*~*~*~*~*~*~*~*~*~*~*" echo " SYNCING up KINEMASTIK" echo "~*~*~*~*~*~*~*~*~*~*~*~*~*" /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/ echo "" echo "~*~*~*~*~*~*~*~*~*~*~*~*~*" echo " SYNCING up CHRIS BROOKS YOURADMIN" echo "~*~*~*~*~*~*~*~*~*~*~*~*~*" /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/ Has anyone experienced the same problem?
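
    A hedged sketch of the usual workaround (paths and patterns are assumptions): exclude the OS X metadata files explicitly and reference the exclude file by an absolute path, since a relative --exclude-from is resolved against whatever directory the script happens to run from.

        # build an exclude list for Finder/AppleDouble metadata
        printf '%s\n' '.DS_Store' '._*' '.Spotlight-V100' '.Trashes' > /Users/dan/exclude.txt
        # same sync as above, with the excludes applied via an absolute path
        /usr/local/bin/rsync -aNHAXv --progress \
            --exclude-from=/Users/dan/exclude.txt \
            /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ \
            /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/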

    Read the article

  • Why should I use Firewall Zones and not just Address Objects?

    - by SRobertJames
    I appreciate Firewall Address Objects and Address Groups - they simplify management by letting me give a name to a group of addresses. But I don't understand what Firewall Zones (LAN, WAN, DMZ, etc.) do for me over Address Groups. I know all firewalls have them, so there must be a good reason. But what do I gain by stating a rule applies to all traffic from LAN Zone to WAN Zone which comes from LAN Address Group to WAN Address Group? Why not just mention the Address Groups?

    Read the article

  • Using LDAP Attributes to improve performance for large directories

    - by Vineet Bhatia
    We have an LDAP directory with more than 50,000 users in it. The LDAP vendor suggests a maximum of 40,000 users per LDAP group. We have a number of inactive users and those are being purged, but what if we don't get below 40,000 users? Would switching to a multivalued attribute at the user-record level, instead of using LDAP groups, yield better performance during authentication, adding new users, etc.? I know most server software (portals, application servers, etc.) uses LDAP groups. But we have a standardized web service interface for access control instead of relying on server software to map LDAP groups to security roles. Each application uses this common "access control web service". Security roles are used within each application to build the fine-grained ACLs used within each enterprise application.
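
    A hedged sketch of the two lookup styles being weighed (base DNs and the appRole attribute are hypothetical): with groups, an authorization check reads one potentially very large group entry, while a multivalued attribute keeps the roles on the user entry that authentication already fetches.

        # group-centric: membership lives in a single (large) group entry
        ldapsearch -x -b "cn=app-users,ou=groups,dc=example,dc=com" "(objectClass=*)" member
        # attribute-centric: roles are a multivalued attribute on the user entry
        ldapsearch -x -b "uid=jdoe,ou=people,dc=example,dc=com" "(objectClass=*)" appRole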

    Read the article

  • Sendmail - preventing aliased users from receiving multiple copies of the same email

    - by MikeQ
    Is there any way to prevent a user from receiving multiple copies of the same email if it is sent both to an alias for the user and to the user directly? For example, suppose bob.smith is included in the alias list for developers (@company.com). If I send the email to both the user and an alias for the user: To: [email protected], [email protected] ... is there any way to prevent user Bob from receiving the same email twice? EDIT: I've observed that if Bob is a member of two different alias groups, and I send an email just to those two groups (not the user directly), sendmail correctly expands the groups and removes the duplicate. The behavior I want to fix occurs when you send directly to the user AND to a group they belong to.
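
    For context, a hedged sketch of the setup described (member names are hypothetical): sendmail removes duplicates within a single alias expansion, but an address listed directly in To: and the same address produced by expanding the developers alias count as two separate deliveries.

        # /etc/aliases -- illustrative only
        developers: bob.smith, jane.doe, sam.lee
        # rebuild the alias database after editing
        sudo newaliases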

    Read the article

  • SQL 2000 and group names

    - by Nasa
    I have a SQL 2000 server with several databases; under the Users section of each database object I have some NT 4.0 groups. These groups were migrated over to Active Directory some time ago using ADMT with SID history. The original source domain groups have since been deleted. The access shown is olddomain\groupname. I don't know why; if they were NTFS permissions they would update automatically to target\groupname. The users in the AD domain still have access to the database, as they are members of the migrated group (target\groupname). I was wondering: 1) Why does the old group (source\groupname) still show up when it doesn't exist anymore, yet access is still granted to the target group? 2) Is there any easy way to update the group name from source\groupname to target\groupname? Thanks for any help.

    Read the article

  • NTFS Permission Structure to allow Traversal but no Modification except in Leaf Nodes?

    - by pepoluan
    Assume there's this folder structure: D:\ --+-- Acctg --+-- Payable | +-- Receivable | +-- Fin --+-- Inv | +-- Tax | +-- Treas | +-- Mrktg --+-- Ads +-- Promo Users are not allowed to change the structure, but they are free to create & delete files & folders in the leaf nodes (i.e., the rightmost folders). AGDLP principle said that I should assign permissions on the above folders to DL-Groups. Let's say I have a G-Group of users, G-Accounting-Payable, containing users that have access to the D:\Acctg\Payable folder. The way I see it, I have two strategies: - Strategy 1 Create three DL-Groups and assign them permissions: DL-D-Acctg_T -- allowed traversal of D:\Acctg folder DL-D-Acctg-Pay_LF -- allowed listing of D:\Acctg\Payable folder contents DL-D-Acctg-Pay__RW -- allowed full permissions to the contents of D:\Acctg\Payable folder Add G-Accounting-Payable as member to all the above DL-Groups - Strategy 2 Create just one DL-Group DL-D-Acctg-Pay__RW, and assign it the proper permissions for each level of the folder. Then, add G-Accounting-Payable as member to that DL-Group. - Which strategy is the Recommended Best Practice, and why?

    Read the article

  • Windows Server 2008: How to tell if a user is a 'local' user or a 'domain' user

    - by David
    I'm a developer, not a server admin, so please bear with me! I've been tasked with checking the installation of some software on a Windows Server 2008 R2 machine in the cloud, within two scenarios: There is no domain, the software will use local users and groups for authentication There is a domain, the software will use domain users and groups for authentication I've done part 1, but I'm puzzled about part 2. I've just installed the Active Directory Domain Services role on the server, so now I have a domain of one computer. When I look in Active Directory Users and Computers, I see all my original local users and groups. Have they now been 'promoted' to domain users? Or do I not have any domain users yet? Is there a way I can tell the difference between domain users and local users now? Thanks

    Read the article

  • Why is an S3 website redirect location not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the Website Redirect Location metadata to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution that I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html So I created an empty file named solution inside my bucket with the correct metadata: Website Redirect Location = /product.html The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly from the S3 domain. I have also set up a CloudFront distribution based on the website bucket. But when I try to access through my distribution, the redirect does not work, i.e.: http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is: how do I keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO? Thanks
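
    A hedged sketch using the current AWS CLI (the bucket name follows the question; verify the flags against the docs): the Website Redirect Location metadata is only honored by the S3 website endpoint, so the CloudFront origin needs to be the bucket's website hostname (mysite.s3-website-us-east-1.amazonaws.com) added as a custom origin, not the default S3 REST origin.

        # attach the redirect metadata to the placeholder object
        aws s3api put-object \
            --bucket mysite \
            --key solution \
            --website-redirect-location /product.html
        # confirm the website endpoint answers with a 301 before testing CloudFront
        curl -I http://mysite.s3-website-us-east-1.amazonaws.com/solution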

    Read the article

  • SCCM 2012: How to properly update the content of an application?

    - by Omnomnomnom
    I recently set up a new SCCM 2012 environment at my workplace and now we are creating our applications for distribution. Some applications are set up using a script. During testing, something was not right and the content of the application needed to be changed, but the distribution point keeps serving the old content to the clients. I was wondering what the proper procedure is for updating the DPs when the content of an application changes. I have tried redistributing to the distribution points and deleting old revisions, but to no avail.

    Read the article

  • The Maven assembly plugin is not using the finalName when installing with attach=true?

    - by Roland Wiesemann
    I have configured following assembly: <build> <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <executions> <execution> <id>${project.name}-test-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>${project.name}-test</finalName> <filters> <filter>src/assemble/test/distribution.properties</filter> </filters> <descriptors> <descriptor>src/assemble/distribution.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> <execution> <id>${project.name}-prod-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>${project.name}-prod</finalName> <filters> <filter>src/assemble/prod/distribution.properties</filter> </filters> <descriptors> <descriptor>src/assemble/distribution.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> </executions> </plugin> </plugins> </build> This produced two zip-files: distribution-prod.zip distribution-test.zip My expectation for the property attach=true is, that the two zip-files are installed with the name as given in property finalName. But the result is, only one file is installed (attached) to the artifact. The maven protocol is: distrib-0.1-SNAPSHOT.zip distrib-0.1-SNAPSHOT.zip The plugin is using the artifact-id instead of property finalName! Is this a bug? The last installation is overwriting the first one. What can i do to install this two files with different names? Thanks for your investigation. Roland

    Read the article

  • ANT: ways to include libraries and license issues

    - by Eric Tobias
    I have been trying to use Ant to compile and ready a project for distribution. I have encountered several problems along the way that I have been finally able to solve but the solution leaves me very unsatisfied. First, let me explain the set-up of the project and its dependencies. I have a project, lets call it Primary which depends on a couple of libraries such as the fantastic Guava. It also depends on another project of mine, lets call it Secondary. The Secondary project also features some dependencies, for example, JDOM2. I have referenced the Jar I build with Ant in Primary. Let me give you the interesting bits of the build.xml so you can get a picture of what I am doing: <project name="Primary" default="all" basedir="."> <property name='build' location='dist' /> <property name='application.version' value='1.0'/> <property name='application.name' value='Primary'/> <property name='distribution' value='${application.name}-${application.version}'/> <path id='compile.classpath'> <fileset dir='libs'> <include name='*.jar'/> </fileset> </path> <target name='compile' description='Compile source files.'> <javac includeantruntime="false" srcdir="src" destdir="bin"> <classpath refid='compile.classpath'/> </javac> <target> <target name='jar' description='Create a jar file for distribution.' depends="compile"> <jar destfile='${build}/${distribution}.jar'> <fileset dir="bin"/> <zipgroupfileset dir="libs" includes="*.jar"/> </jar> </target> The Secodnary project's build.xml is nearly identical except that it features a manifest as it needs to run: <target name='jar' description='Create a jar file for distribution.' depends="compile"> <jar destfile='${dist}/${distribution}.jar' basedir="${build}" > <fileset dir="${build}"/> <zipgroupfileset dir="libs" includes="*.jar"/> <manifest> <attribute name="Main-Class" value="lu.tudor.ssi.kiss.climate.ClimateChange"/> </manifest> </jar> </target> After I got it working, trying for many hours to not include that dependencies as class files but as Jars, I don't have the time or insight to go back and try to figure out what I did wrong. Furthermore, I believe that including these libraries as class files is bad practice as it could give rise to licensing issues while not packaging them and merely including them in a directory along the build Jar would most probably not (And if it would you could choose not to distribute them yourself). I think my inability to correctly assemble the class path, I always received NoClassDefFoundError for classes or libraries in the Primary project when launching Second's Jar, is that I am not very experienced with Ant. Would I require to specify a class path for both projects? Specifying the class path as . should have allowed me to simply add all dependencies to the same folder as Secondary's Jar, should it not?
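
    A hedged sketch of running the Secondary jar without bundling its dependencies (the jar name and layout are assumptions): keep the third-party jars as separate files next to the application jar and put them on the classpath at launch, which avoids both the NoClassDefFoundError and the licensing concern about repackaging library code.

        # run the main class with the dependency jars left as-is in libs/
        java -cp "dist/Secondary-1.0.jar:libs/*" lu.tudor.ssi.kiss.climate.ClimateChange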

    Read the article

  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing. I have two entity classes, Customer and CustomerGroup. The relation between customer and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class. @Entity public class Customer { ... @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY) public Set<CustomerGroup> getCustomerGroups() { ... } ... public String getUuid() { return uuid; } ... } The customer reference in the customer groups class is annotated in the following way @Entity public class CustomerGroup { ... @ManyToMany public Set<Customer> getCustomers() { ... } ... public String getUuid() { return uuid; } ... } Note that both the CustomerGroup and Customer classes also have an UUID field. The UUID is a unique string (uniqueness is not forced in the datamodel, as you can see, it is handled as any other normal string). What I'm trying to do, is to fetch all customers which do not belong to any customer group OR the customer group is a "valid group". The validity of a customer group is defined with a list of valid UUIDs. I've created the following criteria query Criteria criteria = getSession().createCriteria(Customer.class); criteria.setProjection(Projections.countDistinct("uuid")); criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN); List<String> uuids = getValidUUIDs(); Criterion criterion = Restrictions.isNull("groups.uuid"); if (uuids != null && uuids.size() > 0) { criterion = Restrictions.or(criterion, Restrictions.in( "groups.uuid", uuids)); } criteria.add(criterion); When executing the query, it will result in the following SQL query select count(*) as y0_ from Customer this_ left outer join CustomerGroup_Customer customergr3_ on this_.id=customergr3_.customers_id left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id=groups1_.id where groups1_.uuid is null or groups1_.uuid in ( ?, ? ) The query is exactly what I wanted, but with one exception. Since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup will result in duplicated Customer objects. Hence the count(*) will give a false value, as it only counts how many results there are. I need to get the amount of unique customers and this I expected to achieve by using the Projections.countDistinct("uuid"); -projection. For some reason, as you can see, the projection will still result in a count(*) query instead of the expected count(distinct uuid). Replacing the projection countDistinct with just count("uuid") will result in the exactly same query. Am I doing something wrong or is this a bug? === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that the branch was executed. That branch used rowCount() instead of countDistinct().

    Read the article

  • "ldap_add: Naming violation (64)" error when configuring OpenLDAP

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but can not get it to work. When I try to use sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif I'm getting the following error: Enter LDAP Password: <entered 'secret' as password> adding new entry "dc=don,dc=com" ldap_add: Naming violation (64) additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry Again when I try to do the same, I'm getting the following error: root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif Enter LDAP Password: ldap_bind: Invalid credentials (49) Here is the backend.ldif file: # Load dynamic backend modules dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/lib/ldap olcModuleload: back_hdb # Database settings dn: olcDatabase=hdb,cn=config objectClass: olcDatabaseConfig objectClass: olcHdbConfig olcDatabase: {1}hdb olcSuffix: dc=don,dc=com olcDbDirectory: /var/lib/ldap olcRootDN: cn=admin,dc=don,dc=com olcRootPW: secret olcDbConfig: set_cachesize 0 2097152 0 olcDbConfig: set_lk_max_objects 1500 olcDbConfig: set_lk_max_locks 1500 olcDbConfig: set_lk_max_lockers 1500 olcDbIndex: objectClass eq olcLastMod: TRUE olcDbCheckpoint: 512 30 olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none olcAccess: to attrs=shadowLastChange by self write by * read olcAccess: to dn.base="" by * read olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read frontend.ldif file: # Create top-level object in domain dn: dc=don,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Example Organization dc: Example description: LDAP Example # Admin user. dn: cn=admin,dc=don,dc=com objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator userPassword: secret dn: ou=people,dc=don,dc=com objectClass: organizationalUnit ou: people dn: ou=groups,dc=don,dc=com objectClass: organizationalUnit ou: groups dn: uid=john,ou=people,dc=don,dc=com objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uid: john sn: Doe givenName: John cn: John Doe displayName: John Doe uidNumber: 1000 gidNumber: 10000 userPassword: password gecos: John Doe loginShell: /bin/bash homeDirectory: /home/john shadowExpire: -1 shadowFlag: 0 shadowWarning: 7 shadowMin: 8 shadowMax: 999999 shadowLastChange: 10877 mail: [email protected] postalCode: 31000 l: Toulouse o: Example mobile: +33 (0)6 xx xx xx xx homePhone: +33 (0)5 xx xx xx xx title: System Administrator postalAddress: initials: JD dn: cn=example,ou=groups,dc=don,dc=com objectClass: posixGroup cn: example gidNumber: 10000 Can anyone help me?
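
    One common cause of this exact naming violation when following the Ubuntu guide (a hedged reading, but it matches the LDIF shown) is that the top-level entry's dc attribute does not agree with its DN: the DN says dc=don,dc=com while the entry carries dc: Example. A corrected first entry of frontend.ldif would look roughly like this:

        # dc must match the leftmost RDN of the DN
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: Example Organization
        dc: don
        description: LDAP Example
        # then re-run:
        sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif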

    Read the article

  • ExaLogic 2.01 ppt & training & Installation check-list & tips & Web tier roadmap

    - by JuergenKress
    For partners with an ExaLogic opportunity or an ExaLogic demo center we plan to offer a hands-on ExaLogic bootcamp. If you want to attend, please make sure that you add your details to our wiki: ExaLogic checklist Exalogic Installation checklist 08.2012.pdf Exalogic Installation Tips and Tricks 08.2012.pdf Oracle FMW Web Tier Roadmap .pptx (Oracle and Partner confidential) ExaLogic Vision CVC 08.2012.pptx Online Launch Event: Introducing Oracle Exalogic Elastic Cloud Software 2.0 Webcast Replay For the complete ExaLogic partner kit, please visit the WebLogic Community Workspace (WebLogic Community membership required). Exalogic Distribution Rights Update Oracle has recently modified the criteria for obtaining Distribution Rights (resell rights) for Oracle Exadata Database Machine and Exalogic Elastic Cloud. Partners will no longer be required to be specialized in these products or in their underlying product sets in order to attain Distribution Rights. There are, however, competency criteria that partners must meet, and partners must still apply for the respective Distribution Rights. Please note that there are no changes to the criteria to become EXADATA or EXALOGIC Specialized. The list of criteria is available on the Sell tab of the Exalogic Elastic Cloud Knowledge Zone. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: ExaLogic,Exalogic training,education,training,Exalogic roadmap,exalogic installation,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are to using each method. (I'm sure there's other ways than the ones I've listed....) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic like boolean or numeric values, or they can be more complex such as a range of values or a list of database items to be checked/unchecked. Permissions can be set on the group-level, user-level, branch-level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.

    Read the article

  • Camunda BPM 7.0 on WebLogic 12c

    - by JuergenKress
    If we go on tour together with Oracle I think we have to have camunda BPM running on the Oracle WebLogic application server 12c (WLS in short). And one of our enterprise customers asked - so I invested a Sunday and got it running (okay - to be honest - I needed quite some help from our Java EE server guru Christian). In this blog post I give a step by step description how run camunda BPM on WLS. Please note that this is not an official distribution (which would include a sophisticated QA, a comprehensive documentation and a proper distribution) - it was my personal hobby. And I did not fire the whole test suite agains WLS - so there might be some issues. We will do the real productization as soon as we have a customer for it (let us know if this is interesting for you). Necessary steps After installing and starting up WLS (I used the zip distribution of WLS 12c by the way) you have to do: Add a datasource Add shared libraries Add a resource adapter (for the Job Executor using a proper WorkManager from WLS) Add an EAR starting up one process engine Add a WAR file containing the REST API Add other WAR files (e.g. cockpit) and your own process applications Actually that sounds more work to do than it is ;-) So let's get started: Add a datasource Add a datasource via the Administration Console (or any other convenient way on WLS - I should admit that personally I am not the WLS expert). Make sure that you target it on your server - this is not done by default: Read the full article here. For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Camunda,BPM,JavaEE7,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Asset and Work Management in Utilities: An Integrated Enterprise Success Story

    - by stephen.slade(at)oracle.com
    Jan 11 '11 Webcast: Utilities are turning to Oracle to deliver an integrated EAM platform that manages all of their assets from fleet to facilities and distribution to generation. Hear from solutions experts and from Sunflower Electric Power Corporation about how an integrated enterprise asset and work management system helped them deliver bottom line results Do you have different work management systems for generation, distribution, and transmission? Fleet maintenance? Facilities? Are you on the latest release of these products? Have you considered your options when the product is no longer supported? Do you struggle with integration and keeping the various systems "in balance"? Do you have trouble retrieving data from these disparate systems and getting an enterprise view of asset and work management operations? Utilities are challenged to better manage information on generation, transmission and distribution assets. Point solutions for Enterprise Asset Management (EAM) and Computerized Maintenance Management Systems (CMMS) are often effective as departmental solutions but have limited ability to deliver an enterprise solution with accessible business intelligence. Date:  January 11, 2011 @ 10am PT/1pm ET EVITE:  http://www.oracle.com/us/dm/h2fy11/63025-wwmk10040611mpp054c003-se-197386.html Register: HERE

    Read the article

  • Pricing and partner best practices update now available

    - by swalker
    Click here to access the partner best practices update of October 25, 2011 * (PDF) What does the October 25 pricing and partner best practices update contain? Pricing and licensing Exalogic and SPARC SuperCluster Update Oracle Update concerning technologies Update concerning Oracle Fusion Applications Update concerning the Oracle Fusion Cloud service Update concerning Oracle Application Integration Architecture Update concerning Siebel CRM applications Update concerning Oracle CRM On Demand Update concerning Business Process Outsourcing Notwithstanding any provision to the contrary in the partner distribution agreements, existing valid quotes issued by partners to end users before September 1, 2011 that are affected by the October 25, 2011 pricing and licensing update remain valid, and orders placed by partners under those quotes will be honored through November 30, 2011. Quotes issued by partners to end users on or after September 1, 2011 are subject to the terms and conditions of the partner distribution agreements. What do you need to do? Visit the Pricing and Partner Best Practices Updates page on the OPN portal regularly to learn more about these updates, get the latest pricing, licensing terms and partner best practices, and access the applicable resources. Additional information To access the pricing and partner best practices updates, and the archives of all pricing and partner best practices updates, click here. * Oracle Confidential: The information contained in this communication is intended for members of the Oracle PartnerNetwork program. It is Oracle confidential information and may only be used in connection with your distribution or implementation of Oracle products or services for end users or authorized Oracle partners.

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of conditions which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups up into Component/Data Object Groups and Standard Development Groups. The Component/Data Object Group would only work on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development Group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed due to the time needed to create the new, loosely-coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is due to the fact that the loosely-coupled components needed in order to add the new features will already be built and ready to incorporate into any new development feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved November 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved November 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • South Florida Stony Brook Alumni &amp; Friends Reception 2011

    - by Sam Abraham
    It’s official, we are kicking off a local South Florida Chapter for Stony Brook alumni and friends in the area to keep in touch.  Our first networking event will be taking place at Champps, Ft Lauderdale on November 17th, 6:00-8:00 PM. Admission is free and open for everyone, whether or not they are Stony Brook Alums. The team at Champps is offering us great specials (Happy hour deals, half-price appetizers,etc.) that we can choose to enjoy while we network and catch up. (Event Announcement: http://alumniandfriends.stonybrook.edu/page.aspx?pid=299&cid=1&ceid=171&cerid=0&cdt=11%2f17%2f2011) I look forward to share and revive my college experience which I believe was the starting line of my ongoing life journey. It would be also great to hear others’ take as they reflect on their experiences throughout their college years. I invite anyone interested in keeping in touch with friends and alums of Stony Brook to join our LinkedIn or Facebook groups.   The Stony Brook Alumni Association – South Florida Chapter LinkedIn Group: http://www.linkedin.com/groups?gid=3665306&trk=myg_ugrp_ovr The Stony Brook Alumni Association – South Florida Chapter Facebook Group: http://www.facebook.com/#!/groups/114760941910314/

    Read the article
