Search Results

Search found 29197 results on 1168 pages for 'oracle mysql training'.


  • Server specification recommendation

    - by foo
    To cut a long story short, I can't buy any single item (server/CPU/motherboard/RAM) that costs more than USD 330. However, I can combine them, meaning I can buy a CPU that costs USD 330 and a motherboard that costs USD 330. With this limitation, I can't buy a powerful 1U server, which would definitely cost me more than USD 330. With that in mind, I was hoping to build a powerful desktop PC to use as a database server. However, in my experience desktop PCs don't last very long; usually the motherboard just dies by itself after 1 or 2 years. So, what would you recommend I buy with this kind of budget? Every item must be <= USD 330. It will be used as a MySQL server. RAID would be nice. 1TB is pretty big for my data. I do not need an external graphics card (onboard would do just fine), mouse, keyboard, or monitor. Linux friendly. One ethernet port is good enough. It's important that the hardware is made of components that will last (at least 3 years or so). The server will be placed in an air-conditioned room, but good ventilation for the server is always preferred. I won't overclock it. An Intel processor is preferred. Thanks in advance.

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run on a copy of the database on another server we keep as a backup; however, this will not last much longer and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 gig of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts of the program: one, side a, inserts and updates most of the data; side b reads the data and does some minor updates. Currently our biggest day for side a is 800 users (of 40,000 users all year) entering data into the system, and our side b load is currently unknown, though we have a total of 1,000 clients. The system is most likely going to cap out at 5,000 side b clients, with about 300,000 side a users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that being said, should we get a server for each side of the app, side a being the master and side b being the slave, with any updates made on side b routed to side a? The question is whether I should get two of the following or one: 2 x Intel Nehalem Xeon E5520 2.26Ghz (8 Cores), 12GB DDR3 Memory, 500GB SATAII HDD, 100Mbps Port Speed. And naturally I would need a redundant backup, so it could potentially be four of them.
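
    If the master/slave split goes ahead, the application will need to send writes to side a (the master) and reads to side b (the slave). A very rough sketch of that routing in plain JDBC is below; the host names, database name and credentials are placeholders for illustration only, not anything from the original post.

        import java.sql.Connection;
        import java.sql.DriverManager;

        // Hypothetical helper: writes go to the master (side a), reads go to the slave (side b).
        public class ConnectionRouter {
            private static final String MASTER_URL = "jdbc:mysql://side-a-host:3306/appdb"; // placeholder
            private static final String SLAVE_URL  = "jdbc:mysql://side-b-host:3306/appdb"; // placeholder

            public static Connection forWrites() throws Exception {
                return DriverManager.getConnection(MASTER_URL, "appuser", "secret"); // placeholder credentials
            }

            public static Connection forReads() throws Exception {
                return DriverManager.getConnection(SLAVE_URL, "appuser", "secret"); // placeholder credentials
            }
        }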

    Read the article

  • please take a look at my server's ram usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512MB RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by just installing LAMP myself along with whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, the RAM on the VPS is suddenly almost completely used, with only about 100MB left. Please see this screenshot of htop, taken after only I had visited the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see there's only around 100MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository, and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to set memory_limit to 256M in php.ini. Is there anything I can do to free more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon, so I would run out of RAM with just a handful of visitors. Thanks for your advice.

    Read the article

  • Oracle Hibernate within NetBeans RCP

    - by jurnaltejo
    All, I have a problem with Hibernate on the NetBeans Platform 6.8. I have been searching around the internet but cannot find a suitable answer, so here is my story. I am using an Oracle database as the data source for my Hibernate entities, with the ojdbc14.jar driver. First I created the Hibernate entities, to be wrapped later in a NetBeans module. I tested the Hibernate connection configuration and everything just worked: I could connect to the Oracle database successfully and every Hibernate query worked well. Then I wrapped that Hibernate entity jar as a NetBeans module, created another module to wrap my ojdbc14.jar, and tested it. I'm using the Hibernate library dependency that is available on the NetBeans Platform (NetBeans 6.8), but unfortunately I get an Oracle SQL error saying "no suitable driver for [connection url]" when running the project. That's quite weird, since it doesn't happen when I test it without the NetBeans Platform. I thought it might be related to a NetBeans lazy loading issue, but I am not sure. Any ideas? Thanks for any help.
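
    This symptom is often a module classloading issue rather than a Hibernate issue: the module that opens the JDBC connection cannot see the classes in the ojdbc14.jar wrapper module, so DriverManager reports "no suitable driver". The usual first step is to declare the ojdbc wrapper module as a dependency of the module that uses Hibernate. If that is not enough, a rough workaround sketch (URL and credentials below are placeholders, not from the original post) is to load the driver through a classloader that can see ojdbc14.jar and connect through it directly, bypassing DriverManager's caller-classloader check:

        import java.sql.Connection;
        import java.sql.Driver;
        import java.util.Properties;

        // Rough sketch only: open the connection directly through the Oracle driver.
        public class DirectOracleConnection {
            public static Connection open() throws Exception {
                Class driverClass = Class.forName(
                        "oracle.jdbc.driver.OracleDriver",
                        true,
                        DirectOracleConnection.class.getClassLoader());
                Driver driver = (Driver) driverClass.newInstance();

                Properties props = new Properties();
                props.setProperty("user", "scott");      // placeholder
                props.setProperty("password", "tiger");  // placeholder
                return driver.connect("jdbc:oracle:thin:@//dbhost:1521/ORCL", props); // placeholder URL
            }
        }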

    Read the article

  • Team activity/game for illustrating design in a SCRUM environment

    - by njreed.myopenid.com
    I'm looking for a team building / training activity for some of my scrum teams. I want something that really illustrates the flexibility that the team has when implementing stories to define the scope and complexity of the feature themselves. Most of the teams have long-term waterfall experience and are used to having a well-defined specification. I'm looking for something that illustrates the need for the team to vary the scope of what they are building themselves, dependent on the time and resources available. I couldn't find anything at tastycupcakes.com and Google wasn't much help. Maybe someone has prepared something themselves they would care to share?

    Read the article

  • Ruby Gem LoadError mysql2/mysql2 required

    - by Kalli Dalli
    Im trying to setup my rails server on OSX 10.8 but I can't get my rails server to run. - Currently Im using a Zend Server with mysql 5.1. - I also have istalled brew and brew mysql. - And I used: gem install mysql2 -- --srcdir=/usr/local/mysql/include --with-opt-include=/usr/local/mysql/include the server worked already but now, I always get this loadError below. This is what my Gemfile says: ralphs-macbook-pro:admin-mockup zero$ bundle install Using rake (10.0.2) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.7) Using builder (3.0.4) Using activemodel (3.2.7) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.7) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.7) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.7) Using activeresource (3.2.7) Using annotate (2.5.0) Using coffee-script-source (1.4.0) Using execjs (1.4.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.7) Using coffee-rails (3.2.2) Using columnize (0.3.6) Using debugger-ruby_core_source (1.1.5) Using debugger-linecache (1.1.2) Using debugger (1.2.2) Using formtastic (2.2.1) Using haml (3.1.7) Using haml-rails (0.3.5) Using hirb (0.7.0) Using hpricot (0.8.6) Using jquery-rails (2.1.4) Using kgio (2.7.4) Using mysql2 (0.3.11) Using php_serialize (1.2) Using polyamorous (0.5.0) Using rabl (0.7.8) Using railroady (1.1.0) Using bundler (1.2.3) Using rails (3.2.7) Using raindrops (0.10.0) Using randumb (0.3.0) Using sass (3.2.3) Using sass-rails (3.2.5) Using squeel (1.0.13) Using uglifier (1.3.0) Using unicorn (4.4.0) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. And after starting rails s /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `require': cannot load such file -- mysql2/mysql2 (LoadError) from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `block (2 levels) in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `block in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler.rb:128:in `require' from /Users/zero/GitHub/admin-mockup/config/application.rb:7:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `block in <top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `tap' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' Thx for any help!

    Read the article

  • Why does Perl's DBI complain about "failed: ERROR OCIEnvNlsCreate" when I try to connect to Oracle 11g?

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple Perl script:

        failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at

    The script is as follows:

        #!/usr/local/bin/perl
        use strict;
        use DBI;

        if ($#ARGV < 3) {
            print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
            exit 0;
        }

        my ($user, $pwd, $sid, $port) = @ARGV;
        my $host = `hostname`;
        my $dbh;
        my $sth;
        my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";

        openDbConnection();
        closeDbConnection();

        sub openDbConnection() {
            $dbh = DBI->connect($dbname, $user, $pwd, { RaiseError => 1 })
                || die "Database connection not made: $DBI::errstr";
        }

        sub closeDbConnection() {
            #$sth->finish();
            $dbh->disconnect();
        }

    Anyone seen this problem before?

    Read the article

  • How do I store a string longer than 4000 characters in an Oracle Database using Java/JDBC?

    - by Ventrue
    I’m not sure how to use Java/JDBC to insert a very long string into an Oracle database. I have a String which is greater than 4000 characters, let’s say it’s 6000. I want to take this string and store it in an Oracle database. The way to do this seems to be with the CLOB datatype. Okay, so I declared the column as description CLOB. Now, when it comes time to actually insert the data, I have a prepared statement pstmt. It looks like pstmt = conn.prepareStatement(“INSERT INTO Table VALUES(?)”). So I want to use the method pstmt.setClob(). However, I don’t know how to create a Clob object with my String in it; there’s no constructor (presumably because it can be potentially much larger than available memory). How do I put my String into a Clob? Keep in mind I’m not a very experienced programmer; please try to keep the explanations as simple as possible. Efficiency, good practices, etc. are not a concern here, I just want the absolute easiest solution. I’d like to avoid downloading other packages if at all possible; right now I’m just using JDK 1.4 and what is labelled “ojdbc14.jar”. I've looked around a bit but I haven't been able to follow any of the explanations I've found. If you have a solution that doesn’t use Clobs, I’d be open to that as well, but it has to be one column.
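
    One commonly suggested approach that stays within plain JDBC (and works on JDK 1.4) is to stream the String into the CLOB column with setCharacterStream instead of building a Clob object by hand. The sketch below is illustrative only; the table and column names are made up, and it assumes the column is declared as a CLOB:

        import java.io.StringReader;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class ClobInsertSketch {
            // Illustrative only: "my_table" and "description" are placeholder names.
            public static void insertDescription(Connection conn, String description) throws Exception {
                PreparedStatement pstmt = conn.prepareStatement(
                        "INSERT INTO my_table (description) VALUES (?)");
                try {
                    // The driver reads the characters from the Reader and writes them into
                    // the CLOB column, so no java.sql.Clob object has to be created by hand.
                    pstmt.setCharacterStream(1, new StringReader(description), description.length());
                    pstmt.executeUpdate();
                } finally {
                    pstmt.close();
                }
            }
        }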

    Read the article

  • Unable to edit a database row from JSF

    - by user1924104
    Hi guys i have a data table in JSF which displays all of the contents of my database table, it displays it fine, i also have a delete function that can successfully delete from the database fine and updates the data table fine however when i try to update the database i get the error java.lang.IllegalArgumentException: Cannot convert richard.test.User@129d62a7 of type class richard.test.User to long below is the code that i have been using to delete the rows in the database that is working fine : public void delete(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "DELETE FROM user1 WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row deleted successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } i simply wanted to edit the above code so it would update the records instead of deleting them so i edited it to look like : public void editData(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "UPDATE user1 set name = '"+name+"', email = '"+ email +"', address = '"+address+"' WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row updated successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } and the xhmtl is : <p:dataTable id="dataTable" var="u" value="#{userBean.getUserList()}" paginator="true" rows="10" editable="true" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" rowsPerPageTemplate="5,10,15"> <p:column> <f:facet name="header"> User ID </f:facet> #{u.userID} </p:column> <p:column> <f:facet name="header"> Name </f:facet> #{u.name} </p:column> <p:column> <f:facet name="header"> Email </f:facet> #{u.email} </p:column> <p:column> <f:facet name="header"> Address </f:facet> #{u.address} </p:column> <p:column> <f:facet name="header"> Created Date </f:facet> #{u.created_date} </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Delete" action="#{user.delete(u.userID)}" /> </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Edit" action="#{user.editData(u)}" /> </p:column> currently when you press the edit button it will only update it with the same values as i haven't yet managed to get the datatable to be editable with the database, i have seen a few examples with an array list where the data table gets its values from but never a database so if you have any advice on this too it would be great thanks
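
    The "Cannot convert richard.test.User ... to long" message suggests the Edit button is passing the whole User object into a method that expects a long, so the first thing to try is passing u.userID, exactly as the Delete button already does. A rough sketch of that change is below; it keeps the bean properties (name, email, address) used in the original post but switches the SQL to bind parameters, which also avoids building the statement by string concatenation. Treat it as a sketch under those assumptions, not a verified answer.

        // In the page, pass the id rather than the whole object, mirroring the Delete button:
        //   <h:commandButton value="Edit" action="#{user.editData(u.userID)}" />

        // Sketch of editData() using bind parameters; name, email and address are the
        // bean properties already used in the original string-concatenated version.
        public void editData(long userID) {
            if (userID == 0) {
                return;
            }
            Connection con = null;
            PreparedStatement ps = null;
            try {
                Class.forName("com.mysql.jdbc.Driver");
                con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root");
                ps = con.prepareStatement(
                        "UPDATE user1 SET name = ?, email = ?, address = ? WHERE userId = ?");
                ps.setString(1, name);
                ps.setString(2, email);
                ps.setString(3, address);
                ps.setLong(4, userID);
                if (ps.executeUpdate() > 0) {
                    System.out.println("Row updated successfully");
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    if (ps != null) ps.close();
                    if (con != null) con.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }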

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this:

        create table Parent (
            parent_id varchar2(20) not null primary key
        );

        create table Child (
            child_id number not null primary key,
            parent_id varchar2(20) not null,
            constraint fk_parent_id foreign key (parent_id) references Parent (parent_id)
        );

    This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records:

        Parent:
        parent_id
        ---------
        ABC123

        Child:
        child_id  parent_id
        --------  ---------
        1         ABC123
        2         ABC123

    And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then enabling the constraint. Is there a better, single-step way to update this content?
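
    One alternative that keeps the whole change inside a single transaction, without disabling the constraint, is to recreate the foreign key as deferrable and postpone its checking until commit. The JDBC sketch below illustrates that idea; it assumes fk_parent_id has first been recreated as DEFERRABLE INITIALLY IMMEDIATE (the DDL in the comment) and is only a sketch of the approach.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        // Sketch only: assumes the constraint was recreated as deferrable, e.g.
        //   alter table Child drop constraint fk_parent_id;
        //   alter table Child add constraint fk_parent_id
        //       foreign key (parent_id) references Parent (parent_id)
        //       deferrable initially immediate;
        public class RenameParentId {
            public static void rename(Connection conn, String oldId, String newId) throws Exception {
                conn.setAutoCommit(false);
                Statement stmt = conn.createStatement();
                PreparedStatement updParent = conn.prepareStatement(
                        "UPDATE Parent SET parent_id = ? WHERE parent_id = ?");
                PreparedStatement updChild = conn.prepareStatement(
                        "UPDATE Child SET parent_id = ? WHERE parent_id = ?");
                try {
                    // Postpone foreign key checking until commit, so both tables
                    // can be changed inside one transaction.
                    stmt.execute("SET CONSTRAINT fk_parent_id DEFERRED");
                    updParent.setString(1, newId);
                    updParent.setString(2, oldId);
                    updParent.executeUpdate();
                    updChild.setString(1, newId);
                    updChild.setString(2, oldId);
                    updChild.executeUpdate();
                    conn.commit(); // the constraint is checked here
                } catch (Exception e) {
                    conn.rollback();
                    throw e;
                } finally {
                    updChild.close();
                    updParent.close();
                    stmt.close();
                }
            }
        }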

    Read the article

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation?

    Before:

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table1 t1, table2 t2, table3 t3
        WHERE t1.id = t2.id
          AND t1.version_id = t2.version_id
          AND t2.id = 123
          AND t1.version_id = t3.version_id
          AND t1.VERSION_NAME <> 'AA'
        order by t1.id

        Plan Cost: 831
        Elapsed Time: 00:00:21.40
        Number of Records: 40,717

    After:

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table2 t2, table3 t3, table1 t1
        WHERE t1.id = t2.id + 0
          AND t1.version_id = t2.version_id + 0
          AND t2.id = 123
          AND t1.version_id = t3.version_id + 0
          AND t1.VERSION_NAME || '' <> 'AA'
          AND t3.version_id = t2.version_id + 0
        order by t1.id

        Plan Cost: 686
        Elapsed Time: 00:00:00.95
        Number of Records: 40,717

    Questions:
    • Why does re-arranging the order of the tables in the FROM clause help?
    • Why does adding + 0 to the WHERE clause comparisons help?
    • Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?

    Read the article

  • deadlocks in the innodb status

    - by shantanuo
    Mysql sever has suddenly become very slow. There are no queries in the slow query log but the innodb status shows something like the following. Does it mean that it is due to innodb deadlock? if Yes, what is the way out? *************************** 1. row *************************** Status: ===================================== 100315 12:55:29 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 5 seconds ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 187532, signal count 188120 Mutex spin waits 0, rounds 61908654, OS waits 33052 RW-shared spins 89241, OS waits 41948; RW-excl spins 5857, OS waits 1557 ------------------------ LATEST DETECTED DEADLOCK ------------------------ 100315 12:43:02 *** (1) TRANSACTION: TRANSACTION 0 56996536, ACTIVE 0 sec, process no 5000, OS thread id 3031395216 starting index read mysql tables in use 1, locked 1 LOCK WAIT 6 lock struct(s), heap size 1024, undo log entries 6 MySQL thread id 994, query id 7699751 localhost application Searching rows for update UPDATE QUERY *** (1) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `dbII/tbl_ticket_block_master` trx id 0 56996536 lock_mode X locks r ec but not gap waiting Record lock, heap no 141 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837393936; asc 3587996;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 2; hex 6f6b; asc ok;; 4: le n 6; hex 0000035957fe; asc YW ;; 5: len 7; hex 000000401737c0; asc @ 7 ;; 6: SQL NULL; 7: SQL NULL; 8: SQL NULL; 9: len 3; hex 8fb46e; asc n;; 10: SQL NULL; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ;; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9ceebe ; asc K ;; 16: len 1; hex 30; asc 0;; 17: len 4; hex 80006ae8; asc j ;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;; *** (2) TRANSACTION: TRANSACTION 0 56996527, ACTIVE 0 sec, process no 5000, OS thread id 2961476496 fetching rows, thread declared inside InnoDB 237 mysql tables in use 3, locked 3 121 lock struct(s), heap size 11584, undo log entries 16 MySQL thread id 995, query id 7699729 localhost application Searching rows for update UPDATE QUERY *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `DBII/tbl_ticket_block_master` trx id 0 56996527 lock_mode X Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0 0: len 8; hex 73757072656d756d; asc supremum;; Record lock, heap no 2 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837343631; asc 3587461;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 6; hex 497373756564; asc Is sued;; 4: len 6; hex 000003425295; asc BR ;; 5: len 7; hex 8000000464012c; asc d ,;; 6: SQL NULL; 7: len 4; hex 80000058; asc X;; 8: len 1; hex 43; asc C;; 9: len 3; hex 8fb465; asc e;; 10: len 3; hex 8fb46d; asc m;; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ; ; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9b33a2; asc K 3 ;; 16: len 3; hex 756d67; asc umg;; 17: len 4; hex 80006744; asc gD;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;;
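
    If the slowdown is tied to lock contention rather than slow individual queries, a common mitigation on the application side is to treat a deadlock (MySQL error code 1213) as retryable and re-run the whole transaction, while also keeping transactions short and touching rows in a consistent order. The JDBC sketch below illustrates that retry pattern; the Work interface and all names are placeholders for whatever the application actually does inside the transaction.

        import java.sql.Connection;
        import java.sql.SQLException;

        // Sketch: retry a unit of work when InnoDB chooses it as the deadlock victim.
        public class DeadlockRetry {
            private static final int ER_LOCK_DEADLOCK = 1213; // MySQL error code for deadlocks

            public interface Work { void run(Connection conn) throws SQLException; }

            public static void runWithRetry(Connection conn, Work work, int maxAttempts) throws SQLException {
                for (int attempt = 1; ; attempt++) {
                    try {
                        conn.setAutoCommit(false);
                        work.run(conn);
                        conn.commit();
                        return;
                    } catch (SQLException e) {
                        conn.rollback();
                        if (e.getErrorCode() != ER_LOCK_DEADLOCK || attempt >= maxAttempts) {
                            throw e; // not a deadlock, or out of retries
                        }
                        // otherwise fall through and retry the whole transaction
                    }
                }
            }
        }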

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews at different stages of the process with candidates, because we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during a telephone interview. To help you prepare even better, we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. Important to keep in mind is that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:
    • Divide a piece of paper in 2 by drawing a line. On one side of the paper write a list of the requirements mentioned in the job description. On the other side list your qualities that fulfil those requirements. This will help you answer questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company’s business and can ask more in-depth questions.
    • Be prepared for the most used introduction question: “Tell me a bit about yourself”. Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of 5 good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). Interviewers use these questions to see how you deal with situations similar to those you might encounter in the job, as past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry, to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades.
    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • Enabling OUD Entry Cache for large static groups

    - by Sylvain Duloutre
    Oracle Unified Directory can take advantage of several caches to improve performance, especially the so-called database cache and the file system cache. In addition to that, it is possible to use an entry cache to cache LDAP entries. By default, the entry cache is not used. In specific deployments involving large static groups, it may be worth loading the group entries into the entry cache to speed up group membership and group-based aci evaluation. To do so, run the following commands:

    First, specify which entries should reside in the entry cache. In the command below, only entries matching the LDAP filter "(|(objctclass=groupOfNames)(objectclass=groupOfUniqueNames))" will be stored in the entry cache.

        dsconfig set-entry-cache-prop \
          --cache-name FIFO \
          --add include-filter:\(\|\(objctclass=groupOfNames\)\(objectclass=groupOfUniqueNames\)\) \
          --port <ADMIN_PORT> \
          --bindDN cn=Directory\ Manager \
          --bindPassword ****** \
          --no-prompt

    Then enable the entry cache:

        dsconfig set-entry-cache-prop \
          --cache-name FIFO \
          --set enabled:true \
          --port <ADMIN_PORT> \
          --bindDN cn=Directory\ Manager \
          --bindPassword ****** \
          --no-prompt

    In addition to that, you can control how much memory the entry cache can use. The current settings can be displayed as follows:

        oud@s96sec1d0-v3:/application/oud : dsconfig -X -n -p <ADMIN PORT> -D "cn=Directory Manager" -w <password> get-entry-cache-prop --cache-name FIFO
        Property           : Value(s)
        -------------------:-----------------------------------------------------------
        cache-level        : 1
        enabled            : true
        exclude-filter     : -
        include-filter     : (|(objctclass=groupOfNames)(objectclass=groupOfUniqueNames))
        max-entries        : 2147483647
        max-memory-percent : 90

    You can change the max-entries and max-memory-percent properties to control the entry cache size using the dsconfig set-entry-cache-prop command.

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically, a Custom Extension of some sort. So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process. Whereas OUM encourages more emphasis on gathering, and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants right? Well no…..the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite, in that by putting more emphasis on clearly understanding the nuances of functionality requirements early in the process, then the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram, than to change lines of code. So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn1 describes a “fully dressed Use Case”. It is worth mentioned at this point the highly scalable nature of OUM in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps. In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature, and depending on the solution being developing, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and Use Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Orientated approach, then the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, then the various elements of the Analysis Model may be represented within the Analysis Specification. If we now return to the original question of “Where’s my MD.050”. 
    The full answer would be:
    • Capture the functional requirements within a Use Case
    • Group related Use Cases into a Package
    • Create an Analysis Model for each Package
    • Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:
    • Capture the functional requirements within a Use Case
    • Optionally, group related Use Cases into a Package
    • Create an Analysis Specification (AN.100) for each Package

    1 Cockburn, A, 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: How hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Else they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contain pre-built ETL routines and user content. Much has been written before about the advantages of that. So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold: - There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier added to the source product's implementation project, than to add later; - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project, or it may even be impossible to fix afterwards. The Conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data. Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation. I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that, is bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation and it has to be managed properly -- just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implemenations are done like that. They just have to consider the points above.

    Read the article

  • Stretch in multiple components using af:popup, af:region, af:panelTabbed

    - by Arvinder Singh
    Case study: I have a pop-up(dialogue) that contains a region(separate taskflow) showing a tab. The contents of this tab is in a region having a separate taskflow. The jsff page of this taskflow contains a panelSplitter which in turn contains a table. In short the components are : pop-up(dialogue) --> region(separate taskflow) --> tab --> region(separate taskflow) --> panelSplitter --> table At times the tab is not displayed with 100% width or the table in panelSplitter is not 100% visible or the splitter is not visible. Maintaining the stretch for all the components is difficult......not any more!!! Below is the solution that you can make use of in many similar scenarios. I am mentioning the major code snippets affecting the stretch and alignment. pop-up: <af:popup> <af:dialog id="d2" type="none" title="" inlineStyle="width:1200px"> <af:region value="#{bindings.PriceChangePopupFlow1.regionModel}" id="r1"/> </af:dialog> The above region is a jsff containing multiple tabs. I am showing code for a single tab. I kept the tab in a panelStretchLayout. <af:panelStretchLayout id="psl1" topHeight="300px" styleClass="AFStretchWidth"> <af:panelTabbed id="pt1"> <af:showDetailItem text="PO Details" id="sdi1" stretchChildren="first" > <af:region value="#{bindings.PriceChangePurchaseOrderFlow1.regionModel}" id="r1" binding="# {pageFlowScope.priceChangePopupBean.poDetailsRegion}" /> This "region" displays a .jsff containing a table in a panelSplitter. <af:panelSplitter id="ps1"  orientation="horizontal" splitterPosition="700"> <f:facet name="first"> <af:panelHeader text="PurchaseOrder" id="ph1"> <af:table id="md1" rows="#{bindings.PurchaseOrderVO.rangeSize}" That's it!!! We're done... Note the stretchChildren="first" attribute in the af:showDetailItem. That does the trick for us. Oracle docs say the following about stretchChildren :  Valid Values: none, first The stretching behavior for children. Acceptable values include: "none": does not attempt to stretch any children (the default value and the value you need to use if you have more than a single child; also the value you need to use if the child does not support being stretched) "first": stretches the first child (not to be used if you have multiple children as such usage will produce unreliable results; also not to be used if the child does not support being stretched)

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple, he addressed some real use cases outside of internet and ad platforms. The following are my notes, since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale Doing Basel III liquidity analysis Private banking – transaction filtering to feed [relational] data marts Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere 360 Degree view of customers – become pro-active and look at events across lines of business. For example make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies] Data Mining Bypass data engineering [I interpret this as running a lot of a large data set rather than on samples] Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forget his password, or is it someone in some other country trying to guess passwords Trade quality analysis – do a batch analysis or all trades done and run them through an analysis or comparison pipeline One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this one of the best sessions I have seen in a long time. It discussed real use case, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: Determine the Strategy – Design a platform and evangelize this within the organization Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform. 
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020. In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets. "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart gird infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing." Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result. The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure in place to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from fewer outdated infrastructure and legacy systems, which is often cited by others as a difficult barrier to deploying new solutions. 
As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • Jersey non blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way - every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how that will work when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and it's really a wise thing to do - you do want to control your resources), but you'll get another, perhaps unpleasant, consequence: processing time will obviously increase. There are a few projects trying to deal with that problem, commonly known as async http clients. I didn't want to reinvent the wheel, so I decided to use AHC - Async Http Client, made by Jeanfrancois Arcand. There is also an interesting implementation from Apache - HttpAsyncClient - but it is still in "very early stages of development", and the others weren't in similar or better shape than AHC.

    How does this work? Non-blocking clients let users make the same asynchronous requests as the standard approach, but the implementation is different - threads are better utilized and don't spend most of their time idle. Simply described: when you make a request (send it over the network), you are waiting for a reply from the other side. And there comes the main advantage of the non-blocking approach - it uses these threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) are far better used.

    Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple dumb tests are showing a huge improvement in cases where lots of concurrent asynchronous requests are made in a short period.

    Last but not least - this module is still experimental, so if you don't like something, or if you have ideas for improvements or any feedback, feel free to comment on this blog post, send mail to [email protected] or contact me personally. All feedback is greatly appreciated!

    maven dependency (will be present in the java.net maven 2 repo by the end of the day):
    link: http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • Use Case Actors - Primary versus Secondary

    - by Dave Burke
    The Unified Modeling Language (UML1) defines an Actor (from UseCases) as: An actor specifies a role played by a user or any other system that interacts with the subject. In Alistair Cockburn’s book “Writing Effective Use Cases” (2) Actors are further defined as follows: Primary Actor: The primary actor of a use case is the stakeholder that calls on the system to deliver one of its services. It has a goal with respect to the system – one that can be satisfied by its operation. The primary actor is often, but not always, the actor who triggers the use case. Supporting Actors: A supporting actor in a use case in an external actor that provides a service to the system under design. It might be a high-speed printer, a web service, or humans that have to do some research and get back to us. In a 2006 article (3) Cockburn refined the definitions slightly to read: Primary Actors: The Actor(s) using the system to achieve a goal. The Use Case documents the interactions between the system and the actors to achieve the goal of the primary actor. Secondary Actors: Actors that the system needs assistance from to achieve the primary actor’s goal. Finally, the Oracle Unified Method (OUM) concurs with the UML definition of Actors, along with Cockburn’s refinement, but OUM also includes the following: Secondary actors may or may not have goals that they expect to be satisfied by the use case, the primary actor always has a goal, and the use case exists to satisfy the primary actor. Now that we are on the same “page”, let’s consider two examples: A bank loan officer wants to review a loan application from a customer, and part of the process involves a real-time credit rating check. Use Case Name: Review Loan Application Primary Actor: Loan Officer Secondary Actors: Credit Rating System A Human Resources manager wants to change the job code of an employee, and as part of the process, automatically notify several other departments within the company of the change. Use Case Name: Maintain Job Code Primary Actor: Human Resources Manager Secondary Actors: None The first example is quite straight forward; we need to define the Secondary Actor because without the “Credit Rating System” we cannot successfully complete the Use Case. In other words, the goal of the Primary Actor is to successfully complete the Loan Application, but they need the explicit “help” of the Secondary Actor (Credit Rating System) to achieve this goal. The second example is where people sometimes get confused. Within OUM we would not include the “other departments” as Secondary Actors and therefore not include them on the Use Case diagram for the following reasons: The other departments are not required for the successful completion of the Use Case We are not expecting any response from the other departments (at least within the bounds of the Use Case under discussion) Having said that, within the detail of the Use Case Specification Main Success Scenario, we would include something like: “The system sends a notification to the related department heads (ref. Business Rule BR101)” Now let’s consider one final example. A Procurement Manager wants to place a “bid” for some goods using an On-Line Trading Community (B2B version of eBay) Use Case Name: Create Bid Primary Actor: Procurement Manager Secondary Actors: On-Line Trading Community You might wonder why the Trading Community is listed as a Secondary Actor, i.e. 
if all we are going to do is place a bid for a specific quantity of goods at a given price and send that off to the Trading Community, then why would the Trading Community need to “assist” in that Use Case? Well, once again, it comes back to the “User Experience” and how we want to optimize that when we think about our Use Case, and ultimately, when the developer comes to assembling some code. In this final example, the Procurement Manager cannot successfully complete the “Create Bid” Use Case until they receive an affirmative confirmation back from the Trading Community that the Bid has been accepted. Therefore, the Trading Community must become a Secondary Actor and be referenced both on the Use Case diagram and Use Case Specification. Any astute readers who are wondering about the “single sitting” rule will have to wait for a follow-up Blog entry to find out how that consideration can be factored in!!! Happy Use Case writing! (1) OMG Unified Modeling LanguageTM (OMG UML), Superstructure Version 2.4.1 (2) Cockburn, A, 2000, Writing Effective Use Case, Addison-Wesley Professional; Edition 1 (3) Cockburn, A, 2006 “Use Case fundamentals” viewed 20th March 2012, http://alistair.cockburn.us/Use+case+fundamentals

    Read the article

  • 2013 Predictions for Retail

    - by David Dorf
    Its that time of year to roll out the predictions for next year.  I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions.  I'm not expecting anything earth-shattering this year; just continued maturation of several technologies that are finally taking hold. 1. Next day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries.  Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same.  Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight.  Retailers need be thinking about how to ship from physical stores. 2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach.  Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while and I'm betting it will find a home in certain verticals in the US too.  There's also the BYOD concept for employees.  Some retailers are considering issuing mobile devices at hiring along side the shirt and name-tag.  Employees become responsible for the hardware until they leave. 3. TV shopping -- Will Apple finally release a TV product in 2013?  Who knows?  But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping.  Comcast is partnering with Tivo to allow viewers to interact with ads with Paypal handing payment.  This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers. 4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013.  Facebook, Google, or Apple will test the boundaries of what the public is willing to accept.  It could involve a retailer using geo-location technology, or possibly video analytics.  And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted.  Privacy is a moving target. 5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow.  It was slow for credit cards, ATMs, and online billpay so why should it be any different for NFC?  Maybe, just maybe the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years.  Next year we'll continue to see incremental improvements from Isis, Google, and Paypal and a plethora of new startups, but don't toss your magstripe cards just yet. 6. In-store location -- The technologies for tracking people inside stores is really improving.  Retailers can track people using video cameras, infrared, and by the WiFi radios in mobile phones.  We're getting closer to the point where accuracy could be a shelf-facing, which will help retailers understand how people shop, where they spend time, and what displays attract them.  Expect CPG companies to get involved and partner with retailers, since the data benefits both parties.  Consumers will benefit by being directed right to the products they seek.  (In 2013 ARTS is forming a workteam to develop new standards in this area.) 7. 
M&A -- Looking back at 2012 there were some really big deals involving IBM, Oracle, JDA, and NCR and I expect that trend will likely continue as vendors add assets to bolster their portfolios.  Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop-vendors can minimize complexity and costs. Predictions from other sources: Independent Retailer Stores Magazine IDC Insights Mobile Commerce Daily

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF It can also be created using the dsconfig command line as described below. Creation of a new naming context consists of 3 steps:

    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base dn, e.g. o=example.

        dsconfig create-workflow-element \
          --set base-dn:o=example \
          --set enabled:true \
          --type db-local-backend \
          --element-name myNewDb \
          --hostname <your host> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base dn.

        dsconfig create-workflow \
          --set base-dn:o=example \
          --set enabled:true \
          --set workflow-element:myNewDb \
          --type generic \
          --workflow-name workFlowForMyNewDb \
          --hostname <your host name> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the Workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows.

        dsconfig set-network-group-prop \
          --group-name network-group \
          --add workflow:workFlowForMyNewDb \
          --hostname <your hostname> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    At that stage, it is possible to import entries to the new naming context o=example.

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
    With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season? Today's shoppers do not take their purchases lightly. They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family. This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations. With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping. Here are a few ideas to help with your commerce strategy this holiday season:

    CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS
    According to a November 2011 study commissioned by Oracle, "Channel Commerce 2011: The Consumer View," 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase. While each channel has its own unique benefit, user profile, and purpose, it's critical that your shoppers have a consistent core experience wherever they're looking for information or making a purchase. Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.

    USE YOUR CUSTOMER'S CONTEXT TO SURFACE RELEVANT CONTENT
    Your Web site is likely the hub of your holiday activity. According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals. Use everything you know about your customers, from past purchase data to browsing history, to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online or influences an offline purchase.

    TAKE ADVANTAGE OF MOBILE BEHAVIOR
    Having a mobile program is no longer a choice. Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere. In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons, and use apps to assist in their holiday gift purchases. At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.

    ARM IN-STORE ASSOCIATES WITH TABLETS
    According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing, and 6% have fully deployed tablets within stores. Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs. Automatically trigger upsell and cross-sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablets.

    WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER
    Shoppers who grapple with options are looking for recommendations; they'd rather get advice from friends, and they're more likely to spend more while doing so. In fact, according to an infographic by Mr. Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family. This holiday season, be sure you are leveraging your social channels, from Facebook to Pinterest, to drive consistent promotions and help your brand become part of the conversation.

    So, are you ready for the holidays this year?

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news and updates, and discuss use cases, implementation success stories (and even failures) around industry standards in this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's 'gotofail' bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others. Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet.

    This did not go unnoticed in the standards community, and in particular the IETF. Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of "Internet Hardening" was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response of 'Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions, each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

    Since the meeting, the IETF has followed through with the recent publication of a new "best practices" document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical issues. Quoting from the RFC:

        Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

        The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.
    The draft goes on to further qualify what it means by "attack", clarifying that:

        The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: implementations of security and protocol specifications have to be of high quality and backed by thorough testing. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.
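    A small practical aside, not from the original column: one quick way to gauge the "lax use of TLS" mentioned above is to check what a server actually negotiates using the standard openssl s_client tool. The hostname below is a placeholder, and the grep pattern simply filters the session summary lines.

          # Show the negotiated protocol, cipher, and certificate verification result
          openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
            | grep -E "Protocol|Cipher|Verify return code"

    An old protocol version or a non-zero verify return code is a reasonable hint that the server-side TLS configuration deserves attention.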

    Read the article
