Search Results

Search found 20265 results on 811 pages for 'oracle bi 11g scorecards dashboards strategy execution'.


  • The Path to Best-In-Class Service Business Performance

    - by Charles Knapp
    What would it matter to offer your customers best-in-class service and support experiences? According to a new study, best-in-class companies enjoy margins that are nearly double the average, retain almost all of their customers each year, deliver annual revenue growth that is six times greater than average, and realize cost decreases rather than increases. What does it take to become best in class? Some of the keys are: engage customers effectively and consistently across all channels; focus on mobility to improve reactive service performance; continue to transition from primarily reactive to proactive and predictive service performance; build the support structure for new services and service contracts; and construct an engaged service delivery team. Join the Aberdeen Group, Oracle, Infosys, and Hyundai Capital as we highlight the key stages in the service transformation journey and reveal how best-in-class organizations are equipping themselves to thrive in this new era of service. Please join us for "Service Excellence and the Path to Business Transformation" -- this Thursday, October 25, at 8:00 AM PDT | 11:00 AM EDT | 3:00 PM GMT | 4:00 PM BST.

    Read the article

  • Good Bye FY'14

    - by rajeshr
    As we welcome Fiscal Year 2015 at Oracle and look forward to exciting times ahead, let me take a moment to reflect on the last few laps of FY'14. There's no better way to do that than by putting up pictures that speak a thousand words. A huge batch attended back-to-back programs on Solaris 11 Network Administration, Solaris ZFS Administration, and Solaris Zone Administration (19 - 30 May 2014) at Hyderabad. Transition to Solaris 11 session (5 - 9 May 2014) at Hyderabad. Exadata Install & Maintenance session (28 April - 2 May 2014) at Singapore.

    Read the article

  • Last fortnight...

    - by rajeshr
    In the last fortnight, I had an opportunity to meet up with some very energetic folks who actively participated in a couple of OU programs on Solaris 11 and MySQL. I thank them for their participation and hope all of 'em had a good learning experience at the Oracle University programs. As always, I'm publishing below a moment from each of the aforesaid programs. MySQL DBA session in Bangalore. It would be unfair not to express my heartfelt thanks to each of 'em for the serious teach-back sessions throughout the training program, and I wish to do so by publishing moments from each one's teach-back assignment. Below is a class photograph from the Solaris 11 Administration session in Bangalore.

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates. Each certification exam goes through a rigorous review process called a beta period. Here are a few advantages of taking a beta exam: certification exams taken during the beta period count towards your Company Specialization, and most new Certified Specialist exams have no training requirement. Beta exam vouchers are available in limited quantities, so request a voucher today by contacting the Partner Enablement Team, and act fast to reserve your test from the list below. For more information click here.

    Read the article

  • eSTEP TechCast - November 2013 Material available

    - by mseika
    Dear Partners, We would like to extend our sincere thanks to those of you who attended our TechCast on "The Operational Management benefits of Engineered Systems". The materials (presentation, replay) from the TechCast are now available to all of you via our eSTEP portal. You will need to provide your email address and the PIN below to access the downloads. The link to the portal is shown below. URL: http://launch.oracle.com/ PIN: eSTEP_2011 The material can be found under the eSTEP TechCast tab. Feel free to also explore the other delivered TechCasts and more useful information under the Download and Links tab. Any feedback is appreciated to help us improve the service and information we deliver. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Why Deliver Customer Service in the Cloud?

    - by Charles Knapp
    In volatile, competitive markets, delivering exceptional service across channels is essential. But delivering world-class service on tight budgets, and delivering improvements quickly, is a tough challenge. That's why so many of the world's most successful organizations choose to deliver customer service in the cloud. Example: Michele Watson, VP of Global Customer Care at Match.com, says Oracle's service in the cloud "helps our customers receive the support they need in real time, our contact center agents be more productive and helpful, and our executive and product development teams receive detailed feedback to continue to improve our customers' experience." Learn more here about why you should consider delivering customer service in the cloud.

    Read the article

  • 'outlier': I/O outliers

    - by katsumii
    An 'outlier' is an observation that lies an abnormal distance from the other values in a sample (Wikipedia). The term appears in several places in the Oracle documentation; a search for outlier site:docs.oracle.com - Google Search returns, among others: Outlier Update Percent (MRP and Supply Chain Planning Help), Oracle Demantra Implementation Guide, OraSVMClassificationSettings (Oracle Data Mining Java API), and Defining a Forecast Set (MRP and Supply Chain Planning Help). In the I/O context, 'Exadata' and 'outlier' come together in Guy Harrison - Yet Another Database Blog - Exadata Smart Flash Logging–Outliers, which reports that the flash log feature was effective in eliminating or at least reducing very extreme outlying redo log sync times. Solaris 11.1 adds I/O outlier observability as well: per Oracle Announces Availability of Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1, "Oracle Solaris 11.1 exposes Oracle Solaris DTrace I/O interfaces that allow an Oracle Database administrator to identify I/O outliers and subsequently isolate network or storage bottlenecks."

    Read the article

  • PeopleSoft HCM?????????????????????????????

    - by user775380
    ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? ????????????????????  - ?????????  - ?????????????????????  - ????(???????????????/??/????????)  - ???????????????? ???????????????????????????ERP???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? ??????????????????Hyperion???????????????????????????????????????????????CSE??????PeopleSoft HCM?????????Essbase????????????????????????????????????????????????????PeopleSoft?HCM????????????????????????(Essbase????/???????????????????????????)  Essbase?????~Excel??????·?? Essbase?Excel?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Essbase?????????????????????????????????????????????PeopleSoft HCM???????????????????????????????????????????PeopleSoft????????????PeopleTools 8.52?Cube Builder?????????????? ???????????????????????????????????? ????: ????????????(12/5): ???????ERP???PeopleSoft Enterprise??????????? CSE?????: ???????????? (?????????????????????????????????) ITPro????: ????????????CSE?PeopleSoft???????? Oracle Essbase????: ????????????????

    Read the article

  • ????????2013?1??:11GR2 ?????????(???)

    - by Allen Gao
    ????1?????????????(???)?????????????????????????11gR2??????????,??????????????????????,??????????????????????     ????????????????????,???????????????,?????????????????,?????????????,??????????????,????????????????????????????     ??,???????????????,??????,???????(??? Oracle Support Lifecycle Advisors????),?????????????????????????? WebEx??????(???)?????:2013?1?15?15:00(????)  ????: 593 957 262 ????????  1. ??????:https://oracleaw.webex.com/oracleaw/onstage/g.php?d=593957262&t=a2. ?????????????????,???????????,?????????InterCall????????Webex???????,???????????,??????:    - ????ID: 71585555   - ????????: 1080 044 111 82   - ?????????: 1080 074 413 29   - ????: 8009 661 55     - ????: 0080 104 4259   - ????????????????MOS?? 1148600.1 ???????:????????????,??????????(31151003)??????(First Name and Last Name)

    Read the article

  • ????!DBA & Developer Day ??????????????????????????????!

    - by OTN-J Master
    ???(11?20?)??????????????????????????????Oracle DBA & Developer Day????????????????????????????????????????????????? ???????????????????????????????????·?????????????????????????????????????????????????????????????????? ???????????????”?”????????????????????????????????????????????????????????????????????????? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?????????????????????????????????????????!! OTN???????????????24?????????????????????????????????????????????·?????????????????????????????????????????12???????????????????OTN???????????????????????????? ???????·?????(MyProfile??)??????????????????????OTN???????????????????????·????????????????????????????????????·????????????????????????????? (?????????) >> OTN????????????????????

    Read the article

  • ????????????

    - by ???02
    ????????????Oracle Data Masking Pack???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? ????????????????????????????????????????????????????????????Oracle Data Masking????????????????????????????????????????????????????????????????? ???????????????????????????????????????????????????????????????????????????????????1. ??????????????Oracle Enterpirse Manager?GUI????????????????????????????????????????????????????????????????????2. ???????????????????????? "???"????????????????????"?????"??????????????????"?????"????????????????????????????????????????????"?????"??????????????????????"?????"?????????????????3. Oracle Database??????? / ?? / ??????????????????????????????????????????4. ????????????Oracle Data Masking Pack??Oracle Enterprise Manager??????????????????1???????????????·????·??????????·???????????????????? ?????? Oracle Direct

    Read the article

  • Recovering the previous PL/SQL source after create or replace

    - by Liu Maclean
    ????T.Askmaclean.com?????10gR2??????procedure,?????????create or replace ??????????????????,????Oracle???????????????????procedure? ??Maclean ??2?10gR2???????????PL/SQL?????: ??1: ??Flashback Query ????,?????????????flashback database,??????????create or replace???SQL??source$??????????undo data,????????????: SQL> select * from V$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 1 from dual';   4  end;   5  / Procedure created. SQL> select * from dba_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             4 end; SQL> select current_scn from v$database; CURRENT_SCN -----------     2660057 create or replace procedure maclean_proc as begin -- I am new procedure execute immediate 'select 2 from dual'; end; / Procedure created. SQL> select current_scn from v$database; CURRENT_SCN -----------     2660113 SQL> select * from dba_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 -- I am new procedure SYS        MACLEAN_PROC                   PROCEDURE             4 execute immediate 'select 2 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             5 end; SQL> create table old_source as select * from dba_source as of scn 2660057 where name='MACLEAN_PROC'; Table created. SQL> select * from old_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             4 end; ?????????scn??flashback query????,????????as of timestamp??????????,????PL/SQL????????????????undo??????????,????????????replace/drop ??????PL/SQL??? ??2 ??logminer??replace/drop PL/SQL?????SQL???DELETE??,??logminer?UNDO SQL???PL/SQL?????? ????????????????archivelog????,??????????????? minimal supplemental logging,??????????Unsupported SQLREDO???: create or replace?? ?? 
procedure???????SQL??????, ??????procedure????????????????, source$??????????????: SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; Database altered. SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 1 from dual';   4  end;   5  / Procedure created. SQL> SQL> oradebug setmypid; Statement processed. SQL> SQL> oradebug event 10046 trace name context forever,level 12; Statement processed. SQL> SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 2 from dual';   4  end;   5  / Procedure created. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_4305.trc [oracle@vrh8 ~]$ egrep  "update|insert|delete|merge"  /s01/admin/G10R25/udump/g10r25_ora_4305.trc delete from procedureinfo$ where obj#=:1 delete from argument$ where obj#=:1 delete from procedurec$ where obj#=:1 delete from procedureplsql$ where obj#=:1 delete from procedurejava$ where obj#=:1 delete from vtable$ where obj#=:1 insert into procedureinfo$(obj#,procedure#,overload#,procedurename,properties,itypeobj#) values (:1,:2,:3,:4,:5,:6) insert into argument$( obj#,procedure$,procedure#,overload#,position#,sequence#,level#,argument,type#,default#,in_out,length,precision#,scale,radix,charsetid,charsetform,properties,type_owner,type_name,type_subname,type_linkname,pls_type) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23) insert into procedureplsql$(obj#,procedure#,entrypoint#) values (:1,:2,:3) update procedure$ set audit$=:2,options=:3 where obj#=:1 delete from source$ where obj#=:1 insert into source$(obj#,line,source) values (:1,:2,:3) delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_char$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub2$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_sb4$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where 
obj#=:4 and part=:5 and piece#=:6 and version=:7 delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from error$ where obj#=:1 delete from settings$ where obj# = :1 insert into settings$(obj#, param, value) values (:1, :2, :3) delete from warning_settings$ where obj# = :1 insert into warning_settings$(obj#, warning_num, global_mod, property) values (:1, :2, :3, :4) delete from dependency$ where d_obj#=:1 delete from access$ where d_obj#=:1 insert into dependency$(d_obj#,d_timestamp,order#,p_obj#,p_timestamp, property, d_attrs)values (:1,:2,:3,:4,:5,:6, :7) insert into access$(d_obj#,order#,columns,types) values (:1,:2,:3,:4) update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null) ?drop procedure??????source$???PL/SQL?????: SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10046 trace name context forever,level 12; Statement processed. SQL> drop procedure maclean_proc; Procedure dropped. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_4331.trc delete from context$ where obj#=:1 delete from dir$ where obj#=:1 delete from type_misc$ where obj#=:1 delete from library$ where obj#=:1 delete from procedure$ where obj#=:1 delete from javaobj$ where obj#=:1 delete from operator$ where obj#=:1 delete from opbinding$ where obj#=:1 delete from opancillary$ where obj#=:1 delete from oparg$ where obj# = :1 delete from com$ where obj#=:1 delete from source$ where obj#=:1 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from error$ where obj#=:1 delete from settings$ where obj# = :1 delete from procedureinfo$ where obj#=:1 delete from argument$ where obj#=:1 delete from procedurec$ where obj#=:1 delete from procedureplsql$ where obj#=:1 delete from procedurejava$ where obj#=:1 delete from vtable$ where obj#=:1 delete from dependency$ where d_obj#=:1 delete from access$ where d_obj#=:1 delete from objauth$ where obj#=:1 update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null) ??????????source$???redo: SQL> alter system switch logfile; System altered. 
SQL> select sequence#,name from v$archived_log where sequence#=(select max(sequence#) from v$archived_log);  SEQUENCE# ---------- NAME --------------------------------------------------------------------------------        242 /s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc SQL> exec dbms_logmnr.add_logfile ('/s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc',options => dbms_logmnr.new); PL/SQL procedure successfully completed. SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog); PL/SQL procedure successfully completed. SQL> select sql_redo,sql_undo from v$logmnr_contents where seg_name = 'SOURCE$' and operation='DELETE'; delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAN'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAO'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 1 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAP'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 1 from dual''; '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAQ'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;'); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAJ'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAK'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 2 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAL'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 2 from dual''; '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAM'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;'); ???? logminer???UNDO SQL???????source$????,?DELETE????????????,????SOURCE????????????PL/SQL???DDL???

    Read the article

  • ?IT????????????????????

    - by ????
    ????2011?11?9??IT?????????????ITGI Japan ???????2001???????IT????????????????? ?????? ?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? IT????????????! ???????? IT????????????????????IT?????????????????????????????? ??IT???????????????????? ???????????? Oracle????????????????????????   ??????? ???????????????IFRS????????????????????????????????????????? ??????????????????????????IT??????????????????????????????   ??????????????IT??????????????????IT????????????????????????????????????????????????????????(GRC)?????????????????? ???????????????????LIXCL???NTT???????????????????????????????????? ?????? ??? ????????????? ??????????????????????????????????????    

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute-force query to be slow and the index-based one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you. Brute-force query: Index-based exception query: My question is, based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.
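
    For illustration only, here is a minimal T-SQL sketch of the "index-based exception" pattern described above; the table and column names (CandidateMatches, Measurements, LowBound, HighBound) are hypothetical, since the poster's schema and execution plans are not shown:

    -- Start from all possibly likely matches, then EXCEPT away the rows
    -- whose measured value falls outside the allowed range.
    SELECT c.MatchId
    FROM   dbo.CandidateMatches AS c
    EXCEPT
    SELECT c.MatchId
    FROM   dbo.CandidateMatches AS c
    JOIN   dbo.Measurements     AS m ON m.MatchId = c.MatchId
    WHERE  m.Value < c.LowBound      -- low mismatches (served by one dedicated index)
       OR  m.Value > c.HighBound;    -- high mismatches (served by the other)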

    Read the article

  • Entity Framework 4.3.1 Code based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
     Code-based migrations are a new feature in the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how to use them to keep track of the changes made to our database while creating a new application using the code-first approach. If you don't have a clear idea of how code first works, we highly recommend that you review this subject before going further with this tutorial. Creating our Model and Database with Code First. From VS 2010: 1. Create a new console application. 2. Add the latest Entity Framework official package using the Package Manager Console (Tools menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console we have to type Install-Package EntityFramework This will add the latest version of this library. We will also need to make some changes to your config file. A <configSections> element was added, which contains the version of EntityFramework you have. An <entityFramework> section was also added where you can set up some initialization. This section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below), let's leave this section empty as shown below. 3. Create a new Model with a simple entity. 4. Enable Migrations to generate our Configuration class. In the Package Manager Console we have to type Enable-Migrations; This will make some changes in our application. It will create a new folder called Migrations, where all the migrations representing the changes we make to our model will be stored. It will also create a Configuration class that we'll use to initialize our SQL generator and some other values, such as whether we want to enable Automatic Migrations. You can see that it already has the name of our DbContext. You can also create your Configuration class manually. 5. Specify our Model Provider. We need to specify in our Configuration class that we'll be using MySQLClient, since this is not part of the generated code. Also, please make sure you have added the MySql.Data and MySql.Data.Entity references to your project. using MySql.Data.Entity;   // Add the MySQL.Data.Entity namespace public Configuration() { this.AutomaticMigrationsEnabled = false; SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL Generator } 6. Add our Data Provider and set up our connection string. <connectionStrings> <add name="PersonalContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" /> </connectionStrings> <system.data> <DbProviderFactories> <remove invariant="MySql.Data.MySqlClient" /> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" /> </DbProviderFactories> </system.data> * The recommended version of Connector/Net to use is 6.6.2 or earlier. At this point we can create our database and then start working with Migrations. So let's do some data access so our database gets created. You can run your application and you'll get your database Personal, as specified in our config file. Add our first migration. Migrations are a great resource: we get a record of all the changes made, and the MySQL statements required to apply those changes to the database are generated for us.
Let's add a new property to our Person class: public string Email { get; set; } If you try to run your application, it will throw an exception saying The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269). So, as suggested, let's add our first migration for this change. In the Package Manager Console let's type Add-Migration AddEmailColumn Now we have the corresponding class, which generates the necessary operations to update our database. namespace MigrationsFromScratch.Migrations { using System.Data.Entity.Migrations; public partial class AddEmailColumn : DbMigration { public override void Up(){ AddColumn("People", "Email", c => c.String(unicode: false)); } public override void Down() { DropColumn("People", "Email"); } } } In the Package Manager Console let's type Update-Database Now you can check your database to see that all changes were successfully applied. Now let's add a second change and generate our second migration: public class Person   {       [Key]       public int PersonId { get; set;}       public string Name { get; set; }       public string Address {get; set;}       public string Email { get; set; }       public List<Skill> Skills { get; set; }   }   public class Skill   {     [Key]     public int SkillId { get; set; }     public string Description { get; set; }   }   public class PersonelContext : DbContext   {     public DbSet<Person> Persons { get; set; }     public DbSet<Skill> Skills { get; set; }   } If you would like to customize any part of this code, you can do that at this step. You can see there is the Up method, which updates your database, and the Down method, which reverts the changes. If you customize any code, make sure to customize both methods. Now let's apply this change: Update-Database -verbose I added the verbose flag so you can see all the generated SQL statements to be run. Downgrading changes. So far we have always upgraded to the latest migration, but there may be times when you want to downgrade to a specific migration. Let's say we want to return to the state we had before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return to. Also, you can use the -verbose flag. If you'd like to go back to the initial state you can do: Update-Database -TargetMigration:$InitialDatabase  or, equivalently: Update-Database -TargetMigration:0  By default, Migrations does not allow a migration that would result in data loss. One case where you can get this message is, for example, a DropColumn operation. You can override this behavior by setting AutomaticMigrationDataLossAllowed to true in the Configuration class. You can also set your database initializer if you want these migrations to be applied automatically, so you don't have to go through creating a migration and then updating the database for every change. Let's see how. Database Initialization by Code. We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One of the strategies that I found very useful at the development stage (I mean not for production) is MigrateDatabaseToLatestVersion. This strategy will run all the necessary migrations each time there is a change in our model that needs to be replicated to the database; this also implies that we have to enable the AutomaticMigrationsEnabled flag in our Configuration class.
public Configuration()         {             AutomaticMigrationsEnabled = true;             AutomaticMigrationDataLossAllowed = true;             SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL Generator          } In the new <entityFramework> section of your config file we can set this on a per-context basis. The syntax is as follows: <contexts> <context type="Custom DbContext name, Assembly name"> <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[ Custom DbContext name, Assembly name],  [Configuration class name, Assembly name]],  EntityFramework" /> </context> </contexts> In our example this would use our PersonelContext and its Configuration class. The syntax is kind of odd but very convenient. This way all changes will always be applied whenever we do any data access in our application. There are a lot of new things to explore in EF 4.3.1 and Migrations, so we'll continue writing more posts about it. Please let us know if you have any questions or comments, and please check our forums here, where we keep answering questions for the community. Hope you found this information useful. Happy MySQL/.Net coding!
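
    As a quick way to verify what a migration actually did, you can also inspect the MySQL schema directly. This is only a sketch: the database name Personal and the table People come from the tutorial above, and the exact column type generated for Email may vary by Connector/Net version.

    -- Confirm the column added by the AddEmailColumn migration
    USE Personal;
    SHOW COLUMNS FROM People;

    -- Code First Migrations records every applied migration here
    SELECT MigrationId FROM __MigrationHistory;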

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available. "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard to decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short-term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said. It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. 
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Cannot read value from SYS_CONTEXT

    - by AppleGrew
    I have a PL/SQL procedure which sets a variable in the user session, like the following:- Dbms_Session.Set_Context( NAMESPACE =>'MY_CTX', ATTRIBUTE => 'FLAG_NAME', Value => 'some value'); Just after this (in the same procedure), I try to read the value of this flag using:- SYS_CONTEXT('MY_CTX', 'FLAG_NAME'); The above returns nothing. How did the DB lose this value? The weirder part is that if I invoke this proc directly from Oracle SQL Developer then it works. It doesn't work when I invoke this proc from my web application via a callable statement. --EDIT-- Added an example of how we are invoking the proc from our Java code. String statement = "Begin package_name.proc_name( flag_val => :1); END;"; OracleCallableStatement st = <some object by some framework> .createCallableStatement(statement); st.setString(1, 'flag value'); st.execute(); st.close();
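
    For reference, the standard application-context pattern the question describes looks roughly like the sketch below. The package name my_ctx_pkg is illustrative; note that DBMS_SESSION.SET_CONTEXT may only be called from the package named in the context's USING clause, and SYS_CONTEXT reads the value back only within the same database session.

    -- Minimal sketch of the application-context pattern (names are illustrative)
    CREATE OR REPLACE CONTEXT my_ctx USING my_ctx_pkg;

    CREATE OR REPLACE PACKAGE my_ctx_pkg AS
      PROCEDURE set_flag(p_value IN VARCHAR2);
    END my_ctx_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY my_ctx_pkg AS
      PROCEDURE set_flag(p_value IN VARCHAR2) IS
      BEGIN
        DBMS_SESSION.SET_CONTEXT(namespace => 'MY_CTX',
                                 attribute => 'FLAG_NAME',
                                 value     => p_value);
      END set_flag;
    END my_ctx_pkg;
    /
    -- Later, in the same session:
    SELECT SYS_CONTEXT('MY_CTX', 'FLAG_NAME') AS flag_value FROM dual;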

    Read the article

  • Optimizing MySQL, Improving Performance of Database Servers

    - by Antoinette O'Sullivan
    Optimization involves improving the performance of a database server and of the queries that run against it. Optimization reduces query execution time, and optimized queries benefit everyone who uses the server. When the server runs more smoothly and processes more queries with fewer resources, it performs better as a whole. To learn more about how a MySQL developer can make a difference with optimization, take the MySQL for Developers training course. This 5-day instructor-led course is available as: Live-Virtual Event: Attend a live class from your own desk - no travel required. Choose from a selection of events on the schedule to suit different timezones. In-Class Event: Travel to an education center to attend an event. Below is a selection of the events on the schedule.
    Location | Date | Delivery Language
    Vienna, Austria | 17 November 2014 | German
    Brussels, Belgium | 8 December 2014 | English
    Sao Paulo, Brazil | 14 July 2014 | Brazilian Portuguese
    London, England | 29 September 2014 | English
    Belfast, Ireland | 6 October 2014 | English
    Dublin, Ireland | 27 October 2014 | English
    Milan, Italy | 10 November 2014 | Italian
    Rome, Italy | 21 July 2014 | Italian
    Nairobi, Kenya | 14 July 2014 | English
    Petaling Jaya, Malaysia | 25 August 2014 | English
    Utrecht, Netherlands | 21 July 2014 | English
    Makati City, Philippines | 29 September 2014 | English
    Warsaw, Poland | 25 August 2014 | Polish
    Lisbon, Portugal | 13 October 2014 | European Portuguese
    Porto, Portugal | 13 October 2014 | European Portuguese
    Barcelona, Spain | 7 July 2014 | Spanish
    Madrid, Spain | 3 November 2014 | Spanish
    Valencia, Spain | 24 November 2014 | Spanish
    Basel, Switzerland | 4 August 2014 | German
    Bern, Switzerland | 4 August 2014 | German
    Zurich, Switzerland | 4 August 2014 | German
    The MySQL for Developers course helps prepare you for the MySQL 5.6 Developer OCP certification exam. To register for an event, request an additional event, or learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql.
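
    As a small illustration of what query optimization work looks like in practice, a developer typically starts by examining the execution plan; the table and index names below are hypothetical:

    -- Inspect how MySQL executes a slow query (hypothetical schema)
    EXPLAIN
    SELECT o.id, o.total
    FROM   orders AS o
    WHERE  o.customer_id = 42
    ORDER  BY o.created_at DESC
    LIMIT  10;

    -- If the plan shows a full table scan, a composite index often helps:
    ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);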

    Read the article

  • Creating a thematic map

    - by jsharma
    This post describes how to create a simple thematic map, just a state population layer, with no underlying map tile layer. The map shows states color-coded by total population. The map is interactive with info-windows and can be panned and zoomed. The sample code demonstrates the following: Displaying an interactive vector layer with no background map tile layer (i.e. purpose and use of the Universe object) Using a dynamic (i.e. defined via the javascript client API) color bucket style Dynamically changing a layer's rendering style Specifying which attribute value to use in determining the bucket, and hence style, for a feature (FoI) The result is shown in the screenshot below. The states layer was defined, and stored in the user_sdo_themes view of the mvdemo schema, using MapBuilder. The underlying table is defined as SQL> desc states_32775  Name                                      Null?    Type ----------------------------------------- -------- ----------------------------  STATE                                              VARCHAR2(26)  STATE_ABRV                                         VARCHAR2(2) FIPSST                                             VARCHAR2(2) TOTPOP                                             NUMBER PCTSMPLD                                           NUMBER LANDSQMI                                           NUMBER POPPSQMI                                           NUMBER ... MEDHHINC NUMBER AVGHHINC NUMBER GEOM32775 MDSYS.SDO_GEOMETRY We'll use the TOTPOP column value in the advanced (color bucket) style for rendering the states layers. The predefined theme (US_STATES_BI) is defined as follows. SQL> select styling_rules from user_sdo_themes where name='US_STATES_BI'; STYLING_RULES -------------------------------------------------------------------------------- <?xml version="1.0" standalone="yes"?> <styling_rules highlight_style="C.CB_QUAL_8_CLASS_DARK2_1"> <hidden_info> <field column="STATE" name="Name"/> <field column="POPPSQMI" name="POPPSQMI"/> <field column="TOTPOP" name="TOTPOP"/> </hidden_info> <rule column="TOTPOP"> <features style="states_totpop"> </features> <label column="STATE_ABRV" style="T.BLUE_SERIF_10"> 1 </label> </rule> </styling_rules> SQL> The theme definition specifies that the state, poppsqmi, totpop, state_abrv, and geom columns will be queried from the states_32775 table. The state_abrv value will be used to label the state while the totpop value will be used to determine the color-fill from those defined in the states_totpop advanced style. The states_totpop style, which we will not use in our demo, is defined as shown below. SQL> select definition from user_sdo_styles where name='STATES_TOTPOP'; DEFINITION -------------------------------------------------------------------------------- <?xml version="1.0" ?> <AdvancedStyle> <BucketStyle> <Buckets default_style="C.S02_COUNTRY_AREA"> <RangedBucket seq="0" label="10K - 5M" low="10000" high="5000000" style="C.SEQ6_01" /> <RangedBucket seq="1" label="5M - 12M" low="5000001" high="1.2E7" style="C.SEQ6_02" /> <RangedBucket seq="2" label="12M - 20M" low="1.2000001E7" high="2.0E7" style="C.SEQ6_04" /> <RangedBucket seq="3" label="&gt; 20M" low="2.0000001E7" high="5.0E7" style="C.SEQ6_05" /> </Buckets> </BucketStyle> </AdvancedStyle> SQL> The demo defines additional advanced styles via the OM.style object and methods and uses those instead when rendering the states layer.   Now let's look at relevant snippets of code that defines the map extent and zoom levels (i.e. 
the OM.universe),  loads the states predefined vector layer (OM.layer), and sets up the advanced (color bucket) style. Defining the map extent and zoom levels. function initMap() {   //alert("Initialize map view");     // define the map extent and number of zoom levels.   // The Universe object is similar to the map tile layer configuration   // It defines the map extent, number of zoom levels, and spatial reference system   // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined   // The Universe must be defined when there is no underlying map tile layer.   // When there is a map tile layer then that defines the map extent, srid, and zoom levels.      var uni= new OM.universe.Universe(     {         srid : 32775,         bounds : new OM.geometry.Rectangle(                         -3280000, 170000, 2300000, 3200000, 32775),         numberOfZoomLevels: 8     }); The srid specifies the spatial reference system which is Equal-Area Projection (United States). SQL> select cs_name from cs_srs where srid=32775 ; CS_NAME --------------------------------------------------- Equal-Area Projection (United States) The bounds defines the map extent. It is a Rectangle defined using the lower-left and upper-right coordinates and srid. Loading and displaying the states layer This is done in the states() function. The full code is at the end of this post, however here's the snippet which defines the states VectorLayer.     // States is a predefined layer in user_sdo_themes     var  layer2 = new OM.layer.VectorLayer("vLayer2",     {         def:         {             type:OM.layer.VectorLayer.TYPE_PREDEFINED,             dataSource:"mvdemo",             theme:"us_states_bi",             url: baseURL,             loadOnDemand: false         },         boundingTheme:true      }); The first parameter is a layer name, the second is an object literal for a layer config. The config object has two attributes: the first is the layer definition, the second specifies whether the layer is a bounding one (i.e. used to determine the current map zoom and center such that the whole layer is displayed within the map window) or not. The layer config has the following attributes: type - specifies whether is a predefined one, a defined via a SQL query (JDBC), or in a json-format file (DATAPACK) theme - is the predefined theme's name url - is the location of the mapviewer server loadOnDemand - specifies whether to load all the features or just those that lie within the current map window and load additional ones as needed on a pan or zoom The code snippet below dynamically defines an advanced style and then uses it, instead of the 'states_totpop' style, when rendering the states layer. // override predefined rendering style with programmatic one    var theRenderingStyle =      createBucketColorStyle('YlBr5', colorSeries, 'States5', true);   // specify which attribute is used in determining the bucket (i.e. color) to use for the state   // It can be an array because the style could be a chart type (pie/bar)   // which requires multiple attribute columns     // Use the STATE.TOTPOP column (aka attribute) value here    layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); The style itself is defined in the createBucketColorStyle() function. Dynamically defining an advanced style The advanced style used here is a bucket color style, i.e. a color style is associated with each bucket. So first we define the colors and then the buckets.     
numClasses = colorSeries[colorName].classes;    // create Color Styles    for (var i=0; i < numClasses; i++)    {         theStyles[i] = new OM.style.Color(                      {fill: colorSeries[colorName].fill[i],                        stroke:colorSeries[colorName].stroke[i],                       strokeOpacity: useGradient? 0.25 : 1                      });    }; numClasses is the number of buckets. The colorSeries array contains the color fill and stroke definitions and is: var colorSeries = { //multi-hue color scheme #10 YlBl. "YlBl3": {   classes:3,                  fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],                  stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6]   }, "YlBl5": {   classes:5,                  fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],                  stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85]   }, //multi-hue color scheme #11 YlBr.  "YlBr3": {classes:3,                  fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E],                  stroke:[0xE6DEA9, 0xE5B047, 0xC5360D]   }, "YlBr5": {classes:5,                  fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],                  stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04]     }, etc. Next we create the bucket style.    bucketStyleDef = {       numClasses : colorSeries[colorName].classes, //      classification: 'custom',  //since we are supplying all the buckets //      buckets: theBuckets,       classification: 'logarithmic',  // use a logarithmic scale       styles: theStyles,       gradient:  useGradient? 'linear' : 'off' //      gradient:  useGradient? 'radial' : 'off'     };    theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);    return theBucketStyle; A BucketStyle constructor takes a style definition as input. The style definition specifies the number of buckets (numClasses), a classification scheme (which can be equal-ranged, logarithmic scale, or custom), the styles for each bucket, whether to use a gradient effect, and optionally the buckets (required when using a custom classification scheme). The full source for the demo <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Oracle Maps V2 Thematic Map Demo</title> <script src="http://localhost:8080/mapviewer/jslib/v2/oraclemapsv2.js" type="text/javascript"> </script> <script type="text/javascript"> //var $j = jQuery.noConflict(); var baseURL="http://localhost:8080/mapviewer"; // location of mapviewer OM.gv.proxyEnabled =false; // no mvproxy needed OM.gv.setResourcePath(baseURL+"/jslib/v2/images/"); // location of resources for UI elements like nav panel buttons var map = null; // the client mapviewer object var statesLayer = null, stateCountyLayer = null; // The vector layers for states and counties in a state var layerName="States"; // initial map center and zoom var mapCenterLon = -20000; var mapCenterLat = 1750000; var mapZoom = 2; var mpoint = new OM.geometry.Point(mapCenterLon,mapCenterLat,32775); var currentPalette = null, currentStyle=null; // set an onchange listener for the color palette select list // initialize the map // load and display the states layer $(document).ready( function() { $("#demo-htmlselect").change(function() { var theColorScheme = $(this).val(); useSelectedColorScheme(theColorScheme); }); initMap(); states(); } ); /** * color series from ColorBrewer site (http://colorbrewer2.org/). */ var colorSeries = { //multi-hue color scheme #10 YlBl. 
"YlBl3": { classes:3, fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8], stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] }, "YlBl5": { classes:5, fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494], stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] }, //multi-hue color scheme #11 YlBr. "YlBr3": {classes:3, fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E], stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] }, "YlBr5": {classes:5, fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404], stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] }, // single-hue color schemes (blues, greens, greys, oranges, reds, purples) "Purples5": {classes:5, fill:[0xf2f0f7, 0xcbc9e2, 0x9e9ac8, 0x756bb1, 0x54278f], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Blues5": {classes:5, fill:[0xEFF3FF, 0xbdd7e7, 0x68aed6, 0x3182bd, 0x18519C], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greens5": {classes:5, fill:[0xedf8e9, 0xbae4b3, 0x74c476, 0x31a354, 0x116d2c], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greys5": {classes:5, fill:[0xf7f7f7, 0xcccccc, 0x969696, 0x636363, 0x454545], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Oranges5": {classes:5, fill:[0xfeedde, 0xfdb385, 0xfd8d3c, 0xe6550d, 0xa63603], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Reds5": {classes:5, fill:[0xfee5d9, 0xfcae91, 0xfb6a4a, 0xde2d26, 0xa50f15], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] } }; function createBucketColorStyle( colorName, colorSeries, rangeName, useGradient) { var theBucketStyle; var bucketStyleDef; var theStyles = []; var theColors = []; var aBucket, aStyle, aColor, aRange; var numClasses ; numClasses = colorSeries[colorName].classes; // create Color Styles for (var i=0; i < numClasses; i++) { theStyles[i] = new OM.style.Color( {fill: colorSeries[colorName].fill[i], stroke:colorSeries[colorName].stroke[i], strokeOpacity: useGradient? 0.25 : 1 }); }; bucketStyleDef = { numClasses : colorSeries[colorName].classes, // classification: 'custom', //since we are supplying all the buckets // buckets: theBuckets, classification: 'logarithmic', // use a logarithmic scale styles: theStyles, gradient: useGradient? 'linear' : 'off' // gradient: useGradient? 'radial' : 'off' }; theBucketStyle = new OM.style.BucketStyle(bucketStyleDef); return theBucketStyle; } function initMap() { //alert("Initialize map view"); // define the map extent and number of zoom levels. // The Universe object is similar to the map tile layer configuration // It defines the map extent, number of zoom levels, and spatial reference system // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined // The Universe must be defined when there is no underlying map tile layer. // When there is a map tile layer then that defines the map extent, srid, and zoom levels. 
var uni= new OM.universe.Universe( { srid : 32775, bounds : new OM.geometry.Rectangle( -3280000, 170000, 2300000, 3200000, 32775), numberOfZoomLevels: 8 }); map = new OM.Map( document.getElementById('map'), { mapviewerURL: baseURL, universe:uni }) ; var navigationPanelBar = new OM.control.NavigationPanelBar(); map.addMapDecoration(navigationPanelBar); } // end initMap function states() { //alert("Load and display states"); layerName = "States"; if(statesLayer) { // states were already visible but the style may have changed // so set the style to the currently selected one var theData = $('#demo-htmlselect').val(); setStyle(theData); } else { // States is a predefined layer in user_sdo_themes var layer2 = new OM.layer.VectorLayer("vLayer2", { def: { type:OM.layer.VectorLayer.TYPE_PREDEFINED, dataSource:"mvdemo", theme:"us_states_bi", url: baseURL, loadOnDemand: false }, boundingTheme:true }); // add drop shadow effect and hover style var shadowFilter = new OM.visualfilter.DropShadow({opacity:0.5, color:"#000000", offset:6, radius:10}); var hoverStyle = new OM.style.Color( {stroke:"#838383", strokeThickness:2}); layer2.setHoverStyle(hoverStyle); layer2.setHoverVisualFilter(shadowFilter); layer2.enableFeatureHover(true); layer2.enableFeatureSelection(false); layer2.setLabelsVisible(true); // override predefined rendering style with programmatic one var theRenderingStyle = createBucketColorStyle('YlBr5', colorSeries, 'States5', true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state // It can be an array because the style could be a chart type (pie/bar) // which requires multiple attribute columns // Use the STATE.TOTPOP column (aka attribute) value here layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); currentPalette = "YlBr5"; var stLayerIdx = map.addLayer(layer2); //alert('State Layer Idx = ' + stLayerIdx); map.setMapCenter(mpoint); map.setMapZoomLevel(mapZoom) ; // display the map map.init() ; statesLayer=layer2; // add rt-click event listener to show counties for the state layer2.addListener(OM.event.MouseEvent.MOUSE_RIGHT_CLICK,stateRtClick); } // end if } // end states function setStyle(styleName) { // alert("Selected Style = " + styleName); // there may be a counties layer also displayed. // that wll have different bucket ranges so create // one style for states and one for counties var newRenderingStyle = null; if (layerName === "States") { if(/3/.test(styleName)) { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States3', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties3', false); } else { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States5', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties5', false); } statesLayer.setRenderingStyle(newRenderingStyle, ["TOTPOP"]); if (stateCountyLayer) stateCountyLayer.setRenderingStyle(currentStyle, ["TOTPOP"]); } } // end setStyle function stateRtClick(evt){ var foi = evt.feature; //alert('Rt-Click on State: ' + foi.attributes['_label_'] + // ' with pop ' + foi.attributes['TOTPOP']); // display another layer with counties info // layer may change on each rt-click so create and add each time. 
var countyByState = null ; // the _label_ attribute of a feature in this case is the state abbreviation // we will use that to query and get the counties for a state var sqlText = "select totpop,geom32775 from counties_32775_moved where state_abrv="+ "'"+foi.getAttributeValue('_label_')+"'"; // alert(sqlText); if (currentStyle === null) currentStyle = createBucketColorStyle('YlBr5', colorSeries, 'Counties5', false); /* try a simple style instead new OM.style.ColorStyle( { stroke: "#B8F4FF", fill: "#18E5F4", fillOpacity:0 } ); */ // remove existing layer if any if(stateCountyLayer) map.removeLayer(stateCountyLayer); countyByState = new OM.layer.VectorLayer("stCountyLayer", {def:{type:OM.layer.VectorLayer.TYPE_JDBC, dataSource:"mvdemo", sql:sqlText, url:baseURL}}); // url:baseURL}, // renderingStyle:currentStyle}); countyByState.setVisible(true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state countyByState.setRenderingStyle(currentStyle, ["TOTPOP"]); var ctLayerIdx = map.addLayer(countyByState); // alert('County Layer Idx = ' + ctLayerIdx); //map.addLayer(countyByState); stateCountyLayer = countyByState; } // end stateRtClick function useSelectedColorScheme(theColorScheme) { if(map) { // code to update renderStyle goes here //alert('will try to change render style'); setStyle(theColorScheme); } else { // do nothing } } </script> </head> <body bgcolor="#b4c5cc" style="height:100%;font-family:Arial,Helvetica,Verdana"> <h3 align="center">State population thematic map </h3> <div id="demo" style="position:absolute; left:68%; top:44px; width:28%; height:100%"> <HR/> <p/> Choose Color Scheme: <select id="demo-htmlselect"> <option value="YlBl3"> YellowBlue3</option> <option value="YlBr3"> YellowBrown3</option> <option value="YlBl5"> YellowBlue5</option> <option value="YlBr5" selected="selected"> YellowBrown5</option> <option value="Blues5"> Blues</option> <option value="Greens5"> Greens</option> <option value="Greys5"> Greys</option> <option value="Oranges5"> Oranges</option> <option value="Purples5"> Purples</option> <option value="Reds5"> Reds</option> </select> <p/> </div> <div id="map" style="position:absolute; left:10px; top:50px; width:65%; height:75%; background-color:#778f99"></div> <div style="position:absolute;top:85%; left:10px;width:98%" class="noprint"> <HR/> <p> Note: This demo uses HTML5 Canvas and requires IE9+, Firefox 10+, or Chrome. No map will show up in IE8 or earlier. </p> </div> </body> </html>
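As a side note (not part of the original demo), a couple of quick SQL checks against the mvdemo schema can help when adapting this pattern to your own data: the TOTPOP spread tells you whether the logarithmic bucket classification is reasonable, and the aggregate MBR of the layer should roughly match the Universe bounds hard-coded in initMap(). The queries below are only a sketch and assume the standard mvdemo sample schema is loaded.

-- Range of the attribute driving the bucket style
select min(totpop) as min_pop, max(totpop) as max_pop, count(*) as num_states
  from states_32775;

-- Aggregate MBR of the layer; compare it with the Rectangle
-- (-3280000, 170000, 2300000, 3200000, 32775) passed to the Universe in initMap()
select sdo_aggr_mbr(geom32775) as layer_extent
  from states_32775;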

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable

Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus on delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business. In OUM, this principle is extended to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end-user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements. Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well-defined templates for many of its tasks. Use of these templates is optional, as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
• Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
• This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
• Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
• The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
Check it out and let me know your thoughts!

    Read the article

  • No MAU required on a T4

    - by jsavit
    Cryptic background

One of the powerful features of the T-series servers is their hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains that had a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units, and issue the appropriate ldm set-mau N mydomain commands.

The T4 changes things

The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workload, as illustrated at https://blogs.oracle.com/bestperf. While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: the T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains - it "just happens". This is way, way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4, there is no management necessary; you just have HW crypto everywhere, all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all - just leave the default of 0.

Summary

Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how crypto units are allocated - it "just happens".

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared.

1. Borders and Blockbuster bite the dust. I would have never predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September.

2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowes' black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon.

3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of whom will choose iPhone, Blackberry, and Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately, stealing market share from Blackberry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year.

4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have them in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality.

5. Steve Jobs Is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores.
Initial impressions show that even though Microsoft is locating its stores near Apple Stores, they are not converting customers, with shoppers citing a lack of assortment and high prices.

6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments.

7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish ebooks (with or without DRM). Progress is slow but steady.

8. NFC makes inroads, RFID treads water. Near Field Communications start to appear in mobile phones, and retailers beta test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID.

9. Digital Signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR, but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early.

10. JDA flip-flops again. After announcing their embracing of the .Net architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective C. Everything will be ported to the iPhone and be available on the AppStore. After all, there's not much left to try. This was, of course, a joke, but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outgrowth of their i2 acquisition.

Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011, so be on the lookout. Until then, here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

Pace of Innovation

It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and give feedback on new features they want to see.

What’s the Plan for MySQL Cluster 7.3?

Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:
- New NoSQL APIs;
- Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
- Performance and scalability enhancements;
- Taking advantage of features in the latest MySQL 5.x Server GA.

Design Rationale

MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:
- Concurrent NoSQL and SQL access to the database;
- Auto-sharding with simple scale-out across commodity hardware;
- Multi-master replication with failover and recovery both within and across data centers;
- Shared-nothing architecture with no single point of failure;
- Online scaling and schema changes;
- ACID compliance and support for complex queries, across shards.

Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including:
- Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
- In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.
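To make those use-cases concrete, here is a minimal sketch of the kind of constraint this enables, using the usual MySQL FOREIGN KEY syntax. The table names and columns are illustrative only (they are not taken from the release), and the example assumes a running cluster with an SQL node attached.

CREATE TABLE customers (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(80)
) ENGINE=NDB;

CREATE TABLE orders (
  id          INT NOT NULL PRIMARY KEY,
  customer_id INT NOT NULL,
  total       DECIMAL(10,2),
  INDEX (customer_id),
  FOREIGN KEY (customer_id) REFERENCES customers (id)
    ON DELETE CASCADE
) ENGINE=NDB;

-- With ON DELETE CASCADE, removing a parent row also removes its child rows
-- inside the data nodes, with no application-side cleanup code required.
DELETE FROM customers WHERE id = 42;

Because the constraint is enforced in the data nodes, the same behavior should apply whether the delete arrives via SQL or via one of the NoSQL APIs.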
Implementation

The Foreign Key functionality is implemented directly within MySQL Cluster’s data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:
- CASCADE
- RESTRICT
- NO ACTION
- SET NULL

In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that in InnoDB "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster, “NO ACTION” means “deferred check”, i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

Configuration

There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:
1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

Getting Started

Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum, where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

Summary

MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

* Safe Harbor Statement

This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    Web Consoles in OIM 11gR2

In 11gR1 there were three admin web consoles:
· Self Service Console
· Administration Console
· Advanced Administration Console
In OIM 11gR2, the Self Service and Administration consoles are now combined into the Identity Self Service Console (http://host:port/identity). This console has three areas: managing your own profile (My Profile), managing requests such as requesting application instances and approving requests (Requests), and general administration tasks such as creating/managing users, roles, organizations, attestation, etc. (Administration).

In OIM 11gR2 a new System Administration console (http://host:port/sysadmin) has also been added for administrators, which includes some of the Design Console functions in addition to general administration features.

Application Instances

An application instance is the object that is to be provisioned to a user. Application instances are checked out in the catalog, and users can request application instances via the catalog.
· In OIM 11gR2, resources and entitlements are bundled in an Application Instance, which a user can select and request from the catalog.
· An application instance is a combination of an IT Resource and an RO. So, you cannot create another App Instance with the same RO and IT Resource if it already exists for some other App Instance. One of these (RO or IT Resource) must have a different name.
· If you want users of a particular Organization to be able to request an Application Instance through the catalog, then the App Instance must be attached to that particular Organization.
· An application instance can be associated with multiple organizations.
· An application instance can also have entitlements associated with it. Entitlements can include Roles/Groups or Responsibilities.
· Application Instances are published to the catalog by the scheduled task “Catalog Synchronization Job”.
· An Application Instance can have child/parent application instances, where a child application instance inherits all attributes of the parent application instance.

Important point to remember with Application Instances: if you delete an application instance in OIM 11gR2 and try to create a new one with the same name, OIM will not allow it. It throws an error saying an Application Instance already exists with the same Resource Object and IT Resource. This is because some reference to the deleted application instance is still not removed in OIM. So to completely delete your application instance from OIM, you must:
1. Delete the app instance from the sysadmin console.
2. Run the App Instance Post Delete Processing Job in Revoke/Delete mode.
3. Run the Catalog Synchronization job.
Once done, you should be able to create a new App Instance with the previous RO and IT Resource name.

Catalog

The Catalog allows users to request Roles, Application Instances, and Entitlements in an application.
Catalog Items – Roles, Application Instances and Entitlements that can be requested via the catalog are called catalog items.
Detailed Information (attributes of a Catalog item):
· Category – Each catalog item is associated with one and only one category. Catalog Administrators can provide a value for a catalog item.
· Tags – search keywords helpful in searching the Catalog. When users search the Catalog, the search is performed against the tags. To define a tag, go to Catalog -> Search the resource -> select the resource -> update the tag field with a custom search keyword.
Tags are of three types:
a) Auto-generated Tags: The Catalog synchronization process auto-tags the Catalog Item using the Item Type, Item Name and Item Display Name.
b) User-defined Tags: Additional keywords entered by the Catalog Administrator.
c) Arbitrary Tags: If, while defining a metadata attribute, the user has marked that metadata as searchable, then it will also be part of the tags.

Sandbox

The sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations so that they don’t affect other users before they are published and linked to the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes/modifications don’t affect other users until you have finalized the changes and the customization is complete. Once UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and made available to other users. Creating and activating a sandbox is mandatory for customizing the UI. Without an active sandbox, OIM does not allow you to customize any page.
a) Before you perform any such activity in OIM (like creating/modifying forms, custom attributes, creating application instances, adding roles/attributes to the catalog) you must create a sandbox and activate it.
b) You can create multiple sandboxes in OIM, but only one sandbox can be active at any given time.
c) You can export/import a sandbox to move the changes from one environment to the other.

Creating a Sandbox

To create a sandbox, log in to Identity Manager Self Service (/identity) or System Administration (/sysadmin), click the “Sandboxes” link at the top right, and then click Create Sandbox.

Publishing a Sandbox

Before you publish a sandbox, it is recommended to back up MDS. Use /EM (Enterprise Manager) to back up MDS by following the steps below.

Creating an MDS Backup
1. Log in to Oracle Enterprise Manager as the administrator.
2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
3. From the Application Deployment menu at the top, select MDS configuration.
4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

Creating a Password Policy through the Admin Console

In 11gR1 and previous versions, password policies could be created and applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can be created and assigned using the Admin Console as well.

    Read the article

< Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >