Search Results

Search found 41254 results on 1651 pages for 'oracle oracle 12c database'.


  • WebCenter Customer Advisory Board meetings kick off Oracle Open World 2012!

    - by Lance711
    Welcome to OpenWorld! OpenWorld 2012 got underway today with a series of meetings with the members of the WebCenter Customer Advisory Board. Led by the WebCenter Product Management team, these meetings are a great way for the product team and customers to interact directly, discuss real-life business challenges and product details, and review upcoming features and functionality. This year, board members took part in discussions and live demos of product enhancements that will be featured throughout the coming week. Highlights included a variety of new mobile and social solutions, a great new user interface for WebCenter Content, plus new Portal and Sites functionality that makes the experience for the everyday user a lot more pleasant. The day kicked off with Roel Stalman, VP of Product Management, giving a detailed overview of what’s new in WebCenter. Given all the improvements to discuss, this session ran over two hours! Roel showcased the brand new UI for Content, Portal and Sites. He also gave live demos of the new mobile apps for WebCenter Content, Portal and the Oracle Social Network. The attendees then broke into sub-groups to deep-dive with Product Management for the Portal, Sites, and Content product areas on specific functionality and application integrations. If you are here in San Francisco this week for OpenWorld, I definitely recommend stopping by the WebCenter area in the Moscone West Exhibition Hall to see some of this new functionality for yourself. And be sure to check out the WebCenter sessions throughout the week, as those give us a chance to discuss direction and strategy, answer your questions and get your feedback and ideas. For those of you who could not make it to OpenWorld this year, we miss you! You can stay in touch with what is happening via this blog and by following #oow and #webcenter on Twitter. Additionally, we will be rolling out details on upcoming products and release info over the coming months via this blog and web seminars. Stay tuned!

    Read the article

  • Do you want to know more about Oracle Learning Management 12.1?

    - by anders.northeved
    Many of you have upgraded to OLM 12.1 or are in the process of doing so. We have been asked whether it was possible to arrange a couple of webcasts describing the new functions and features in OLM 12.1 – and of course it is. We will do two webcasts: one on the new features and functions in OLM 12.1.1, and another on the new features and functions in OLM 12.1.2 + 12.1.3. Each webcast will last approx. 45 minutes, and afterwards there will be a Q&A session for as long as you have questions! Everybody interested in participating is very welcome to join. Just send an e-mail to [email protected] with the list of participants from your organization and your organization’s current status (which OLM version you are on and whether you have current upgrade plans), and we’ll send you a mail with information on how to join. Webcast on OLM 12.1.1 new features: Monday 28th March, 5pm CET (8.30pm IST; 4pm UK; 11am EST; 8am PST). Webcast on OLM 12.1.2 + OLM 12.1.3 new features: Tuesday 29th March, 5pm CET (8.30pm IST; 4pm UK; 11am EST; 8am PST). We are looking forward to your participation!

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,

    Offloading the backup operation to another host using disk cloning can really improve performance on highly busy databases (24x7, zero downtime and all that). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies.

    Assumptions:

    ASM diskgroup names:
    +data_${db_name} : ASM data diskgroup
    +fra_${db_name}  : ASM FRA diskgroup

    EMC TimeFinder sync group names:
    rac_${DB_NAME}_data_tf : data group
    rac_${DB_NAME}_fra_tf  : FRA group

    There are two scripts: one located on the production box (bck_database.sh) and the other one on the backup server node (bck_database_mirror.sh). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database.sh
        ###
        ###    DESCRIPTION
        ###      Database backup on third mirror
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED          (DD/MM/YY)
        ###    Oracle            28/01/10    - Creation
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log
        exec 4>&1
        tee ${V_FICH_LOG} >&4 |&
        exec 1>&p 2>&1

        ADMIN_DIR=`dirname $0`
        # This script should set the instance vars like Oracle Home, SID, db_name ...
        . ${ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]; then
          echo "Error when setting the environment."
          exit 1
        fi

        echo "${V_DATE} ####################################################"
        echo "Executing database backup: ${DB_NAME}"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm data diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt
        if [ $? -ne 0 ]; then
          echo "Error when syncing asm data diskgroups"
          exit 2
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify
        if [ $? -ne 0 ]; then
          echo "Error when verifying asm data diskgroups"
          exit 3
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm fra diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt
        if [ $? -ne 0 ]; then
          echo "Error when syncing asm fra diskgroups"
          exit 4
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify
        if [ $? -ne 0 ]; then
          echo "Error when verifying asm fra diskgroups"
          exit 5
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "ASM sync successfully completed!"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to BEGIN BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter system archive log current;
          alter database ${DB_NAME} begin backup;
        !
        if [ $? -ne 0 ]; then
          echo "Error when updating database status to BEGIN backup"
          exit 6
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt
        if [ $? -ne 0 ]; then
          echo "Error when splitting asm data disks"
          exit 7
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to END BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter database ${DB_NAME} end backup;
          alter system archive log current;
        !
        if [ $? -ne 0 ]; then
          echo "Error when updating database status to END backup"
          exit 8
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Generating controlfile copies ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        run {
          allocate channel ch1 type DISK;
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl';
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
        }
        !
        if [ $? -ne 0 ]; then
          echo "Error generating controlfile copies"
          exit 9
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Resync RMAN catalog ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        resync catalog;
        !
        if [ $? -ne 0 ]; then
          echo "Error when resyncing RMAN catalog"
          exit 10
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt
        if [ $? -ne 0 ]; then
          echo "Error when splitting asm fra disks"
          exit 11
        fi

        echo "WARNING!: Calling bck_database_mirror.sh on host ${NODE_BCK_SERVER} ..."
        ssh ${NODE_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh
        if [ $? -ne 0 ]; then
          echo "Error when remotely executing the backup"
          exit 12
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Cleaning the archived redo logs already copied to tape ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          resync catalog;
          delete noprompt archivelog all backed up 1 times to device type sbt;
        }
        !
        if [ $? -ne 0 ]; then
          echo "Error when cleaning the archived redo logs"
          exit 13
        fi

        echo "${V_DATE} ####################################################"
        echo "Backup successfully executed!!"
        echo "####################################################################"
        exit 0

    ------------------------------------------------------------------------------
    ------------------------- ** BACKUP SERVER NODE ** --------------------------
    ------------------------------------------------------------------------------

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database_mirror.sh
        ###
        ###    DESCRIPTION
        ###      Backup @ backup server
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED          (DD/MM/YY)
        ###    Oracle            28/01/10    - Creation
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ASM instance ..."
        echo "####################################################################"
        # This script is supposed to start the ASM instance on the backup server
        ${V_ADMIN_DIR}/start_asm.sh
        if [ $? -ne 0 ]; then
          echo "Error when trying to start the ASM instance."
          exit 1
        fi

        # This script is supposed to set the env. variables of the ASM instance
        . ${V_ADMIN_DIR}/setenv_asm.sh
        if [ $? -ne 0 ]; then
          echo "Error when setting the ASM environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "The asm diskgroups/disks detected are the following ..."
        echo "####################################################################"
        sqlplus /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          SET LINES 200
          COL PATH FORMAT A25
          SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB
            FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP
           WHERE DISK.GROUP_NUMBER = DISK_GROUP.GROUP_NUMBER;
        !

        V_ADMIN_DIR=`dirname $0`
        # This script is supposed to set the env. variables of the database instance
        . ${V_ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]; then
          echo "Error when setting the database instance environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ${DB_NAME} in MOUNT mode..."
        echo "####################################################################"
        # This script is supposed to do a startup mount
        ${V_ADMIN_DIR}/start_instance_mount.sh
        if [ $? -ne 0 ]; then
          echo "Error starting ${DB_NAME} in MOUNT mode"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Executing RMAN backup..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          # TDPO Media Library
          allocate channel ch1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
          crosscheck archivelog all;
          backup tag BCK_CONTROLFILE_ST_${DB_NAME} format 'ctl_%d_%s__%p_%t'
            controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
          backup tag BCK_DATAFILE_ST_${DB_NAME} full format 'db_%d_%s_%p_%t' database;
          backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;
          release channel ch1;
        }
        !
        if [ $? -ne 0 ]; then
          echo "Error executing the RMAN backup"
          exit 1
        fi

        # This script is supposed to do a shutdown immediate of the database instance
        ${V_ADMIN_DIR}/stop_instance_immediate.sh
        # This script is supposed to do a shutdown immediate of the ASM instance
        ${ADMIN_DIR}/stop_asm_immediate.sh
        exit 0

    Hope it helps someone! --L

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation. The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company." Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage policy administration systems to better serve customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth. Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about the industry's most highly advanced system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration. Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Latest Patch Set Updates for EPM

    - by Lia Nowodworska - Oracle
    Here is the list of the latest Patch Set Updates for EPM products. The "Patch" ID links will access the patch directly for download from "My Oracle Support" (login required).
    Patch 18490422 for Hyperion Financial Management 11.1.2.2.307
    Patch 18685108 for Oracle Hyperion Profitability and Cost Management release 11.1.2.3.501
    Patch 18400594 for Hyperion Strategic Finance 11.1.2.3.501
    Patch 18505475 for Hyperion Essbase Admin Services Server 11.1.2.3.501
    Patch 18505468 for Hyperion Essbase Admin Services Console MSI 11.1.2.3.501
    Patch 18505499 for Hyperion Essbase RTC 11.1.2.3.501
    Patch 18505489 for Hyperion Essbase Server 11.1.2.3.501
    Patch 18505494 for Hyperion Essbase Client 11.1.2.3.501
    Patch 18505483 for Hyperion Essbase Client MSI 11.1.2.3.501
    Patch 18505515 for Hyperion Analytic Provider Services 11.1.2.3.501
    Patch 18505506 for Hyperion Essbase Studio Server 11.1.2.3.501
    Patch 18505503 for Hyperion Essbase Studio Console MSI 11.1.2.3.501
    For the latest Enterprise Performance Management Patch Set Updates visit: Oracle Hyperion EPM Products [Doc ID 1400559.1]

    Read the article

  • JOB OF THE WEEK

    - by jessica.ebbelaar(at)oracle.com
    Help Desk Support Specialist - Budapest (Hungary). Do you have French and English language skills, and are you living in Hungary? Then this could be the role for you to start your career with at Oracle. We now have an opening for a Help Desk Support Specialist in our Budapest office. In this role you will respond to requests for technical assistance by phone, email and/or using our help desk management system. We are looking for candidates with a passion for customer service. Planning and organizing, problem solving, and time management are also important competencies for this role. If you have already had some exposure to bio-pharmaceutical or clinical companies, that is a big plus. It is a great opportunity not only for graduates, but for everyone who wants to start their career at Oracle, and a unique chance to work in a multinational team together with colleagues from all over the world! If you are interested in this position, read more here! For all of our other vacancies and internships, please visit https://campus.oracle.com.

    Read the article

  • Managed Cloud Services Wins Another Prestigious Industry Award

    - by Dori DiMassimo-Oracle
    Over the last 90 days, Oracle Managed Cloud Services has been the proud recipient of TWO prestigious industry awards for service excellence and customer value leadership. The most recent is last month's 2014 Frost & Sullivan Best Practice Award - North America Managed Cloud Customer Value Leadership Award, which rated Oracle Managed Cloud Services as the clear leader versus other providers; Managed Cloud received an "exceptional" rating in 9 of 10 evaluation categories. The research report is an excellent look at our industry and what is valued by cloud customers looking for a managed solution. In April, Managed Cloud was a repeat winner of the Outsourcing Excellence Award - 2014 Outsourcing Excellence Award - Best ITO Infrastructure (Sony Computer Entertainment America). Last year we won the award for Best Cloud: 2013 Outsourcing Excellence Award - Best Cloud (Take-Two Interactive). These awards are a great testament to the transformation of Managed Cloud Services into a true cloud-based business and a strategic and relevant part of the Oracle Cloud Solutions portfolio. Frost & Sullivan, in particular, recognizes our vision and our capability to successfully manage business transactions in the cloud.

    Read the article

  • Storing a PNG image in a database and retrieving it in Android 1.5

    - by hany
    Hi, I am new to Android and I have a problem: this is my code, but it does not work. The problem is in the ViewBinder. Please correct it.

        // This is my activity
        package com.android.Fruits2;

        import android.app.ListActivity;
        import android.database.Cursor;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.os.Bundle;
        import android.widget.SimpleCursorAdapter;

        public class Fruits2 extends ListActivity {
            private DBhelper mDB;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // setContentView(R.layout.main);
                mDB = new DBhelper(this);
                mDB.Reset();

                Bitmap img = BitmapFactory.decodeResource(getResources(), R.drawable.icon);
                mDB.createPersonEntry(new PersonData(img, "Harsha", 24, "mca"));

                String[] columns = {DBhelper.KEY_ID, DBhelper.KEY_IMG, DBhelper.KEY_NAME,
                        DBhelper.KEY_AGE, DBhelper.KEY_STUDY};
                String table = DBhelper.PERSON_TABLE;
                Cursor c = mDB.getHandle().query(table, columns, null, null, null, null, null);
                startManagingCursor(c);

                SimpleCursorAdapter adapter = new SimpleCursorAdapter(this, R.layout.data, c,
                        new String[] {DBhelper.KEY_IMG, DBhelper.KEY_NAME, DBhelper.KEY_AGE, DBhelper.KEY_STUDY},
                        new int[] {R.id.img, R.id.name, R.id.age, R.id.study});
                adapter.setViewBinder(new MyViewBinder());
                setListAdapter(adapter);
            }
        }

        // My ViewBinder
        package com.android.Fruits2;

        import android.database.Cursor;
        import android.graphics.BitmapFactory;
        import android.view.View;
        import android.widget.ImageView;
        import android.widget.SimpleCursorAdapter;

        public class MyViewBinder implements SimpleCursorAdapter.ViewBinder {
            public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
                if (view instanceof ImageView) {
                    ImageView iv = (ImageView) view;
                    byte[] img = cursor.getBlob(columnIndex);
                    iv.setImageBitmap(BitmapFactory.decodeByteArray(img, 0, img.length));
                    return true;
                }
                return false;
            }
        }

        // The data class
        package com.android.Fruits2;

        import android.graphics.Bitmap;

        public class PersonData {
            private Bitmap bmp;
            private String name;
            private int age;
            private String study;

            public PersonData(Bitmap b, String n, int k, String v) {
                bmp = b;
                name = n;
                age = k;
                study = v;
            }

            public Bitmap getBitmap() { return bmp; }
            public String getName() { return name; }
            public int getAge() { return age; }
            public String getStudy() { return study; }
        }

        // The database helper
        package com.android.Fruits2;

        import java.io.ByteArrayOutputStream;
        import android.content.ContentValues;
        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;
        import android.graphics.Bitmap;
        import android.provider.BaseColumns;

        public class DBhelper {
            public static final String KEY_ID = BaseColumns._ID;
            public static final String KEY_NAME = "name";
            public static final String KEY_AGE = "age";
            public static final String KEY_STUDY = "study";
            public static final String KEY_IMG = "image";

            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;

            private static final String DATABASE_NAME = "PersonalDB";
            private static final int DATABASE_VERSION = 1;
            public static final String PERSON_TABLE = "Person";

            private static final String CREATE_PERSON_TABLE =
                "create table " + PERSON_TABLE + " ("
                + KEY_ID + " integer primary key autoincrement, "
                + KEY_IMG + " blob not null, "
                + KEY_NAME + " text not null, "
                + KEY_AGE + " integer not null, "
                + KEY_STUDY + " text not null);";

            private final Context mCtx;
            private boolean opened = false;

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(CREATE_PERSON_TABLE);
                }

                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    db.execSQL("DROP TABLE IF EXISTS " + PERSON_TABLE);
                    onCreate(db);
                }
            }

            public void Reset() {
                openDB();
                mDbHelper.onUpgrade(this.mDb, 1, 1);
                closeDB();
            }

            public DBhelper(Context ctx) {
                mCtx = ctx;
                mDbHelper = new DatabaseHelper(mCtx);
            }

            private SQLiteDatabase openDB() {
                if (!opened) mDb = mDbHelper.getWritableDatabase();
                opened = true;
                return mDb;
            }

            public SQLiteDatabase getHandle() {
                return openDB();
            }

            private void closeDB() {
                if (opened) mDbHelper.close();
                opened = false;
            }

            public void createPersonEntry(PersonData about) {
                openDB();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                about.getBitmap().compress(Bitmap.CompressFormat.PNG, 100, out);
                ContentValues cv = new ContentValues();
                cv.put(KEY_IMG, out.toByteArray());
                cv.put(KEY_NAME, about.getName());
                cv.put(KEY_AGE, about.getAge());
                cv.put(KEY_STUDY, about.getStudy());
                mDb.insert(PERSON_TABLE, null, cv);
                closeDB();
            }
        }

        <!-- data.xml -->
        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content">
            <ImageView
                android:id="@+id/img"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content" />
            <TextView
                android:id="@+id/name"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textSize="15dp"
                android:textColor="#ff0000" />
            <TextView
                android:id="@+id/age"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textSize="15dp"
                android:textColor="#ff0000" />
            <TextView
                android:id="@+id/study"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textSize="15dp"
                android:textColor="#ff0000" />
        </LinearLayout>

    When I run this on Android 1.6 and 2.1 it works, but on Android 1.5 it does not. My application targets Android 1.5. Please correct the code and send it to me. Thank you.

    Read the article

  • I get java.lang.NullPointerException when trying to get the contents of the database in Android

    - by ncountr
    I am taking the values from 8 EditText boxes in NewCard.xml and, when the Save button is pressed, storing them in a database. In the same save process I try to read the values back and present them in 8 different TextView boxes in main.xml, but when I press the button I get a force close from the emulator and the resulting error is java.lang.NullPointerException. If someone could help me that would be great; I have never used databases before, this is my first Android application, and this is the only thing keeping me from completing it and publishing it on the market as a free app. Here's the full code from NewCard.java.

        public class NewCard extends Activity {
            private static String[] FROM = { _ID, FIRST_NAME, LAST_NAME, POSITION, POSTAL_ADDRESS,
                    PHONE_NUMBER, FAX_NUMBER, MAIL_ADDRESS, WEB_ADDRESS };
            private static String ORDER_BY = FIRST_NAME;
            private CardsData cards;

            EditText First_Name;
            EditText Last_Name;
            EditText Position;
            EditText Postal_Address;
            EditText Phone_Number;
            EditText Fax_Number;
            EditText Mail_Address;
            EditText Web_Address;
            Button New_Cancel;
            Button New_Save;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.newcard);
                cards = new CardsData(this);

                // Define the Cancel button in the NewCard activity
                New_Cancel = (Button) this.findViewById(R.id.new_cancel_button);
                // Define the Cancel button behaviour
                New_Cancel.setOnClickListener(new OnClickListener() {
                    public void onClick(View arg0) {
                        NewCancelDialog();
                    }
                }); // End of the Cancel button behaviour

                // Define the Save button in the NewCard activity
                New_Save = (Button) this.findViewById(R.id.new_save_button);

                // Define the EditText fields whose values go into the database
                First_Name = (EditText) this.findViewById(R.id.new_first_name);
                Last_Name = (EditText) this.findViewById(R.id.new_last_name);
                Position = (EditText) this.findViewById(R.id.new_position);
                Postal_Address = (EditText) this.findViewById(R.id.new_postal_address);
                Phone_Number = (EditText) this.findViewById(R.id.new_phone_number);
                Fax_Number = (EditText) this.findViewById(R.id.new_fax_number);
                Mail_Address = (EditText) this.findViewById(R.id.new_mail_address);
                Web_Address = (EditText) this.findViewById(R.id.new_web_address);

                // Define the Save button behaviour
                New_Save.setOnClickListener(new OnClickListener() {
                    public void onClick(View arg0) {
                        // Save the attributes into the database, then read them back
                        try {
                            addCard(First_Name.getText().toString(),
                                    Last_Name.getText().toString(),
                                    Position.getText().toString(),
                                    Postal_Address.getText().toString(),
                                    Integer.parseInt(Phone_Number.getText().toString()),
                                    Integer.parseInt(Fax_Number.getText().toString()),
                                    Mail_Address.getText().toString(),
                                    Web_Address.getText().toString());
                            Cursor cursor = getCard();
                            showCard(cursor);
                        } finally {
                            cards.close();
                            NewCard.this.finish();
                        }
                    }
                }); // End of the Save button behaviour
            }

            // ================================================================== //
            // DATABASE FUNCTIONS

            private void addCard(String firstname, String lastname, String position, String postaladdress,
                    int phonenumber, int faxnumber, String mailaddress, String webaddress) {
                // Insert a new record into the data source.
                // You would do something similar for delete and update.
                SQLiteDatabase db = cards.getWritableDatabase();
                ContentValues values = new ContentValues();
                values.put(FIRST_NAME, firstname);
                values.put(LAST_NAME, lastname);
                values.put(POSITION, position);
                values.put(POSTAL_ADDRESS, postaladdress);
                values.put(PHONE_NUMBER, phonenumber);
                values.put(FAX_NUMBER, faxnumber);
                values.put(MAIL_ADDRESS, mailaddress);
                values.put(WEB_ADDRESS, webaddress);
                db.insertOrThrow(TABLE_NAME, null, values);
            }

            private Cursor getCard() {
                // Perform a managed query. The Activity will handle closing
                // and re-querying the cursor when needed.
                SQLiteDatabase db = cards.getReadableDatabase();
                Cursor cursor = db.query(TABLE_NAME, FROM, null, null, null, null, ORDER_BY);
                startManagingCursor(cursor);
                return cursor;
            }

            private void showCard(Cursor cursor) {
                // Read the last row's values into local variables
                long id = 0;
                String firstname = null;
                String lastname = null;
                String position = null;
                String postaladdress = null;
                long phonenumber = 0;
                long faxnumber = 0;
                String mailaddress = null;
                String webaddress = null;

                while (cursor.moveToNext()) {
                    // Could use getColumnIndexOrThrow() to get indexes
                    id = cursor.getLong(0);
                    firstname = cursor.getString(1);
                    lastname = cursor.getString(2);
                    position = cursor.getString(3);
                    postaladdress = cursor.getString(4);
                    phonenumber = cursor.getLong(5);
                    faxnumber = cursor.getLong(6);
                    mailaddress = cursor.getString(7);
                    webaddress = cursor.getString(8);
                }

                // Display on the screen: one TextView for each value
                TextView ids = (TextView) findViewById(R.id.id);
                TextView fn = (TextView) findViewById(R.id.firstname);
                TextView ln = (TextView) findViewById(R.id.lastname);
                TextView pos = (TextView) findViewById(R.id.position);
                TextView pa = (TextView) findViewById(R.id.postaladdress);
                TextView pn = (TextView) findViewById(R.id.phonenumber);
                TextView fxn = (TextView) findViewById(R.id.faxnumber);
                TextView ma = (TextView) findViewById(R.id.mailaddress);
                TextView wa = (TextView) findViewById(R.id.webaddress);

                ids.setText(String.valueOf(id));
                fn.setText(String.valueOf(firstname));
                ln.setText(String.valueOf(lastname));
                pos.setText(String.valueOf(position));
                pa.setText(String.valueOf(postaladdress));
                pn.setText(String.valueOf(phonenumber));
                fxn.setText(String.valueOf(faxnumber));
                ma.setText(String.valueOf(mailaddress));
                wa.setText(String.valueOf(webaddress));
            }

            // ================================================================== //

            // Define the dialog that alerts you when you press the Cancel button
            private void NewCancelDialog() {
                new AlertDialog.Builder(this)
                    .setMessage("Are you sure you want to cancel?")
                    .setTitle("Cancel")
                    .setCancelable(false)
                    .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
                        public void onClick(DialogInterface dialog, int id) {
                            NewCard.this.finish();
                        }
                    })
                    .setNegativeButton("No", new DialogInterface.OnClickListener() {
                        public void onClick(DialogInterface dialog, int id) {
                            dialog.cancel();
                        }
                    })
                    .show();
            } // End of the Cancel dialog
        }
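    One likely culprit, sketched below with an illustrative helper name: findViewById() only searches the layout most recently passed to setContentView(). If the TextViews (R.id.id, R.id.firstname, ...) live in main.xml while this activity has set R.layout.newcard, every findViewById() call for them returns null and the first setText() throws the NullPointerException. A minimal guard, assuming android.util.Log and android.widget.TextView are imported, could look like this inside the activity:

        // Hypothetical helper: set a TextView only if it exists in the current layout.
        private void showField(int viewId, String value) {
            TextView tv = (TextView) findViewById(viewId);
            if (tv == null) {
                // The id is not part of the layout passed to setContentView()
                // (e.g. it lives in main.xml while this activity shows newcard.xml).
                Log.w("NewCard", "View " + viewId + " is not in the current layout");
                return;
            }
            tv.setText(value);
        }

    Whether that is the actual cause here depends on which layout really contains those TextViews.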

    Read the article

  • What is the way to go to fake my database layer in a unit test?

    - by Michel
    Hi, I have a question about unit testing. Say I have a controller with one Create method which puts a new customer in the database:

        // code shortened a bit
        public ActionResult Create(FormCollection formcollection) {
            client c = new client();
            c.Name = formcollection["name"];
            ClientService.Save(c);
        }

    ClientService would call a data-layer object and save it in the database. What I do now is create a database test script and set my database to a known condition before testing. So when I test this method in the unit test, I know that there must be one more client in the database, and what its name is. In short:

        ClientController cc = new ClientController();
        cc.Create(new FormCollection() { name = "John" });
        // I know I had 10 clients before
        Assert.AreEqual(11, ClientService.GetNumberOfClients());
        // the last inserted one is John
        Assert.AreEqual("John", ClientService.GetAllClients()[10].Name);

    I've read that unit testing should not hit the database, and I've set up an IoC for the database classes, but then what? I can create a fake database class and make it do nothing, but then of course my assertions will not work, because GetNumberOfClients() will always return X, as it has no interaction with the fake database class used in the Create method. I can also create a list of clients in the fake database class, but as there will be two different instances created (one in the controller action and one in the unit test), they will have no interaction. What is the way to make this unit test work without a database?
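    One common way out, sketched here in Java for illustration (the class and method names below are invented, not taken from the question's codebase): give the controller an interface it depends on and let the test supply an in-memory fake, so the controller and the assertions share the same fake instance and no database is involved.

        import java.util.ArrayList;
        import java.util.List;

        class Client {
            String name;
            Client(String name) { this.name = name; }
        }

        // The abstraction the controller depends on.
        interface ClientService {
            void save(Client c);
            List<Client> getAllClients();
        }

        // In-memory fake used only by tests: no database anywhere.
        class FakeClientService implements ClientService {
            private final List<Client> clients = new ArrayList<Client>();
            public void save(Client c) { clients.add(c); }
            public List<Client> getAllClients() { return new ArrayList<Client>(clients); }
        }

        // The controller receives its service from outside (an IoC container in
        // production, the test itself here), instead of newing up the data layer.
        class ClientController {
            private final ClientService service;
            ClientController(ClientService service) { this.service = service; }
            void create(String name) { service.save(new Client(name)); }
        }

        public class ClientControllerTest {
            public static void main(String[] args) {
                FakeClientService fake = new FakeClientService();
                ClientController controller = new ClientController(fake);

                controller.create("John");

                // Both the controller and the assertions talk to the SAME fake,
                // which removes the "two different instances" problem.
                if (fake.getAllClients().size() != 1) throw new AssertionError("expected 1 client");
                if (!"John".equals(fake.getAllClients().get(0).name)) throw new AssertionError("expected John");
                System.out.println("ok");
            }
        }

    The design point is that the controller never constructs its own data layer; whoever builds the controller decides whether that collaborator is the real repository or the fake.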

    Read the article

  • SOA’s People Problem by Bob Rhubart

    - by JuergenKress
    Are reluctant passengers slowing down your SOA train? Based on my conversations with various experts in service-oriented architecture (SOA), the consensus is that SOA tools and technology have achieved a high level of maturity. Some even use the term industrialization to describe the current state of SOA. Given that scenario, one might assume that SOA has been wildly successful for every organization that has adopted its principles. Obviously SOA could not have achieved its current level of maturity and industrialization without having reached a tipping point in the volume of success stories to drive continued adoption. But some organizations continue to struggle with SOA. The problem, according to some experts, has little to do with tools or technologies. “One of the greatest challenges to implementing SOA has nothing to do with the intrinsic complexity behind a SOA technology platform,” says Oracle ACE Luis Augusto Weir, senior Oracle solution director at HCL AXON. “The real difficulty lies in dealing with people and processes from different parts of the business and aligning them to deliver enterprisewide solutions.” What can an organization do to meet that challenge? “Staff the right people,” says Weir. “For example, the role of a SOA architect should be as much about integrating people as it is about integrating systems. Dealing with people from different departments, backgrounds, and agendas is a huge challenge. The SOA architect role requires someone that not only has a sound architectural and technological background but also has charisma and human skills, and can communicate equally well to the business and technical teams.” The SOA architect’s communication skills are instrumental in establishing service orientation as the guiding principle across the organization. “A consistent architecture comprising both business services and IT services can comprehensively redefine the role of IT at the process level,” says Danilo Schmiedel, solution architect at Opitz Consulting. That helps to shift the focus from siloes to services and get SOA on track. To that end, Oracle ACE Director Lonneke Dikmans, a managing partner at Vennster, stresses the importance of replacing individual, uncoordinated projects with a focused program that promotes communication, cooperation, and service reuse. “Having support among lead developers and architects helps, as does having sponsors that see the business case and understand the strategic value,” she says. Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Bob Rhubard,OTN,Lonneke Dikmans,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How can Google publish Dalvik as Java-language compatible since Java is a trademark?

    - by Bruno Chagas
    According to this thread on the Java and JVM license: you can write a compiler that implements the Java Language Specification or write a JVM that implements the Java Virtual Machine Specification, but when you officially want to call it "Java", you have to prove it is compatible by passing the tests of the TCK (Technology Compatibility Kit) and pay for a license from Oracle. So, how can Google (or any other Java implementation, for that matter) claim that Dalvik is a Java virtual machine?

    Read the article

  • ADF TaskFlows Communications

    - by raghu.yadav
    Here is a list of various ADF task flow communication examples:
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/CtxEvent/CtxEvent.html
    http://thepeninsulasedge.com/frank_nimphius/2008/02/07/adf-faces-rc-refreshing-a-table-ui-from-a-contextual-event/
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/generictreeselectionlistener/index.html
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/syncheditformwithtree/index.html
    http://biemond.blogspot.com/2009/01/passing-adf-events-between-task-flow.html
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/opentaskflowintab/index.html
    http://lucbors.blogspot.com/2010/03/adf-11g-contextual-event-framework.html
    http://thepeninsulasedge.com/blog/?cat=2
    http://www.ora600.be/news/adf-contextual-events-11g-r1-ps1

    Read the article

  • What's a good scheme for multi-user database synchronization?

    - by Mason Wheeler
    I'm working on a system to allow multiple users to collaborate on an online project. Everything is fairly straightforward, except for keeping the users in sync. Each user has their own local copy of the project database, which allows them to make changes and test things out, and then send the updates to the central server. But this runs into the classic synchronization question: how do you keep two users from editing the same thing and stomping each other's work? I've got an idea that should work, but I wonder if there's a simpler way to do it. Here's the basic concept: All project data is stored in a relational database. Each row in the database has an owner. If the current user is not the owner, he can read but not write that row. (This is enforced client-side.) The user can send a request to the server to take ownership of a row, which will be granted if the server's copy says that the current owner is NULL, or to release ownership when they're done with it. It is not possible to release ownership without committing changes to the server. It is not possible to commit changes to the server without having first downloaded all outstanding changes to the server. When any changes are made to rows you own, a trigger marks that row as Dirty. When you commit changes, the database is scanned for all Dirty rows in all tables, and the data is serialized into an update file, which is posted to the server, and all rows are marked Clean. The server applies the updates on its end, and keeps the file around. When other users download changes, the server sends them the update files that they haven't already received. So, essentially this is a reinvention of version control on a relational database. (Sort of.) As long as taking ownership and applying updates to the server are guaranteed atomic changes, and the server verifies that some smart-aleck user didn't edit their local database so they could send an update for a row they don't have ownership of, it should be guaranteed to be correct, and with no need to worry about merges and merge conflicts. (I think.) Can anyone think of any problems with this scheme, or ways to do it better? (And no, "build [insert VCS here] into your project" is not what I'm looking for. I've thought of that already. VCSs work well with text, and not so well with other file formats, such as relational databases.)
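    The "take ownership of a row" step is the one place where atomicity really matters, and a single conditional UPDATE gives it to you without extra locking. A rough JDBC sketch of that server-side check (the table and column names are invented for illustration; the question does not specify a schema):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class OwnershipDao {

            // Try to claim a row for a user. The UPDATE only matches when the row is
            // currently unowned, so of two concurrent requests exactly one sees an
            // updated row and wins; the other gets false.
            public boolean takeOwnership(Connection conn, long rowId, String userId) throws SQLException {
                String sql = "UPDATE project_rows SET owner = ? WHERE id = ? AND owner IS NULL";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, userId);
                    ps.setLong(2, rowId);
                    return ps.executeUpdate() == 1;
                }
            }

            // Release a row, but only if the caller actually owns it.
            public boolean releaseOwnership(Connection conn, long rowId, String userId) throws SQLException {
                String sql = "UPDATE project_rows SET owner = NULL WHERE id = ? AND owner = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, rowId);
                    ps.setString(2, userId);
                    return ps.executeUpdate() == 1;
                }
            }
        }

    The same owner = ? predicate is what lets the server reject an update file that touches rows the sender never claimed.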

    Read the article

  • Top 10 Architect Community Articles for May 2014

    - by OTN ArchBeat
    One of the things I get to do as an OTN community manager is work with members of the architect community who want to spend extra time pounding the keyboard and risking carpal tunnel syndrome to publish articles on OTN. These articles typically cover—but are not limited to—middleware technologies (the other OTN community managers cover other technologies and product areas). Naturally, we track the popularity of these articles and use that information to help guide editorial decisions about the many article submissions we get. The list below represents the Top 10 most popular architect community articles for May 2014. (This list reflects only articles published between June 1, 2013 and May 31, 2014.)
    Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1 [August 2013] by Mark Rittman and Kevin McGinley
    Enterprise Service Bus [July 2013] by Jürgen Kress, Berthold Maier, Hajo Normann, Danilo Schmeidel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg
    Back Up a Thousand Databases Using Enterprise Manager Cloud Control 12c [January 2014] by Porus Homi Havewala
    Set Up and Manage Oracle Data Guard using Oracle Enterprise Manager Cloud Control 12c [August 2013] by Porus Homi Havewala
    SOA and Cloud Computing [April 2014] by Jürgen Kress, Berthold Maier, Hajo Normann, Danilo Schmeidel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg
    Building a Responsive WebCenter Portal Application [April 2014] by JayJay Zheng
    Using WebLogic 12c with NetBeans IDE by Markus Eisele
    Making the Move from Oracle Warehouse Builder to Oracle Data Integrator 12c by Stewart Bryson
    A Real-World Guide to Invoking OSB and EDN using C++ and Web Services [January 2014] by Sebastian Lik-Keung Ma
    Why Would Anyone Want to be an Architect? [May 2014] by Bob Rhubart
    If this list leaves you feeling inspired to write a technical article for OTN, or if you have questions about the process, drop me a line in the comments section below. I'll get back to you ASAP.

    Read the article

  • Discoverer 11.1.1.4 Certified with E-Business Suite

    - by Steven Chan
    Oracle Business Intelligence Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end users to work directly with Oracle E-Business Suite OLTP data. Discoverer 11g (11.1.1.4) is now certified with Oracle E-Business Suite. Discoverer 11.1.1.4 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.4.0, also known as FMW 11g Patchset 3. Certified E-Business Suite releases are:
    EBS Release 11i 11.5.10.2 + ATG RUP 7 and higher
    EBS Release 12.0.6 and higher
    EBS Release 12.1.1 and higher

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
    Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or what the average number of users accessing the reports is during the week vs. the weekend, or in the morning vs. the afternoon? With BI Publisher 11g you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In the previous post I talked about what BI Publisher's auditing functionality is and how to enable it so that BI Publisher can start collecting such data (How to Audit and Monitor BI Publisher Reports Access?). Now, how can you visualize such auditing data to get a better understanding and gain more insights?

    With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option you have your auditing data (that is, the user report access data) in database tables, and from there you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So first, let's take a look at how to enable the database storage option for the auditing data.

    How to Feed the Auditing Data into the Database

    First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11g you should be familiar with RCU: it creates any database schema necessary to run Fusion Middleware products, including the BI ones. You can use the same RCU that you used for your BI or BI Publisher installation to create this audit schema.

    Create the Audit Schema with RCU. Here are the steps:
    1. Go to $RCU_HOME/bin and execute the 'rcu' command.
    2. Choose Create at the starting screen and click Next.
    3. Enter your database details and click Next.
    4. Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
    5. Select 'Audit Services' from the list of schemas.
    6. Click Next and accept the tablespace creation.
    7. Click Finish to start the process.

    After this, the following three audit-related schemas should have been created in your database:
    <prefix>_IAU (e.g. KAN_IAU)
    <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
    <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

    Set Up the Datasource in WebLogic

    After you create the database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the schema created with RCU in the previous step.
    1. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console).
    2. Under Services, click the Data Sources link.
    3. Click 'Lock & Edit' so that you can make changes.
    4. Click New -> 'Generic Data Source' to create a new data source.
    5. Enter the following details for the new data source. Name: a name such as Audit Data Source-0. JNDI Name: jdbc/AuditDB. Database Type: Oracle.
    6. Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the database driver (if you're using an Oracle database), and click Next.
    7. The Connection Properties page appears. Enter the following information. Database Name: the name of the database (SID) to which you will connect. Host Name: the hostname of the database. Port: the database port. Database User Name: the name of the audit schema that you created in RCU; the suffix is always IAU for the audit schema (for example, if you gave the prefix as 'KAN', the schema name would be 'KAN_IAU'). Password: the password for the audit schema that you created in RCU.
    8. Click Next. Accept the defaults, and click Test Configuration to verify the connection.
    9. Click Next and check the listed servers where you want to make this JDBC connection available.
    10. Click 'Finish'!

    After that, make sure you click 'Activate Changes' at the top left-hand side to bring the new JDBC connection into effect.

    Register your Audit Data Store with your Domain

    Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps:
    1. Log in to Fusion Middleware Control.
    2. Navigate to WebLogic Domain, right-click on 'bifoundation.....', select Security, then Audit Store.
    3. Click the searchlight icon next to the Datasource JNDI Name field.
    4. Select the audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
    5. Click Apply to continue.
    6. Restart all the WebLogic Servers in the domain.

    After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it is not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error.

    Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

    Create a First BI Publisher Auditing Report

    Register the Auditing Datasource as a JNDI Datasource. The first thing you need to do is register the audit datasource (the JNDI/JDBC connection you created in the previous step) as a JNDI data source in BI Publisher. It is a JDBC connection registered as JNDI, which means you don't need to create a new JDBC connection by typing the connection URL, username/password, etc.; you can just register it using the JNDI name (e.g. jdbc/AuditDB).
    1. Log in to BI Publisher as Administrator (e.g. weblogic).
    2. Go to the Administration page.
    3. Click 'JNDI Connection' under Data Sources and click 'New'.
    4. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic console as the auditing datasource (e.g. jdbc/AuditDB).
    5. Click 'Test Connection' to make sure the datasource connection works.
    6. Provide appropriate roles so that the report developers or viewers can share this data source to view reports.
    7. Click 'Apply' to save.

    Create the Data Model
    1. Select Data Model from the 'New' toolbar menu.
    2. Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
    3. Select 'SQL Query' for your data set.
    4. Use Query Builder to build a query, or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'. This table contains the auditing data for the other products running on the WebLogic Server as well, such as JPS, OID, etc., so if you care only about BI Publisher, filter on the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher).

    Here is my sample SQL query:

        select
            "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
            "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
            "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
            "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
            to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
            to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
            to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
            to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
            "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
            "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
            "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
            "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
            "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
            "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
        from
            "KAN3_IAU"."IAU_BASE" "IAU_BASE"
        where
            "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

    Once you have saved a sample XML for this data model, you can create a report with it.

    Create the Report

    Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor; it's just so easy and simple to create reports, and on top of that, all the reports created with the Online Layout Editor get the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven't checked out the Interactive View or the Online Layout Editor you might want to look at these previous blog posts: Interactive Reporting with BI Publisher 11G, and Interactive Master Detail Report Just A Few Clicks Away! But of course, you can use other layout design options such as an RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.

    Visualize and Gain More Insights about your Customers (Users)!

    Now you can visualize your auditing data to get a better understanding of, and gain more insights into, the reporting environment you manage. It has been helping me personally to answer questions like these:
    How many reports were accessed or opened yesterday, today, last week?
    Who is accessing which report, and at what time?
    In which time windows does most of the report access happen?
    What are the most viewed reports?
    Who are the active users?
    What is the trend in report accesses or user accesses over the last month, last 6 months, last 12 months, etc.?

    I was talking with one of the best concierges in the world at this hotel the other day, and he was telling me that the best concierges know their customers inside out, and therefore can provide a very personal service that is customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it is a way to improve customer service. The BI Publisher 11g auditing feature enables just that, to help you understand your customers better. Happy customer service, and be the best reporting concierge!

    P.S. Please share with us what other information would be helpful for you in auditing! As always, any feedback is of great value and inspiration for us!
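    Once the audit records are flowing into IAU_BASE you can also query them outside of a BI Publisher data model. As a rough sketch only (plain JDBC against the audit schema described above; the connection URL, user and password are placeholders), counting BI Publisher events per day looks like this:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class AuditSummary {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details: point these at the <prefix>_IAU schema created by RCU.
                String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
                try (Connection conn = DriverManager.getConnection(url, "KAN_IAU", "password")) {
                    // One row per day with the number of audited BI Publisher events.
                    String sql =
                        "SELECT TO_CHAR(IAU_TSTZORIGINATING, 'YYYY-MM-DD') AS iau_date, COUNT(*) AS events " +
                        "  FROM IAU_BASE " +
                        " WHERE IAU_COMPONENTTYPE = 'xmlpserver' " +
                        " GROUP BY TO_CHAR(IAU_TSTZORIGINATING, 'YYYY-MM-DD') " +
                        " ORDER BY 1";
                    try (PreparedStatement ps = conn.prepareStatement(sql);
                         ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("iau_date") + "  " + rs.getInt("events"));
                        }
                    }
                }
            }
        }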

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Smartsoft's OCEAN Payment Processing Solution achieves Oracle Exadata Optimized status. "Performance is the most important issue for our success in the market and running OCEAN on the Oracle Exadata Database Machine provides customers with extreme performance.” – Learn more
    Banking solution FORBIS Ltd’s FORPOST achieves Oracle Exadata, Exalogic and SuperCluster Ready status. “We are glad to offer our current and future customers the newest features provided by Oracle Engineered Systems to achieve maximum reliability and speed operation.” – Learn more

    Read the article

  • Does the use of MongoDB make it easier to extend/change database-driven applications?

    - by developer10214
    When an application is created that needs to store data, an SQL database is very often used, and that is what I have done in a lot of ASP.NET applications. The resulting applications often have an ORM like Entity Framework and maybe a business layer. So when such an application needs to be extended (let's say you have to add a comment property to an object), you have to change/extend the database, then the ORM, then the business layer, and so on. To deploy the changes you have to update the target database and the application. I know that things like code-first and fluent configuration can make this approach easier. I tried MongoDB, using only the standard driver, and when I had to extend some objects all I had to do was change the code. So it feels like such changes are much easier to make when using MongoDB. I don't have much experience with larger applications and MongoDB. I know that neither an SQL database nor MongoDB fits all needs, and both have their pros and cons. I want to know if my feeling is right; if so, I would rather choose MongoDB than an SQL database.
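    To make the "just change the code" point concrete, here is a rough sketch with the current MongoDB Java driver (the collection and field names are invented for the example): adding a comment property is just a matter of writing documents that carry the new field, with no ALTER TABLE, ORM mapping change or migration script in between.

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import com.mongodb.client.model.Filters;
        import com.mongodb.client.model.Updates;
        import org.bson.Document;

        public class AddCommentField {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> orders =
                            client.getDatabase("shop").getCollection("orders");

                    // Old shape of the document: no "comment" field anywhere.
                    orders.insertOne(new Document("orderNo", 1).append("total", 49.90));

                    // New requirement: a comment property. Just start writing it;
                    // existing documents are untouched and keep working.
                    orders.insertOne(new Document("orderNo", 2)
                            .append("total", 12.50)
                            .append("comment", "gift wrap, please"));

                    // Or back-fill an existing document on demand.
                    orders.updateOne(Filters.eq("orderNo", 1),
                                     Updates.set("comment", "added later, no migration"));
                }
            }
        }

    The flip side, as noted above, is that the application code now has to cope with documents that may or may not carry the field.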

    Read the article

  • WebCenter and accessibility

    - by angelo.santagata
    I got asked about WebCenter accessibility today, and the answer is absolutely yes: Oracle WebCenter 11gR1 has been tested against Oracle's Accessibility Guidelines (OGHAG), which combine the guidelines of Section 508 and WCAG 1.0 'AA'. Here are the links to the product VPATs:
    VPAT for WebCenter Framework 11gR1: http://www.oracle.com/accessibility/templates/t1008.html
    VPAT for WebCenter Spaces 11gR1: http://www.oracle.com/accessibility/templates/t1684.html
    WebCenter 11gR1 PS1 has been tested against the current iteration of OGHAG, which factors in WCAG 2.0.

    Read the article

  • SQL SERVER What is Spatial Database? Developing with SQL Server Spatial and Deep Dive into Spatial

    What is a Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today [...]
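    As a small illustration of what that additional functionality looks like in SQL Server, the sketch below stores a couple of points in a GEOMETRY column and finds the ones near a reference location. The table, data and coordinates are hypothetical and only meant to show the spatial type and a spatial predicate in action.

    -- Hypothetical table holding point locations in a spatial column
    CREATE TABLE Landmarks
    (
        Id       INT IDENTITY PRIMARY KEY,
        Name     NVARCHAR(100),
        Location GEOMETRY
    );

    INSERT INTO Landmarks (Name, Location)
    VALUES ('Coffee Shop', geometry::STGeomFromText('POINT(10 20)', 0)),
           ('Office',      geometry::STGeomFromText('POINT(12 21)', 0));

    -- Find landmarks within 5 units of a reference point
    DECLARE @here GEOMETRY = geometry::STGeomFromText('POINT(10 20)', 0);

    SELECT Name
    FROM   Landmarks
    WHERE  Location.STDistance(@here) <= 5;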

    Read the article

  • links for 2010-05-21

    - by Bob Rhubart
    Stewart Bryson: Rittman Mead America is Recruiting. "We don't employ any junior consultants," writes Bryson, "so you would have to be highly experienced with some or all of the Oracle BI Stack (OBIEE, OWB, ODI, the Oracle Database, Hyperion), preferably have consulting experience, and excellent client-facing skills. Our consultants also provide all of our training services, and most of us write and speak at conferences, so being articulate and passionate about Oracle BI is another requirement." (tags: jobs employment consultants oracle businessintelligence)

    Read the article

  • Java Tools Support Offering Now Available

    - by Duncan Mills
    Great news! Developers can now purchase a combined support offering covering all three of Oracle's Java IDE options (JDeveloper, Oracle Enterprise Pack for Eclipse and NetBeans) in one premier support package. So no matter what tool, or mix of tools, you use, you're covered! See the Oracle Development Tools Support Offering for details. A similar bundle is available for Oracle Solaris development tools support, which is detailed on the same page.

    Read the article

  • Oracle gives new momentum to Project Lambda on Java 7 and closures: Interface evolution via "public

    Hello. For some time now we had not heard much about closures and their addition to Java 7. In response to David Flanagan, who recently voiced concern about Oracle's silence and the stagnation of Project Lambda, Brian Goetz (Oracle) submitted a discussion document a few days ago on the notion of virtual extension methods, which make it possible to add new methods (with default implementations) to an existing interface without breaking the contract with existing code.

    Read the article

  • While installing updates I get "Package operation failed" on Ubuntu 12.04

    - by user54395
    I get the following response in the details. Please help.

    installArchives() failed:
    perl: warning: Setting locale failed.
    perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_IN.ISO8859-1"
        are supported and installed on your system.
    perl: warning: Falling back to the standard locale ("C").
    locale: Cannot set LC_CTYPE to default locale: No such file or directory
    locale: Cannot set LC_MESSAGES to default locale: No such file or directory
    locale: Cannot set LC_ALL to default locale: No such file or directory
    (the same perl and locale warnings are printed three more times)
    (Reading database ... 427340 files and directories currently installed.)
    Preparing to replace thunderbird-trunk-globalmenu 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk-globalmenu_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ...
    Unpacking replacement thunderbird-trunk-globalmenu ...
    Preparing to replace thunderbird-trunk 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ...
    Unpacking replacement thunderbird-trunk ...
    Processing triggers for man-db ...
    locale: Cannot set LC_CTYPE to default locale: No such file or directory
    locale: Cannot set LC_MESSAGES to default locale: No such file or directory
    locale: Cannot set LC_ALL to default locale: No such file or directory
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for gnome-menus ...
    Processing triggers for desktop-file-utils ...
    Setting up crossplatformui (1.0.27) ...
    Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart
    Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available.
    acpid stop/waiting
    acpid start/running, process 5286
    package libqtgui4 exist
    QT_VERSION = 4
    make -C /lib/modules/3.2.0-22-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
    make[1]: Entering directory `/usr/src/linux-headers-3.2.0-22-generic'
      CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
    /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
    compilation terminated.
    make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
    make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
    make: *** [modules] Error 2
    dpkg: error processing crossplatformui (--configure):
     subprocess installed post-installation script returned error exit status 2
    No apport report written because MaxReports is reached already
    Setting up thunderbird-trunk (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ...
    Setting up thunderbird-trunk-globalmenu (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ...
    Errors were encountered while processing:
     crossplatformui
    Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
    (the "Setting up crossplatformui (1.0.27) ..." block then repeats once more, with process 5541, ending in the same smp_lock.h compile failure and the same dpkg post-installation error, exit status 2)

    Read the article

< Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >