Search Results


  • Get to know the JVM: Oracle JRockit and Flight Recorder (WebLogic Channel, in Japanese)

    - by ???02
    A Japanese-language introduction to the Oracle JRockit JVM, originally published on the WebLogic Channel. (The original text was mis-encoded; the recoverable topics are:) Oracle WebLogic Server, Oracle JRockit, Oracle JRockit Flight Recorder (introduced with JRockit R28), JRockit Mission Control, and WLDF integration. Slides (in Japanese): http://www.oracle.com/technetwork/jp/ondemand/application-grid/20100805-jfr-251804-ja.pdf

    Read the article

  • Oracle Database basics seminar: data encryption (in Japanese)

    - by Yusuke.Yamamoto
    A Japanese-language session (recorded 2011/08/24) from the Oracle Database basics seminar series, covering data encryption in Oracle Database with Oracle Advanced Security. (The original text was mis-encoded; the links survive.) Recording: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/Ango_08240930.wmv Slides (in Japanese): http://www.oracle.com/technetwork/jp/ondemand/db-basic/20100824-encryption-251749-ja.pdf

    Read the article

  • How to correctly set the ORACLE_HOME variable on Ubuntu 9.x?

    - by nowarninglabel
    I have the same problem as listed here: http://stackoverflow.com/questions/52239/oracle-lost-sysdba-password, although I did not lose the password; I entered it twice in the configure script originally, and then when I went to log in (localhost:8080/apex), the password was not accepted. I don't have anything in the database, I just want to install and use Oracle-XE. I have tried apt-get removing it twice and reinstalling, but when I run /etc/init.d/oracle-xe configure again I get "Oracle Database 10g Express Edition is already configured", despite having removed, the second time, any folders I could find for Oracle XE. I tried running sqlplus "/ as sysdba" but all I get is: "Error 6 initializing SQL*Plus Message file sp1.msb not found SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory". I tried setting the variable via export (also tried set). Tried: "export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/sqlplus" and all the subdirectories of that. Same error every time. What is ORACLE_HOME supposed to be set to? The only references I have seen either stay general or give the path above up to the version number followed by "/db_1", and I do not have a db_1. Let me know if you need any clarification. I don't understand what I did wrong in this process.
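
    For what it's worth, ORACLE_HOME should point at the Oracle software directory itself, not at the sqlplus binary inside it. A minimal sketch, assuming the default Oracle XE 10.2 layout on Ubuntu:

        # ORACLE_HOME is the product directory, not a path to an executable
        export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server
        # put the client tools on the PATH so sqlplus can find its message files
        export PATH=$ORACLE_HOME/bin:$PATH
        sqlplus "/ as sysdba"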

    Read the article

  • Problem with script to add/delete from database

    - by Jason
    Ok - the user enters a product code, price and name using a form - the script then either adds it to the database or deletes it from the database. If the user is trying to delete a product that is not in the database they get an error message. Upon successful adding/deleting they also get a message. However, when I test it I just get a blank page. Perl doesn't come up with any warnings, syntax errors or anything - it says everything is fine... but I still just get a blank page. The script... #!/usr/bin/perl #c09ex5.cgi - saves data to and removes data from a database print "Content-type: text/html\n\n"; use CGI qw(:standard); use SDBM_File; use Fcntl; use strict; #declare variables my ($code, $name, $price, $button, $codes, $names, $prices); #assign values to variables $code = param('Code'); $name = param('Name'); $price = param('Price'); $button = param('Button'); ($code, $name, $price) = format_input(); ($codes, $names, $prices) = ($code, $name, $price); if ($button eq "Save") { add(); } elsif ($button eq "Delete") { remove(); } exit; sub format_input { $codes =~ s/^ +//; $codes =~ s/ +$//; $codes =~ tr/a-z/A-Z/; $codes =~ tr/ //d; $names =~ s/^ +//; $names =~ s/ +$//; $names =~ tr/ //d; $names = uc($names); $prices =~ s/^ +//; $prices =~ s/ +$//; $prices =~ tr/ //d; $prices =~ tr/$//d; } sub add { #declare variable my %candles; #open database, format and add record, close database tie(%candles, "SDBM_File", "candlelist", O_CREAT|O_RDWR, 0666) or die "Error opening candlelist. $!, stopped"; format_vars(); $candles{$codes} = "$names,$prices"; untie(%candles); #create web page print "<HTML>\n"; print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n"; print "<BODY>\n"; print "<FONT SIZE=4>Thank you, the following product has been added.<BR>\n"; print "Candle: $codes $names $prices</FONT>\n"; print "</BODY></HTML>\n"; } #end add sub remove { #declare variables my (%candles, $msg); tie(%candles, "SDBM_File", "candlelist", O_RDWR, 0) or die "Error opening candlelist. $!, stopped"; format_vars(); #determine if the product is listed if (exists($candles{$codes})) { delete($candles{$codes}); $msg = "The candle $codes $names $prices has been removed."; } else { $msg = "The product you entered is not in the database"; } #close database untie(%candles); #create web page print "<HTML>\n"; print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n"; print "<BODY>\n"; print "<H1>Candles Unlimited</H1>\n"; print "$msg\n"; print "</BODY></HTML>\n"; }
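
    A hedged guess from reading the script rather than running it: add() and remove() both call format_vars(), but the only sub defined is format_input(), so under use strict the script dies with "Undefined subroutine" after the Content-type header has already been sent, which a browser renders as a blank page (the die message lands in the web server's error log). There is also an ordering problem: format_input() scrubs $codes, $names and $prices before those variables have been given the form values, and its return value then clobbers $code, $name and $price. A minimal sketch of a more conventional arrangement:

        # read the params straight into the variables the cleanup sub uses
        $codes  = param('Code');
        $names  = param('Name');
        $prices = param('Price');
        $button = param('Button');
        format_input();   # cleans $codes, $names, $prices in place

        if    ($button eq "Save")   { add(); }
        elsif ($button eq "Delete") { remove(); }

        # inside add() and remove(), drop the format_vars() calls entirely,
        # since the values are already cleaned above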

    Read the article

  • ORA-12705: Cannot access NLS data files or invalid environment specified

    - by Mohit Nanda
    I am getting this error while trying to create a connection pool on my Oracle database, Oracle 10gR2. java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-12705: Cannot access NLS data files or invalid environment specified I am able to connect to the database over the sqlplus and iSQL*Plus clients, but when I try to connect using this Java program, I get this error just when the connection pool is to be initialised, and the connection pool does not initialise. Can someone please help me resolve it? DB Version: Oracle version 10.2.0.1 OS: RHEL 4.0 Here is a bare-bones Java program which throws this error while connecting to my database. import java.sql.*; public class connect{ public static void main(String[] args) { Connection con = null; CallableStatement cstmt = null; String url = "jdbc:oracle:thin:@hostname:1521:oracle"; String userName = "username"; String password = "password"; try { System.out.println("Registering Driver ..."); DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver()); System.out.println("Creating Connection ..."); con = DriverManager.getConnection(url, userName, password); System.out.println("Success!"); } catch(Exception ex) { ex.printStackTrace(System.err); } finally { if(cstmt != null) try{cstmt.close();}catch(Exception _ex){} if(con != null) try{con.close();}catch(Exception _ex){} } } }
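
    For what it's worth, ORA-12705 from the thin driver usually points at locale/NLS settings rather than the Java code itself: either an invalid NLS_LANG value in the environment, or a JVM default locale that Oracle cannot map. A hedged workaround sketch, to be placed before the first connection attempt:

        // force a locale Oracle recognises before registering the driver
        java.util.Locale.setDefault(java.util.Locale.US);
        // equivalently, launch with: java -Duser.language=en -Duser.region=US connect
        // and check that NLS_LANG, if set in the shell, holds a valid value,
        // e.g. AMERICAN_AMERICA.WE8ISO8859P1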

    Read the article

  • McAfee Virus Scan and Oracle RAC

    - by Lee Gathercole
    Hi, We're experiencing a strange problem with Oracle RAC and McAfee anti-virus. As part of the installation of the Oracle RAC we disabled anti-virus as directed. We have had our RAC running fine, but when we came to re-enable the AV and reboot we got a BSOD: Abnormal Program Termination (BugCheck, STOP: 0x00000035 (0x8E984678, 0x00000000, 0x00000000, 0x00000000)) NO_MORE_IRP_STACK_LOCATIONS. Following the standard process of raising this problem with Microsoft, they identified the problem and also a fix. Microsoft talk about too many file filter drivers being present, pushing the DFS upper limit beyond the default size. Upping this value, as per MSDN, has no impact. We're able to recover from this BSOD by disabling AV. We don't have the problem if we run the AV service manually whilst the system is up. However, if we make the service automatic we fail to boot. Tech Details 2 Node Oracle 10g Cluster 2 * Windows 2003 SP2, 16GB RAM, Quad Core 3GHz Processor SAN attached storage McAfee VirusScan Enterprise 8.5.0i, Scan Engine (5300.2777), DAT Version (5536.0000) Thanks Lee

    Read the article

  • sqlite no such table

    - by Graham B
    Can anyone please help? I've seen many people with the same problem and looked at all the suggestions, but still cannot get this to work. I have tried to uninstall the application and install it again, and I have tried to change the version number and start again. I've debugged the code and it does go into the onCreate function, but when I go to run a select query it says the Users table does not exist. Any help would be greatly appreciated. Thanks guys DatabaseHandler Class public class DatabaseHandler extends SQLiteOpenHelper { // Variables protected static final int DATABASE_VERSION = 1; protected static final String DATABASE_NAME = "MyUser.db"; // Constructor public DatabaseHandler(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } // Creating Tables @Override public void onCreate(SQLiteDatabase db) { // Create the Users table // NOTE: I have the column variables saved above String CREATE_USERS_TABLE = "CREATE TABLE IF NOT EXISTS Users(" + KEY_PRIMARY_ID + " " + INTEGER + " " + PRIMARY_KEY + " " + AUTO_INCREMENT + " " + NOT_NULL + "," + USERS_KEY_EMAIL + " " + NVARCHAR+"(1000)" + " " + UNIQUE + " " + NOT_NULL + "," + USERS_KEY_PIN + " " + NVARCHAR+"(10)" + " " + NOT_NULL + ")"; db.execSQL(CREATE_USERS_TABLE); } // Upgrading database @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS Users"); onCreate(db); } UserDataSource class public class UserDataSource { private SQLiteDatabase db; private DatabaseHandler dbHandler; public UserDataSource(Context context) { dbHandler = new DatabaseHandler(context); } public void OpenWriteable() throws SQLException { db = dbHandler.getWritableDatabase(); } public void Close() { dbHandler.close(); } // Validate the user login with the username and password provided public void ValidateLogin(String username, String pin) throws CustomException { Cursor cursor = db.rawQuery( "select * from Users where " + DatabaseHandler.USERS_KEY_EMAIL + " = '" + username + "'" + " and " + DatabaseHandler.USERS_KEY_PIN + " = '" + pin + "'" , null); ........ 
} Then in the activity class, I'm calling UserDataSource uds = new UserDataSource (this); uds.OpenWriteable(); uds.ValidateLogin("name", "pin"); Any help would be great, thanks very much Graham The following is the attached log from the error report 11-23 17:47:46.414: I/SqliteDatabaseCpp(26717): sqlite returned: error code = 1, msg = no such table: Users, db=/data/data/prometric.myitemwriter/databases/MyUser.db 11-23 17:47:57.085: D/AndroidRuntime(26717): Shutting down VM 11-23 17:47:57.085: W/dalvikvm(26717): threadid=1: thread exiting with uncaught exception (group=0x40bec1f8) 11-23 17:47:57.171: D/dalvikvm(26717): GC_CONCURRENT freed 575K, 8% free 8649K/9351K, paused 2ms+6ms 11-23 17:47:57.179: E/AndroidRuntime(26717): FATAL EXCEPTION: main 11-23 17:47:57.179: E/AndroidRuntime(26717): java.lang.IllegalStateException: Could not execute method of the activity 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.view.View$1.onClick(View.java:3091) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.view.View.performClick(View.java:3558) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.view.View$PerformClick.run(View.java:14152) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.os.Handler.handleCallback(Handler.java:605) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.os.Handler.dispatchMessage(Handler.java:92) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.os.Looper.loop(Looper.java:137) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.app.ActivityThread.main(ActivityThread.java:4514) 11-23 17:47:57.179: E/AndroidRuntime(26717): at java.lang.reflect.Method.invokeNative(Native Method) 11-23 17:47:57.179: E/AndroidRuntime(26717): at java.lang.reflect.Method.invoke(Method.java:511) 11-23 17:47:57.179: E/AndroidRuntime(26717): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790) 11-23 17:47:57.179: E/AndroidRuntime(26717): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557) 11-23 17:47:57.179: E/AndroidRuntime(26717): at dalvik.system.NativeStart.main(Native Method) 11-23 17:47:57.179: E/AndroidRuntime(26717): Caused by: java.lang.reflect.InvocationTargetException 11-23 17:47:57.179: E/AndroidRuntime(26717): at java.lang.reflect.Method.invokeNative(Native Method) 11-23 17:47:57.179: E/AndroidRuntime(26717): at java.lang.reflect.Method.invoke(Method.java:511) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.view.View$1.onClick(View.java:3086) 11-23 17:47:57.179: E/AndroidRuntime(26717): ... 
11 more 11-23 17:47:57.179: E/AndroidRuntime(26717): Caused by: android.database.sqlite.SQLiteException: no such table: Users: , while compiling: select * from Users where email = '' and pin = '' 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteCompiledSql.native_compile(Native Method) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteCompiledSql.<init>(SQLiteCompiledSql.java:68) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteProgram.compileSql(SQLiteProgram.java:143) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteProgram.compileAndbindAllArgs(SQLiteProgram.java:361) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:127) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:94) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:53) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:47) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1685) 11-23 17:47:57.179: E/AndroidRuntime(26717): at android.database.sqlite.SQLiteDatabase.rawQuery(SQLiteDatabase.java:1659) 11-23 17:47:57.179: E/AndroidRuntime(26717): at projectname.database.UserDataSource.ValidateLogin(UserDataSource.java:73) 11-23 17:47:57.179: E/AndroidRuntime(26717): at projectname.LoginActivity.btn_login_Click(LoginActivity.java:47) 11-23 17:47:57.179: E/AndroidRuntime(26717): ... 14 more
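
    For what it's worth, SQLiteOpenHelper only calls onCreate() the first time the database file is created; if an earlier build created MyUser.db before the Users table code was in place, later runs open the old file and never call onCreate() again (which can make debugger observations misleading when a stale file lingers). Since onUpgrade() here already drops and recreates the table, one hedged fix is simply:

        // bump the version so onUpgrade() fires against the existing stale file
        protected static final int DATABASE_VERSION = 2;

        // or, once during development, remove the stale file outright:
        // context.deleteDatabase("MyUser.db");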

    Read the article

  • How to connect to Oracle 10g server from client machines

    - by Tareq
    I have installed Oracle 10g on one of my office's computers. I want to keep this as the database server. I am developing a .NET project which will communicate with the database server from client machines and from the server machine. I can communicate with Oracle from the server machine using the .NET project, but not from a client machine. The connection code is as follows: Public OraConn As ADODB.Connection OraConn = New ADODB.Connection OraConn.Provider = "OraOLEDB.Oracle" OraConn.ConnectionString = "Data Source=<my_database_name>;User ID=<my_user>;Password=<my_pass>;" OraConn.Open() Please tell me, step by step, how I can connect to my server database from my .NET client program residing on a client machine? Thanks in advance.
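
    For what it's worth, OraOLEDB.Oracle is part of the Oracle client software, so it has to be installed on each client machine, and the Data Source must be resolvable there (a tnsnames.ora alias, or a full descriptor). A hedged sketch of a connection string that avoids tnsnames resolution on the client; host, port and service name are placeholders for your server's values:

        OraConn.ConnectionString = _
            "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbserver)(PORT=1521))" & _
            "(CONNECT_DATA=(SERVICE_NAME=orcl)));User ID=<my_user>;Password=<my_pass>;"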

    Read the article

  • Oracle Hibernate within the NetBeans RCP

    - by jurnaltejo
    All, I have a problem with Hibernate on the NetBeans Platform 6.8. I have searched around the internet but cannot find a suitable answer. This is my story. I am using an Oracle database as the data source for my Hibernate entities, with the ojdbc14.jar driver. First I created the Hibernate entities, to be wrapped later in a NetBeans module, and tested the Hibernate connection configuration; everything just worked. I could connect to the Oracle database successfully, and every Hibernate query worked well. Then I wrapped that Hibernate entity jar as a NetBeans module, created another module to wrap my ojdbc14.jar, and tested it. I am using the Hibernate library dependency that is available on the NetBeans Platform (NetBeans 6.8), but unfortunately I got an Oracle SQL error saying "no suitable driver for [connection url]" when running the project. That's quite weird, since it didn't happen when I tested it outside the NetBeans Platform. I thought it might be related to a NetBeans lazy-loading issue, but I am not sure. Any ideas? Thanks for the help.
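
    A hedged guess based on how NetBeans Platform module classloaders work: each module sees only its own jars plus its declared dependencies, so DriverManager inside the Hibernate module cannot see ojdbc14.jar unless the wrapper module is declared as a dependency of the module that opens the session. Even then, loading the driver class explicitly can help rule the classloader in or out:

        // verify the driver class is visible from the module doing the connecting;
        // a ClassNotFoundException here confirms a missing module dependency
        Class.forName("oracle.jdbc.driver.OracleDriver", true,
                      Thread.currentThread().getContextClassLoader());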

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to directly work with a MySQL database from within Microsoft Excel so you can easily do tasks such as: Importing MySQL Data into Excel Exporting Excel data directly into MySQL to a new or existing table Editing MySQL data directly within Excel MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL installer comes in 2 versions   Full (150 MB) which includes a complete set of MySQL products with their binaries included in the download Web (1.5 MB - a network install) which will just pull MySQL for Excel over the web and install it when run.   You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:   Upon saving a Workbook containing Worksheets in Edit Mode, the user is asked if he wants to exit the Edit Mode on all Worksheets before their parent Workbook is saved, so the Worksheets are saved unprotected; otherwise the Worksheets will remain protected and users will be able to unprotect them later by retrieving the passkeys from the application log after closing MySQL for Excel. Added background coloring to the column names header row of an Import Data operation to have the same look as the one in an Edit Data operation (i.e. gray-ish background). Connection passwords can be stored securely just like MySQL Workbench does and these secured passwords are shared with Workbench in the same way connections are. Changed the way the MySQL for Excel ribbon toggle button works, instead of just showing or hiding the add-in it actually opens and closes it. Added a connection test before any operation against the database (schema creation, data import, append, export or editing) so the operation dialog is not shown and a friendlier error message is shown.   This release also contains the following bug fixes:   Added a check on every connection test for an expired password; if the password has expired, a dialog is now shown to the user to reset the password. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS Added code to escape text values to be imported to an Excel worksheet that start with an equals sign so Excel does not treat those values as formulas that will fail evaluation. This is an option turned on by default that can be turned off by users if they wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN Added code to properly check the reason for a failing connection; if it's a failing password the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown or the user cancels. If the failing connection is not related to a bad password an error message is shown to the users indicating the reason of the failure. 
Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE Added a global options dialog that can be accessed from the Schema Selection and DB Object Selection panels where the timeouts for the connection to the DB Server and for the query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should actually be set to Text for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL Removed code that was replacing all spaces typed by the user in an overridden data type for a new column in an Export Data operation, also improved the data type detection code to flag as invalid data types with parenthesis but without any text inside or where the contents inside the parenthesis are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES Added support for the year data type with a length of 2 or 4 and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH) Fixed code for Export Data operations where users overrode the data type for columns typing Text in the data type combobox, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE Changed the location of the registry where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER so the add-in is accessible by all users and not only to the user that installed it. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER Added support for Excel 2013 Single Document Interface, now that Excel 2013 creates 1 window per workbook also the Excel Add-In maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND) Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had different number of columns, some columns with the same name but some of those lying on column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD Fixed some code for Edit Data operations that was escaping special characters twice (during editing in Excel and then upon sending the query to the MySQL server). 
MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL Upgraded MySQL Utility with latest version that encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html You can find our team’s blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Updating textfield in doctrine produces an exception

    - by james-murphy
    I have a textfield that contains, say for example, the following text:- "A traditional English dish comprising sausages in Yorkshire pudding batter, usually served with vegetables and gravy." This textfield is in a form that simply updates an item record using its ID. If I edit part of the textfield and replace "and gravy." with "humous.", so that the textfield now contains "A traditional English dish comprising sausages in Yorkshire pudding batter, usually served with vegetables and humous." I get the following exception:- Fatal error: Uncaught exception 'Doctrine_Query_Exception' with message 'Unknown component alias humous' in C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php:780 Stack trace: C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(767): Doctrine_Query_Abstract-getQueryComponent('humous') C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Set.php(58): Doctrine_Query_Abstract-getAliasDeclaration('humous') C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(2092): Doctrine_Query_Set-parse('i.details = 'A ...') C:\Projects\nitrous\lightweight\system\database\Doctrine\Query.php(1058): Doctrine_Query_Abstract-_processDqlQueryPart('set', Array) C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(971): Doctrine_Query-getSqlQuery(Array) C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(1030): Doctrine_Query_Abstract-_execute(Array) C:\Projects\nitrous\lightweight\system\appl in C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php on line 780 I'm using Doctrine 1.0.6 hooked into CodeIgniter 1.7.0 if anyone is interested. My doctrine query that actually performs the update looks as follows:- public function updateItems($id, $arrayItem) { $query = new Doctrine_Query(); $query->update('Item i'); foreach($arrayItem as $key => $value) { $query->set('i.'.$key, "'".$value."'"); } $query->where('i.id = ?', $id); return $query->execute(); } This seems bizarre because if I replace the entire string "A traditional English dish comprising sausages in Yorkshire pudding batter, usually served with vegetables and humous." with something completely different like just "test" it doesn't throw an exception and works just fine. This baffles me... is it a bug in Doctrine or have I missed something?
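
    The stack trace shows the value being spliced into the DQL string (Doctrine_Query_Set->parse('i.details = 'A ...')), so whether an update works depends on how Doctrine's parser happens to tokenise the text. Binding the value as a parameter avoids hand-quoting altogether; a hedged sketch of the same method:

        public function updateItems($id, $arrayItem)
        {
            $query = new Doctrine_Query();
            $query->update('Item i');
            foreach ($arrayItem as $key => $value) {
                // bind as a parameter; Doctrine handles quoting and escaping
                $query->set('i.' . $key, '?', $value);
            }
            $query->where('i.id = ?', $id);
            return $query->execute();
        }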

    Read the article

  • Why does Perl's DBI complain about "failed: ERROR OCIEnvNlsCreate" when I try to connect to Oracle 11g?

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple Perl script: failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at The script is as follows: #!/usr/local/bin/perl use strict; use DBI; if ($#ARGV < 3) { print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n"; exit 0; } my ($user, $pwd, $sid, $port) = @ARGV; my $host = `hostname`; my $dbh; my $sth; my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port"; openDbConnection(); closeDbConnection(); sub openDbConnection() { $dbh = DBI->connect ($dbname, $user ,$pwd , { RaiseError => 1}) || die "Database connection not made: $DBI::errstr"; } sub closeDbConnection() { #$sth->finish(); $dbh->disconnect(); } Anyone seen this problem before?
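
    Two hedged observations: OCIEnvNlsCreate failures generally mean the Oracle client environment is incomplete in the process running the script, so ORACLE_HOME (and on Linux, LD_LIBRARY_PATH) must be in place before DBD::Oracle loads; often they have to be exported by the shell or service that launches the script rather than set inside it. Separately, backticks keep the trailing newline, so $host holds "somehost\n" and corrupts the connect string. A sketch, with an example path:

        # environment needed by the Oracle client libraries (example path)
        export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib

        # and in the Perl script, strip the newline that backticks leave behind:
        #   my $host = `hostname`;
        #   chomp $host;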

    Read the article

  • How do I store a string longer than 4000 characters in an Oracle Database using Java/JDBC?

    - by Ventrue
    I’m not sure how to use Java/JDBC to insert a very long string into an Oracle database. I have a String which is greater than 4000 characters, let's say it's 6000. I want to take this string and store it in an Oracle database. The way to do this seems to be with the CLOB datatype. Okay, so I declared the column as description CLOB. Now, when it comes time to actually insert the data, I have a prepared statement pstmt. It looks like pstmt = conn.prepareStatement("INSERT INTO Table VALUES(?)"). So I want to use the method pstmt.setClob(). However, I don't know how to create a Clob object with my String in it; there's no constructor (presumably because it can be potentially much larger than available memory). How do I put my String into a Clob? Keep in mind I'm not a very experienced programmer; please try to keep the explanations as simple as possible. Efficiency, good practices, etc. are not a concern here, I just want the absolute easiest solution. I'd like to avoid downloading other packages if at all possible; right now I'm just using JDK 1.4 and what is labelled "ojdbc14.jar". I've looked around a bit but I haven't been able to follow any of the explanations I've found. If you have a solution that doesn't use Clobs, I'd be open to that as well, but it has to be one column.
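
    For what it's worth, with ojdbc14.jar on JDK 1.4 you usually don't need to construct a Clob at all; handing the driver a character stream is the simplest route. A hedged sketch (table and column names are made up):

        import java.io.StringReader;

        String longText = myLongString;   // the 6000-character string
        PreparedStatement pstmt = conn.prepareStatement(
            "INSERT INTO my_table (description) VALUES (?)");
        // stream the characters into the CLOB column
        pstmt.setCharacterStream(1, new StringReader(longText), longText.length());
        pstmt.executeUpdate();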

    Read the article

  • Deploying ASP.Net MVC application

    - by a_m0d
    I've recently reached the stage where an ASP.NET MVC application I am developing is ready to be deployed to the production server. I've worked out how to publish the application - I've got all the files on the server, and can access them over the internet. However, I can't work out how to deploy my database. The server has SQL Server Management Studio Express installed, as the database used is a SQL Server Express database. I have the server instance up and running - I just don't know how to add the tables, etc. to the database. I have created the "CREATE TABLE" scripts on the development machine, but as far as I can see, Management Studio does not provide any way to actually run these scripts. I have looked through all the menu items that I could see, and none of them worked. Even using the "Create new query..." option and pasting the script in didn't work. When I try "File-Open...", select a script to run, set the correct database from the dropdown list on the toolbar, and then execute the script, it complains about not finding the database file (even when I set the USE [...] statement to the correct path). Deleting the USE [...] statement, the script complains that it can't find the [dbo].[Invoices] object; however, it shouldn't be able to find it, because it's trying to create it! tl;dr: What's the best way to make sure that the database on the production machine matches the database on my development machine?
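
    Two hedged notes: the "[dbo].[Invoices] object not found" message is typical of a script running against the wrong database (often master), so a USE statement should name just the database, not a file path. And instead of fighting Management Studio Express, the sqlcmd utility that ships with SQL Server Express can run the script directly; instance, database and file names below are examples:

        sqlcmd -S .\SQLEXPRESS -d MyAppDb -i create_tables.sql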

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF, .LDF pair) to SQL Server, using SQL Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this). NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise version. So "User Instances" are not an option.
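
    For what it's worth, the DATA directory is just the default location, not a requirement; an attach that names the files explicitly can point anywhere the SQL Server service account can access. A hedged T-SQL sketch (paths and names are examples):

        CREATE DATABASE CustomerA
        ON (FILENAME = 'C:\Customers\CustomerA\CustomerA.mdf'),
           (FILENAME = 'C:\Customers\CustomerA\CustomerA_log.ldf')
        FOR ATTACH;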

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of Web Applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and a central database for shared data (logins, etc.) A co-worker has been positing that this strategy will not scale because having so many different connection pools will not be scalable and that we should refactor the database so that all of the different applications use a single central database and that any modifications that may be unique to a system will need to be reflected from that one database and then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning to use only as many connections as necessary across the different pools (low volume apps getting fewer connections, high volume apps getting more, etc.) the number of pools doesn't matter compared to the number of connections, or more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications on the schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this: create table Parent ( parent_id varchar2(20) not null primary key ); create table Child ( child_id number not null primary key, parent_id varchar2(20) not null, constraint fk_parent_id foreign key (parent_id) references Parent (parent_id) ); This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records: Parent: parent_id --------- ABC123 Child: child_id parent_id -------- --------- 1 ABC123 2 ABC123 And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then enabling the constraint. Is there a better, single-step way to update this content?
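
    One hedged single-transaction alternative to disabling the constraint, assuming the schema can be altered once up front: recreate the foreign key as deferrable, so both updates are checked only at commit time. A sketch:

        ALTER TABLE Child DROP CONSTRAINT fk_parent_id;
        ALTER TABLE Child ADD CONSTRAINT fk_parent_id
            FOREIGN KEY (parent_id) REFERENCES Parent (parent_id)
            DEFERRABLE INITIALLY DEFERRED;

        -- both updates are now validated together at COMMIT
        UPDATE Parent SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        UPDATE Child  SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        COMMIT;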

    Read the article

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation? Before: SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id FROM table1 t1, table2 t2, table3 t3 WHERE t1.id = t2.id AND t1.version_id = t2.version_id AND t2.id = 123 AND t1.version_id = t3.version_id AND t1.VERSION_NAME <> 'AA' order by t1.id Plan Cost: 831 Elapsed Time: 00:00:21.40 Number of Records: 40,717 After: SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id FROM table2 t2, table3 t3, table1 t1 WHERE t1.id = t2.id + 0 AND t1.version_id = t2.version_id + 0 AND t2.id = 123 AND t1.version_id = t3.version_id + 0 AND t1.VERSION_NAME || '' <> 'AA' AND t3.version_id = t2.version_id + 0 order by t1.id Plan Cost: 686 Elapsed Time: 00:00:00.95 Number of Records: 40,717 Questions: Why does re-arranging the order of the tables in the FROM clause help? Why does adding + 0 to the WHERE clause comparisons help? Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?
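
    A hedged reading of what the optimizer tool generated: wrapping a column in an expression (col + 0 for numbers, col || '' for strings) makes that predicate ineligible for any index on the column, which, together with the USE_NL_WITH_INDEX hint and the extra t3-to-t2 join predicate, steers the optimizer toward a specific join order and method; it is not about NULL handling, since neither construct changes how NULLs compare. The idiom in isolation:

        -- eligible for an index on t2.version_id
        WHERE t1.version_id = t2.version_id
        -- the expression disables index use on t2.version_id for this predicate
        WHERE t1.version_id = t2.version_id + 0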

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews with candidates at different stages of the process, because we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during a telephone interview. To help you prepare even better, we would like to introduce the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. Important to keep in mind is that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it: • Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of the requirements as mentioned in the job description. On the other side, list your qualities that fulfil the requirements of the employer. This will help you in answering questions about why you are the best candidate for the job and how you fit the role. • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company's business and can ask more in-depth questions. • Be prepared for the most used introduction question: "Tell me a bit about yourself". Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning. • Write down a minimum of five good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this kind of question because past behaviour is scientifically proven to be the best predictor of future behaviour. • List five questions to ask the interviewer about the job, the company and the industry, to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com. • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad. • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades. Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • Enabling OUD Entry Cache for large static groups

    - by Sylvain Duloutre
    Oracle Unified Directory can take advantage of several caches to improve performance, especially the so-called database cache and the file system cache. In addition to those, it is possible to use an entry cache to cache LDAP entries. By default, the entry cache is not used. In specific deployments involving large static groups, it may be worth loading the group entries into the entry cache to speed up group membership and group-based ACI evaluation. To do so, run the following commands. First, specify which entries should reside in the entry cache. In the command below, only entries matching the LDAP filter "(|(objectclass=groupOfNames)(objectclass=groupOfUniqueNames))" will be stored in the entry cache.

        dsconfig set-entry-cache-prop \
                 --cache-name FIFO \
                 --add include-filter:\(\|\(objectclass=groupOfNames\)\(objectclass=groupOfUniqueNames\)\) \
                 --port <ADMIN_PORT> \
                 --bindDN cn=Directory\ Manager \
                 --bindPassword ****** \
                 --no-prompt

    Then enable the entry cache:

        dsconfig set-entry-cache-prop \
                 --cache-name FIFO \
                 --set enabled:true \
                 --port <ADMIN_PORT> \
                 --bindDN cn=Directory\ Manager \
                 --bindPassword ****** \
                 --no-prompt

    In addition to that, you can control how much memory the entry cache can use:

        oud@s96sec1d0-v3:/application/oud : dsconfig -X -n -p <ADMIN PORT> -D "cn=Directory Manager" -w <password> get-entry-cache-prop --cache-name FIFO
        Property           : Value(s)
        -------------------:-----------------------------------------------------------
        cache-level        : 1
        enabled            : true
        exclude-filter     : -
        include-filter     : (|(objectclass=groupOfNames)(objectclass=groupOfUniqueNames))
        max-entries        : 2147483647
        max-memory-percent : 90

    You can change the max-entries and max-memory-percent properties to control the entry cache size using the dsconfig set-entry-cache-prop command.
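
    For example, a sketch following the same dsconfig pattern as above to cap the cache at half of the available memory (the value is illustrative):

        dsconfig set-entry-cache-prop \
                 --cache-name FIFO \
                 --set max-memory-percent:50 \
                 --port <ADMIN_PORT> \
                 --bindDN cn=Directory\ Manager \
                 --bindPassword ****** \
                 --no-prompt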

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort. So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no. The design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of functionality requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code. So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn1 describes as a “fully dressed Use Case”. It is worth mentioning at this point the highly scalable nature of OUM, in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps. In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification. If we now return to the original question of “Where’s my MD.050?”, the full answer would be:
    1. Capture the functional requirements within a Use Case
    2. Group related Use Cases into a Package
    3. Create an Analysis Model for each Package
    4. Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact
    An alternative answer for a relatively simple Custom Extension would be:
    1. Capture the functional requirements within a Use Case
    2. Optionally, group related Use Cases into a Package
    3. Create an Analysis Specification (AN.100) for each Package
    1 Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: How hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Otherwise they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content. Much has been written before about the advantages of that. So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold: - There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier to add to the source product's implementation project than to add later; - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project, or which may even be impossible to fix afterwards. The Conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data. Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation. I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is: bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation and it has to be managed properly -- just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • Stretch in multiple components using af:popup, af:region, af:panelTabbed

    - by Arvinder Singh
    Case study: I have a pop-up(dialogue) that contains a region(separate taskflow) showing a tab. The contents of this tab is in a region having a separate taskflow. The jsff page of this taskflow contains a panelSplitter which in turn contains a table. In short the components are : pop-up(dialogue) --> region(separate taskflow) --> tab --> region(separate taskflow) --> panelSplitter --> table At times the tab is not displayed with 100% width or the table in panelSplitter is not 100% visible or the splitter is not visible. Maintaining the stretch for all the components is difficult......not any more!!! Below is the solution that you can make use of in many similar scenarios. I am mentioning the major code snippets affecting the stretch and alignment. pop-up: <af:popup> <af:dialog id="d2" type="none" title="" inlineStyle="width:1200px"> <af:region value="#{bindings.PriceChangePopupFlow1.regionModel}" id="r1"/> </af:dialog> The above region is a jsff containing multiple tabs. I am showing code for a single tab. I kept the tab in a panelStretchLayout. <af:panelStretchLayout id="psl1" topHeight="300px" styleClass="AFStretchWidth"> <af:panelTabbed id="pt1"> <af:showDetailItem text="PO Details" id="sdi1" stretchChildren="first" > <af:region value="#{bindings.PriceChangePurchaseOrderFlow1.regionModel}" id="r1" binding="# {pageFlowScope.priceChangePopupBean.poDetailsRegion}" /> This "region" displays a .jsff containing a table in a panelSplitter. <af:panelSplitter id="ps1"  orientation="horizontal" splitterPosition="700"> <f:facet name="first"> <af:panelHeader text="PurchaseOrder" id="ph1"> <af:table id="md1" rows="#{bindings.PurchaseOrderVO.rangeSize}" That's it!!! We're done... Note the stretchChildren="first" attribute in the af:showDetailItem. That does the trick for us. Oracle docs say the following about stretchChildren :  Valid Values: none, first The stretching behavior for children. Acceptable values include: "none": does not attempt to stretch any children (the default value and the value you need to use if you have more than a single child; also the value you need to use if the child does not support being stretched) "first": stretches the first child (not to be used if you have multiple children as such usage will produce unreliable results; also not to be used if the child does not support being stretched)

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale Doing Basel III liquidity analysis Private banking – transaction filtering to feed [relational] data marts Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere 360 Degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies] Data Mining Bypass data engineering [I interpret this as running against the full data set rather than on samples] Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords? Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting, and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed real use cases, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: Determine the Strategy – Design a platform and evangelize this within the organization Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform. 
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020. In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets. "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing." Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result. The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure in place to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from having less outdated infrastructure and fewer legacy systems, which are often cited by others as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article
