Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.


  • Be Prepared: Technology Trends Converge and Disrupt

    - by Richard Lefebvre
    Cloud. Big data. Mobile. Social media: these mega trends in technology have had a profound impact on our lives. And now, according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, “Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption,” appeared in Forbes on June 6. For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more. These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth. But, according to Puri, companies need to be prepared. In this article, Puri urges companies to get out in front of the new innovations. He gives good directions on how to do so to accelerate time to value and minimize risk. The post is a good thought leadership piece to pass on to your customers.

    Read the article

  • Two views of Federation: inside out, and outside in

    - by Darin Pendergraft
    IDM customers I speak to have spent a lot of time thinking about enterprise SSO - asking your employees to log in to multiple systems, each with a distinct, hard-to-guess (translation: hard-to-remember) password that fits the corporate security policy for length and complexity, is a strategy that is just begging for a lot of help-desk password-reset calls. So forward-thinking organizations have implemented SSO for as many systems as possible. With the mix of enterprise apps moving to the cloud, it makes sense to continue this SSO strategy by federating with those cloud apps and services. Organizations maintain control, since employee access to the externally hosted apps is provided via the enterprise account. If the employee leaves, their access to the cloud app is terminated when their enterprise account is disabled. The employees don't have to remember another username and password - so life is good. From the outside in - I am excited about the increasing use of social sign-on, or BYOI (Bring Your Own Identity). The convenience of single sign-on is extended to customers/users/prospects when organizations enable access to business services using a social ID. The last thing I want when visiting a website or blog is to create another account, so using my Google or Twitter ID is a very nice, quick way to get access without having to go through a registration process that creates yet another username/password I have to try to remember. The convenience of not having to maintain multiple passwords is obvious, whether you are an employee or a customer - and the security benefit of not having lots of passwords to lose or forget is there as well. Are enterprises allowing employees to use their personal (social) IDs for enterprise apps? Not yet, but we are moving in the right direction, and we will get there some day.

    Read the article

  • cannot log in to any account except guest after mounting new partition

    - by student
    I really, really need help to do my homework for school. Two days ago I was installing Oracle 11g on my Ubuntu machine. There were lots of problems along the way, but with a lot of searching I somehow reached the next phase of the installation (it was my first time installing it on Ubuntu by myself). When I set the password for the oracle user and clicked next, the installer said there was not enough space for the /home/oracle folder. So, to make more space, I checked GParted and mounted another partition using the disk management tool. Since rebooting, I cannot log in to my own administrator account, or even to the oracle account I made for Oracle 11g. When I type the password it seems to be accepted, but the login screen simply redisplays without actually logging me in. So I am logged in here with the guest account, and because it is a guest account I cannot even open a console to restore or fix things. I have been struggling for three days. What do I need to do? And when Oracle said it needed enough space for its home folder, what should I have done?

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs
    Wednesday, December 12, at 15.00 Central European Time
    Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services
    Thursday, December 13, at 9.00 am Pacific Time
    As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning
    Friday, December 14, at 9.00 am Pacific Time
    We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars so you'll get the opportunity to ask your questions!
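
    For readers new to the topic of the third session, here is a minimal sketch of MySQL range partitioning; the table, columns, and partition names are our own illustration, not from the webinar. Queries filtering on order_date only touch the relevant partitions (partition pruning), which is where the full-table-scan speedup comes from:

        -- Range-partition a large table by year. Note: in MySQL, the
        -- partitioning key must be part of every unique key, which is why
        -- order_date is included in the primary key.
        CREATE TABLE orders (
            order_id   BIGINT NOT NULL,
            order_date DATE   NOT NULL,
            amount     DECIMAL(10,2),
            PRIMARY KEY (order_id, order_date)
        )
        PARTITION BY RANGE (YEAR(order_date)) (
            PARTITION p2011 VALUES LESS THAN (2012),
            PARTITION p2012 VALUES LESS THAN (2013),
            PARTITION pmax  VALUES LESS THAN MAXVALUE
        );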

    Read the article

  • How to organize SQL script files

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon their requests. My problem is that I save almost every SQL query I write, and now my list has grown too long. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the names longer and still does not completely meet my needs. Sometimes someone comes and demands a special report, and I name the file after him, but this is also not so good. I know duplicates and very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction to rename all these files and folders and organize my queries for easy and better control? TIA.

    Read the article

  • "Parameter type conflict" when calling Java Stored Procedure within another Java Stored Procedure

    - by GuiPereira
    Here's the problem (sorry for the bad English): I'm working with JDeveloper and Oracle 10g, and I have a Java Stored Procedure that calls another one like this:

        int sd = 0;
        try {
            CallableStatement clstAddRel = conn.prepareCall(" {call FC_RJS_INCLUIR_RELACAO_PRODCAT(?,?)} ");
            clstAddRel.registerOutParameter(1, Types.INTEGER);
            clstAddRel.setString(1, Integer.toString(id_produto_interno));
            clstAddRel.setString(2, ac[i].toString());
            clstAddRel.execute();
            sd = clstAddRel.getInt(1);
        } catch (SQLException e) {
            String sqlTeste3 = "insert into ateste values (SQ_ATESTE.nextval, ?)";
            PreparedStatement pstTeste3 = conn.prepareStatement(sqlTeste3);
            pstTeste3.setString(1, "erro: " + e.getMessage() + ac[i]);
            pstTeste3.execute();
            pstTeste3.close();
        }

    I'm recording the error in a table called ATESTE because this Java SP is a procedure and not a function, so I have to do the DML manipulation inside. The error message I'm getting is 'parameter type conflict'. The function FC_RJS_INCLUIR_RELACAO_PRODCAT is a Java Stored Procedure too; it has already been exported to Oracle and returns an int, which I have to read to decide which web service to call from this Java SP. I have already tried OracleTypes.NUMBER in registerOutParameter. Does anyone know what I'm doing wrong?

    Read the article

  • Getting a ResultSet/RefCursor over a database link

    - by JonathanJ
    From the answers to http://stackoverflow.com/questions/1122175/calling-a-stored-proc-over-a-dblink it seems that it is not possible to call a stored procedure and get the ResultSet/RefCursor back if you are making the SP call across a remote DB link. We are also using Oracle 10g. We can successfully get single-value results across the link, and can successfully call the SP and get the results locally, but we get the same 'ORA-24338: statement handle not executed' error when reading the ResultSet from the remote DB. My question: is there any workaround to using the stored procedure? Is a shared view a better solution? Pipelined rows? Sample stored procedure:

        CREATE OR REPLACE PACKAGE BODY example_SP IS
            PROCEDURE get_terminals(p_CD_community IN community.CD_community%TYPE,
                                    p_cursor OUT SYS_REFCURSOR) IS
            BEGIN
                OPEN p_cursor FOR
                    SELECT cd_terminal
                    FROM   terminal t, community c
                    WHERE  c.cd_community = p_CD_community
                    AND    t.id_community = c.id_community;
            END;
        END example_SP;
        /

    Sample Java code that works locally but not remotely:

        Connection conn = DBConnectionManagerFactory.getDBConnectionManager().getConnection();
        CallableStatement cstmt = null;
        ResultSet rs = null;
        String community = "EXAMPLE";
        try {
            cstmt = conn.prepareCall("{call example_SP.get_terminals@remote_address(?,?)}");
            cstmt.setString(1, community);
            cstmt.registerOutParameter(2, OracleTypes.CURSOR);
            cstmt.execute();
            rs = (ResultSet) cstmt.getObject(2);
            while (rs.next()) {
                LogUtil.getLog().logInfo("Terminal code=" + rs.getString("cd_terminal"));
            }
        }
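
    Since a shared view is floated above as a possible workaround, here is a minimal sketch of that route, assuming the remote tables from the sample procedure (the view name is our own invention): expose the join as a view on the remote database, then read it across the link with a plain SELECT, which sidesteps returning a ref cursor from the remote side entirely.

        -- On the remote database: expose the procedure's query as a view.
        CREATE OR REPLACE VIEW community_terminals AS
            SELECT c.cd_community, t.cd_terminal
            FROM   terminal t, community c
            WHERE  t.id_community = c.id_community;

        -- On the local database: query it across the link. This returns an
        -- ordinary result set, avoiding the remote ref-cursor limitation.
        SELECT cd_terminal
        FROM   community_terminals@remote_address
        WHERE  cd_community = 'EXAMPLE';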

    Read the article

  • How to avoid OCIError in rails application?

    - by qichunren
    OCIError (ORA-12541: TNS:no listener):

        oci8.c:270:in oci8lib.so
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_oci_connection.rb:223:in `new'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_oci_connection.rb:223:in `new_connection'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_oci_connection.rb:328:in `initialize'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_oci_connection.rb:24:in `new'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_oci_connection.rb:24:in `initialize'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_connection.rb:9:in `new'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_connection.rb:9:in `create'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connection_adapters/oracle_enhanced_adapter.rb:50:in `oracle_enhanced_connection'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:291:in `send'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:291:in `connection='
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:259:in `retrieve_connection'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1063:in `table_exists?'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1153:in `inspect'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/core_ext/class/inheritable_attributes.rb:131:in `to_proc'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/dependencies.rb:426:in `collect'

    It seems that the Rails app has lost its Oracle connection. How can I handle this in the application controller's rescue_action_in_public(exception)? I use:

        def rescue_action_in_public(exception)
          case exception.class.to_s
          when "OCIError"
            # my solution
          end
        end

    but it still throws me the 500.html page.

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question; just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change. Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account such as sysdba and GRANT access to the first schema to the necessary tables on the second schema, e.g. GRANT ALL ON schema_two.SomeTable TO schema_one; Our problem is that every time we make a change to our database design and want to drop and re-create our database tables, the access we GRANT-ed to schema_one went away when the table was dropped. Thus, this creates another annoying step wherein we must log in with an admin account to re-GRANT the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table in such a way that the GRANT-ed permissions survive a table being dropped and then re-created? And if this isn't possible, then is there a better way to go about this?
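
    One way to take the admin account out of the loop entirely, sketched below under the assumption that the re-creation scripts can be run as the owning schema (the table's columns here are placeholders): the owner of a table can always grant privileges on its own objects, so the GRANT can simply live at the end of the creation script itself and be re-issued on every re-creation.

        -- Run as SCHEMA_TWO, the owner: no sysdba account needed, because
        -- an owner can always grant privileges on its own tables.
        DROP TABLE SomeTable;

        CREATE TABLE SomeTable (
            id   NUMBER PRIMARY KEY,
            data VARCHAR2(100)
        );

        -- Re-issue the grant as the last step of every re-creation,
        -- so it can never be forgotten after a drop.
        GRANT ALL ON SomeTable TO schema_one;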

    Read the article

  • LPX-00607 for ora:contains in Java but not SQL*Plus

    - by Windle
    Hey all, I am trying to do some SQL queries out of Oracle 11g and am having issues using ora:contains. I am using Spring's JDBC implementation, and my code generates the SQL statement:

        select *
        from   view_name
        where  column_a = ?
        and    column_b = ?
        and    existsNode(xmltype(clob_column),
                          'record/name[ora:contains(text(), "name1") > 0]',
                          'xmlns:ora="http://xmlns.oralce.com/xdb"') = 1

    I have removed the actual view/column names, obviously, but when I copy that into sqlplus and substitute in random values, the select executes properly. When I try to run it in my DAO code I get this stack trace:

        org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback;
        uncategorized SQLException for SQL [the big select above]; SQL state [99999];
        error code [31011]; ORA-31011: XML parsing failed.
        ORA-19202: Error occurred in XML processing
        LPX-00607: Invalid reference: 'contains'
        ;nested exception is java.sql.SQLException: ORA-31011: XML parsing failed
        ORA-19202: Error occurred in XML processing
        LPX-00607: Invalid reference: 'contains'
        (continues on like this for a while....)

    I think it is worth mentioning that I am using Maven, and it is possible I am missing some dependency that is required for this. Sorry the post is so long, but I wanted to err on the side of too much info. Thanks for taking the time to read this at least =) -Windle

    Read the article

  • number of args for stored procedure PLS-00306

    - by Peter Kaleta
    Hi, I have a problem calling my procedure. Oracle screams PLS-00306: wrong number or types of arguments in call to procedure. With my type declaration, the procedure call matches the declaration in the header below exactly. If I run it as a separate procedure it works; when I use it in the ODCI interface for extensible index creation, it throws PLS-00306.

        MEMBER PROCEDURE FILL_TREE_LVL(target_column VARCHAR2, cur_lvl NUMBER,
                                       max_lvl NUMBER, parent_rect NUMBER,
                                       start_x NUMBER, start_y NUMBER,
                                       end_x NUMBER, end_y NUMBER)
        IS
            stmt        VARCHAR2(2000);
            rect_id     NUMBER;
            diff_x      NUMBER;
            diff_y      NUMBER;
            new_start_x NUMBER;
            new_end_x   NUMBER;
            i           NUMBER;
            j           NUMBER;
        BEGIN
            {...}
        END FILL_TREE_LVL;

        STATIC FUNCTION ODCIINDEXCREATE(ia SYS.ODCIINDEXINFO, parms VARCHAR2,
                                        env SYS.ODCIEnv) RETURN NUMBER
        IS
            stmt     VARCHAR2(2000);
            stmt2    VARCHAR2(2000);
            min_x    NUMBER;
            max_x    NUMBER;
            min_y    NUMBER;
            max_y    NUMBER;
            lvl      NUMBER;
            rect_id  NUMBER;
            pt_tab   VARCHAR2(50);
            rect_tab VARCHAR2(50);
            cnum     NUMBER;
            TYPE point_rect IS RECORD (
                point_id NUMBER,
                rect_id  NUMBER
            );
            TYPE point_rect_tab IS TABLE OF point_rect;
            pr_table point_rect_tab;
        BEGIN
            {...}
            FILL_TREE_LVL('any string', 0, lvl, min_x, min_y, max_x, max_y);
            {...}
        END;

    Read the article

  • calling a stored proc over a dblink

    - by neesh
    I am trying to call a stored procedure over a database link. The code looks something like this:

        declare
            symbol_cursor package_name.record_cursor;
            symbol_record package_name.record_name;
        begin
            symbol_cursor := package_name.function_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line( symbol_record.field_a );
            end loop;
            close symbol_cursor;
        end;

    When I run this from the same DB instance and schema that package_name belongs to, it runs fine. However, when I run it over a database link (with the required modifications to the stored proc name, etc.) I get an Oracle error: ORA-24338: statement handle not executed. The modified version of this code over a dblink looks like this:

        declare
            symbol_cursor package_name.record_cursor@db_link_name;
            symbol_record package_name.record_name@db_link_name;
        begin
            symbol_cursor := package_name.function_name@db_link_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line( symbol_record.field_a );
            end loop;
            close symbol_cursor;
        end;

    Read the article

  • Number of args for stored procedure PLS-00306

    - by Peter Kaleta
    Hi, I have a problem calling my procedure. Oracle screams PLS-00306: wrong number or types of arguments in call to procedure. With my type declaration, the procedure call matches the declaration in the header below exactly. If I run it as a separate procedure it works; when I use it in the ODCI interface for extensible index creation, it throws PLS-00306.

        MEMBER PROCEDURE FILL_TREE_LVL(target_column VARCHAR2, cur_lvl NUMBER,
                                       max_lvl NUMBER, parent_rect NUMBER,
                                       start_x NUMBER, start_y NUMBER,
                                       end_x NUMBER, end_y NUMBER)
        IS
            stmt        VARCHAR2(2000);
            rect_id     NUMBER;
            diff_x      NUMBER;
            diff_y      NUMBER;
            new_start_x NUMBER;
            new_end_x   NUMBER;
            i           NUMBER;
            j           NUMBER;
        BEGIN
            {...}
        END FILL_TREE_LVL;

        STATIC FUNCTION ODCIINDEXCREATE(ia SYS.ODCIINDEXINFO, parms VARCHAR2,
                                        env SYS.ODCIEnv) RETURN NUMBER
        IS
            stmt     VARCHAR2(2000);
            stmt2    VARCHAR2(2000);
            min_x    NUMBER;
            max_x    NUMBER;
            min_y    NUMBER;
            max_y    NUMBER;
            lvl      NUMBER;
            rect_id  NUMBER;
            pt_tab   VARCHAR2(50);
            rect_tab VARCHAR2(50);
            cnum     NUMBER;
            TYPE point_rect IS RECORD (
                point_id NUMBER,
                rect_id  NUMBER
            );
            TYPE point_rect_tab IS TABLE OF point_rect;
            pr_table point_rect_tab;
        BEGIN
            {...}
            FILL_TREE_LVL('any string', 0, lvl, 0, min_x, min_y, max_x, max_y);
            {...}
        END;

    Read the article

  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns in a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as:

        id=>1, key=>'A123', value=>'Smith'

    There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code:

        create table schema_a_employees (
            emp_id    number(8,0),
            firstname varchar2(50),
            surname   varchar2(50)
        );

        insert into schema_a_employees values ( 1, 'James', 'Smith' );
        insert into schema_a_employees values ( 2, 'Fred', 'Jones' );

        create table schema_b_values (
            emp_id    number(8,0),
            the_key   varchar2(5),
            the_value varchar2(200)
        );

    I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and wouldn't involve effectively hardcoding dozens of similar statements like:

        insert into schema_b_values values ( 1, 'A123', v_firstname );
        insert into schema_b_values values ( 1, 'B123', v_surname );

    What I'd like is a local lookup table in Schema A that lists all the keys from Schema B, along with a column giving the name of the column in the Schema A table that should populate it. E.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A:

        create table schema_a_lookup (
            the_key              varchar2(5),
            the_local_field_name varchar2(50)
        );

        insert into schema_a_lookup values ( 'A123', 'firstname' );
        insert into schema_a_lookup values ( 'B123', 'surname' );

    But I'm not sure how I could dynamically use values from the lookup table to tell Oracle which columns to use. So my question is: is there an elegant way to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc.)? Cheers.
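
    A minimal sketch of the lookup-driven load hinted at above, using exactly the three tables declared in this post. One caveat: because the_local_field_name is concatenated into the statement, the lookup table must only ever contain trusted column names.

        DECLARE
            v_value schema_b_values.the_value%TYPE;
        BEGIN
            FOR emp IN (SELECT emp_id FROM schema_a_employees) LOOP
                FOR m IN (SELECT the_key, the_local_field_name FROM schema_a_lookup) LOOP
                    -- Fetch the configured column for this employee via dynamic SQL...
                    EXECUTE IMMEDIATE
                        'SELECT ' || m.the_local_field_name ||
                        ' FROM schema_a_employees WHERE emp_id = :1'
                        INTO v_value
                        USING emp.emp_id;
                    -- ...and write it to Schema B as a key/value row.
                    INSERT INTO schema_b_values (emp_id, the_key, the_value)
                    VALUES (emp.emp_id, m.the_key, v_value);
                END LOOP;
            END LOOP;
        END;
        /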

    Read the article

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work. I have set up the application to use command-line switches to open a specific form to be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic (and before purchasing anything):

    1. Is it worth it? Would this be a good way to test?
    2. The results of the tests should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
    3. I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db, other than drop user cascade, create user, ..., impdp?
    4. Is there a way in TestComplete to specify command-line parameters for an exe?

    Does anybody have any similar experiences?

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
            KEY        VARCHAR2(256)  not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
        FROM   message_index idx
        WHERE  (   (key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
                OR NOT EXISTS (SELECT 1
                               FROM   message_index
                               WHERE  key = 'someKey'
                               AND    idx.message_id = message_id))

    but it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    and I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.:

        SELECT message_id from message_index
        WHERE (key/value compare)
        AND message_id IN ( SELECT ... and so on )

    What can I do to speed this up?
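
    One restatement worth benchmarking, a sketch that assumes only the table shown here (note its second branch returns each qualifying message_id once, where the original returns one row per index entry): splitting the two cases lets the optimizer drive the first branch from the primary key and the second from an anti-join, instead of evaluating a correlated OR for every row.

        -- Branch 1: messages that have the key with one of the wanted values
        -- (a range scan on the (key, value, message_id) primary key).
        SELECT message_id
        FROM   message_index
        WHERE  key = 'someKey'
        AND    value IN ('val1', 'val2', 'val3')
        UNION ALL
        -- Branch 2: messages with no row at all for the key (an anti-join,
        -- which can use IDX_MESSAGE_ID for the driving set).
        SELECT m.message_id
        FROM   (SELECT DISTINCT message_id FROM message_index) m
        WHERE  NOT EXISTS (SELECT 1
                           FROM   message_index i
                           WHERE  i.key = 'someKey'
                           AND    i.message_id = m.message_id);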

    Read the article

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name  stuff             name  xval
        ----  -----             ----  ----
        foo   ...               bar   21
        bar   ...
        baz   ...

    If I don't care about a FK from xvalue.name -> things.name, I could simply put a "DEFAULT" name:

        table "xvalue"
        name     xval
        ----     ----
        DEFAULT  17
        bar      21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables:

        table "xvalue_default"        table "xvalue"
        xval                          name  xval
        ----                          ----  ----
        17                            bar   21

    I could have a "defaults table":

        tablename  attributename  defaultvalue
        ---------  -------------  ------------
        xvalue     xval           17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff" or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
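
    If the separate default table wins out despite the extra object, resolving the effective value per thing is a single query; a minimal sketch in Oracle 10.2 syntax, using only the tables sketched above:

        -- Outer-join the sparse overrides, falling back to the single
        -- default row when no override exists for a given name.
        SELECT t.name,
               NVL(x.xval, d.xval) AS effective_xval
        FROM   things t
               CROSS JOIN xvalue_default d        -- exactly one default row
               LEFT OUTER JOIN xvalue x
                      ON x.name = t.name          -- sparse overrides
        ORDER  BY t.name;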

    Read the article

  • User preferences using SQL and JavaScript

    - by Shyam
    Hi, I am using server-side JavaScript - yes, I am actually using server-side JavaScript. To complexify things even more, I use Oracle (10g) as a backend database. With some crazy XSLT and mutant-like HTML generation, I can build really fancy web forms - yes, I am aware of Rails and other likewise frameworks, and I chose the path of horror instead. I have no jQuery or other fancy framework at my disposal, just plain ol' JavaScript that should be supported by the underlying engine called Mozilla Rhino. Yes, it is insane and I love it. So, I have a bunch of tables at my disposal, and some of them are filled with associative keys that link to values. As I am a people pleaser, I want to add some nifty user-preference-driven features. My users all have a unique user_id, and this user_id is available during the entire session. My initial idea is to have a user preference table with "three" columns: user_id, feature and pref_string. Using a delimiter, such as : or - (I haven't thought about a suitable one yet), I could store a bunch of preferences as a list and read its elements into an array using the .split method (similar to PHP's explode function). The feature column could be the table name or some identifier for the "feature" I want to link preferences to. I hate hardcoding objects, especially as I want to be able to back these up and reuse this functionality application-wide. Of course I would love better ideas; just keep in mind I cannot easily add a library. These preferences could then be "joined" to the table, so I can query it and use its values. I hope it doesn't sound too complex, because, well... it's basically something really simple I need. Thanks!
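
    As a sketch of an alternative to the delimited pref_string (all names here hypothetical, not from the post): storing one row per preference keeps the values queryable and joinable in SQL, with no split/explode parsing needed on the JavaScript side.

        CREATE TABLE user_preference (
            user_id  NUMBER        NOT NULL,
            feature  VARCHAR2(64)  NOT NULL,
            pref_key VARCHAR2(64)  NOT NULL,
            pref_val VARCHAR2(4000),
            CONSTRAINT pk_user_preference
                PRIMARY KEY (user_id, feature, pref_key)
        );

        -- Preferences for the session's user can then be fetched or joined directly:
        SELECT pref_key, pref_val
        FROM   user_preference
        WHERE  user_id = :current_user_id
        AND    feature = 'message_list';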

    Read the article

  • Global Temporary Table "On commit delete rows" functionality discrepancy.

    - by TomatoSandwich
    I have a global temporary table. I called it GTT, for those were its initials. My GTT never hurt anyone, and did everything I bade of it. I asked my GTT to delete rows on commit; this is a valid clause in the creation script of my GTT in Oracle. I wanted different users to see GTT with their own data, and not the data of other people's sessions. 'Delete rows on commit' worked perfectly in our test environment. GTT and I were happy. But then I deployed GTT, as part of an update to functionality, to a client's database. The database doesn't like to play well with GTT. GTT called me up all upset and worried, because it wasn't holding any data any more, and didn't know why. GTT told me that if someone did:

        insert into my_GTT (description) values ('Happy happy joy joy')

    he would sing-song back:

        1 row inserted

    However, if the same person then tried:

        select * from my_GTT;

    GTT didn't know what to do, and he replied:

        0 rows returned

    GTT was upset that he didn't know what his playmate had inserted. Please, Stack Overflow, why would GTT forget what was placed into him? He can remember perfectly well at home, but out in the cold harsh world, he just gets so scared. :(
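
    For readers reproducing this, a minimal sketch of the ON COMMIT DELETE ROWS semantics. One classic culprit (an assumption about the client's environment, not something stated above) is a client or driver that auto-commits after every statement, which empties the GTT between the INSERT and the SELECT:

        CREATE GLOBAL TEMPORARY TABLE my_GTT (
            description VARCHAR2(100)
        ) ON COMMIT DELETE ROWS;

        INSERT INTO my_GTT (description) VALUES ('Happy happy joy joy');
        SELECT COUNT(*) FROM my_GTT;  -- 1: same session, same transaction
        COMMIT;                       -- any commit, explicit or auto-commit...
        SELECT COUNT(*) FROM my_GTT;  -- 0: ...deletes the rows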

    Read the article

  • The instruction at "0x7c910a19" referenced memory at "0xffffffff". The memory could not be "read"

    - by ClareBear
    Hello guys/girls. The instruction at "0x7c910a19" referenced memory at "0xffffffff". The memory could not be "read". I have a small issue: I receive the error above before the .vbs terminates, and I don't know why this error is thrown. Below is the process of the .vbs file:

        Call ImportTransactions()
        Call UpdateTransactions()

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            strOracle = "SELECT query here from Oracle database"
            objCommand.CommandText = strOracle
            objCommand.CommandType = 1
            objCommand.CommandTimeout = 0
            Set objCommand.ActiveConnection = objConnection
            objRecordset.cursorType = 0
            objRecordset.cursorlocation = 3
            objRecordset.Open objCommand, , 1, 3
            If objRecordset.EOF = False Then
                Do Until objRecordset.EOF = True
                    strSQL = "INSERT query here into SQL database"
                    strSQL = Query(strSQL)
                    Call RunSQL(strSQL, objRecordsetInsert, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
                    objRecordset.MoveNext
                Loop
            End If
            objRecordset.Close()
            Set objRecordset = Nothing
            Set objRecordsetInsert = Nothing
        End Function

        Function UpdateTransactions()
            Dim strSQLUpdateVAT, strSQLUpdateCodes
            Dim objRecordsetVAT, objRecordsetUpdateCodes
            strSQLUpdateVAT = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1)"
            Call RunSQL(strSQLUpdateVAT, objRecordsetVAT, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            strSQLUpdateCodes = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1) different WHERE clause"
            Call RunSQL(strSQLUpdateCodes, objRecordsetUpdateCodes, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            Set objRecordsetVAT = Nothing
            Set objRecordsetUpdateCodes = Nothing
        End Function

    UPDATE: If I exit the function right after I open the connection (see below), it still causes the same error.

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            Exit Function
        End Function

    It does both the import and the update, and seems to throw this error after. Thanks in advance for any help, Clare

    Read the article

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table:

        @Entity
        public class SiteConfiguration extends ConfigurationSet {
            @ManyToMany
            @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId"))
            @JoinTable(
                name="SiteConfig_InstConfig",
                joinColumns = @JoinColumn(name="SiteConfigId"),
                inverseJoinColumns = @JoinColumn(name="InstallationConfigId")
            )
            Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
                new HashMap<SiteTypeInstallation, InstallationConfiguration>();
            ...
        }

    The underlying table (in Oracle 11g) is:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITECONFIGID                   NOT NULL NUMBER(19)
        SITETYPEINSTALLATIONID         NOT NULL NUMBER(19)
        INSTALLATIONCONFIGID           NOT NULL NUMBER(19)

    The key entity used to have a three-column primary key in the database, but is now redefined as:

        @Entity
        public class SiteTypeInstallation implements IdResolvable {
            @Id
            @GeneratedValue(generator="SiteTypeInstallationSeq", strategy=GenerationType.SEQUENCE)
            @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1)
            long id;

            @ManyToOne
            @JoinColumn(name="SiteTypeId")
            SiteType siteType;

            @ManyToOne
            @JoinColumn(name="InstalationRoleId")
            InstallationRole role;

            @ManyToOne
            @JoinColumn(name="InstallationTypeId")
            InstType type;
            ...
        }

    The table for this has a primary key 'Id' and foreign key constraints + indexes for each of the other columns:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITETYPEID                     NOT NULL NUMBER(19)
        INSTALLATIONROLEID             NOT NULL NUMBER(19)
        INSTALLATIONTYPEID             NOT NULL NUMBER(19)
        ID                             NOT NULL NUMBER(19)

    For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error:

        org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId])

    If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id is clearly defining a simple key, and doubly confused why it picks exactly just those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars

    Read the article

  • JDBC transaction dead-lock solution required?

    - by user49767
    It's a scenario my friend described, challenging me to find a solution. He is using an Oracle database and a JDBC connection with read committed as the transaction isolation level. In one transaction, he updates a record, executes a select statement, and commits the transaction. When everything happens within a single thread, things are fine. But when multiple requests are handled, a deadlock happens:

    1. Thread-A updates a record.
    2. Thread-B updates another record.
    3. Thread-A issues a select statement and waits for Thread-B's transaction to commit.
    4. Thread-B issues a select statement and waits for Thread-A's transaction to commit.

    The above causes the deadlock. Since they use the command pattern, the base framework allows commit to be issued only once (at the end of all the DB operations), so they are unable to commit immediately after the select statement. My argument was that Thread-A is supposed to select only committed records, and hence this should not be an issue. But he said that Thread-A will surely wait until Thread-B commits the record. Is that true? What are the ways to avoid the above issue? Is it possible to change the isolation level (without changing the underlying Java framework)? A little information about the base framework: it is something similar to a Struts action; every request is handled by one action, and a transaction begins before execution and commits after execution.
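
    If the waits really do come from row locks (e.g. SELECT ... FOR UPDATE, or the two updates touching each other's rows - the post doesn't show the actual statements, so the table below is hypothetical), the textbook remedy is to acquire all locks in one agreed order before doing the work, so neither transaction can hold a lock the other needs while waiting:

        -- Both threads lock the rows they will touch in primary-key order.
        SELECT id
        FROM   accounts
        WHERE  id IN (:first_id, :second_id)
        ORDER  BY id
        FOR UPDATE;

        -- ...then perform both UPDATEs and COMMIT once at the end,
        -- as the command framework requires.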

    Read the article

  • asp code for upload data

    - by vicky
    Hello everyone, I have this code for uploading an Excel file and saving the data into a database. I'm not able to write the code for the database entry; someone please help:

        <%
        If (Request("FileName") <> "") Then
            Dim objUpload, lngLoop
            Response.Write(Server.MapPath("."))
            If Request.TotalBytes > 0 Then
                Set objUpload = New vbsUpload
                For lngLoop = 0 To objUpload.Files.Count - 1
                    'If accessing this page anonymously,
                    'the internet guest account must have
                    'write permission to the path below.
                    objUpload.Files.Item(lngLoop).Save "D:\PrismUpdated\prism_latest\Prism\uploadxl\"
                    Response.Write "File Uploaded"
                Next

                Dim FSYSObj, folderObj, process_folder
                process_folder = Server.MapPath(".") & "\uploadxl"
                Set FSYSObj = Server.CreateObject("Scripting.FileSystemObject")
                Set folderObj = FSYSObj.GetFolder(process_folder)
                Set filCollection = folderObj.Files

                Dim SQLStr
                SQLStr = "INSERT ALL INTO TABLENAME "
                For Each file In filCollection
                    file_name = file.name
                    path = folderObj & "\" & file_name
                    Set objExcel_chk = CreateObject("Excel.Application")
                    Set ws1 = objExcel_chk.Workbooks.Open(path).Sheets(1)
                    row_cnt = 1
                    'For row_cnt = 6 To 7
                    '    If ws1.Cells(row_cnt, col_cnt).Value <> "" Then
                    '        col = col_cnt
                    '    End If
                    'Next
                    While (ws1.Cells(row_cnt, 1).Value <> "")
                        For col_cnt = 1 To 10
                            SQLStr = SQLStr & "VALUES('" & ws1.Cells(row_cnt, 1).Value & "')"
                        Next
                        row_cnt = row_cnt + 1
                    WEnd
                    'objExcel_chk.Quit
                    objExcel_chk.Workbooks.Close()
                    Set ws1 = Nothing
                    objExcel_chk.Quit
                    Response.Write(SQLStr)
                    'Set filobj = FSYSObj.GetFile(sub_fol_path & "\" & file_name)
                    'filobj.Delete
                Next
            End If
        End If
        %>

    Please tell me how to save this Excel data to the Oracle database. Any help would be appreciated.

    Read the article

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project that extends an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers, and views are source-controlled; table structures (including "alter table" changes and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a test data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are already tired out just figuring out how to start, since our experience with unit testing does not cover data-intensive legacy PL/SQL systems (only "from-the-book" greenfield Java projects).
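
    As a shape to start from (utPLSQL is the usual framework candidate, but the sketch below is framework-free, with hypothetical table and package names): seed known rows, call the unit under test, assert on the outcome, and roll back so the shared, weekly-changing schema is left untouched.

        DECLARE
            v_count PLS_INTEGER;
        BEGIN
            SAVEPOINT before_test;

            -- Arrange: seed a row the tested code should transform.
            INSERT INTO customers (id, name) VALUES (-1, 'test customer');

            -- Act: call the unit under test (hypothetical package/procedure).
            pkg_under_test.normalize_names;

            -- Assert: fail loudly if the expected transformation didn't happen.
            SELECT COUNT(*) INTO v_count
            FROM   customers
            WHERE  id = -1 AND name = 'Test Customer';
            IF v_count <> 1 THEN
                RAISE_APPLICATION_ERROR(-20001, 'normalize_names: row not normalized');
            END IF;

            -- Clean up: leave no trace in the shared schema.
            ROLLBACK TO before_test;
        END;
        /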

    Read the article
