Search Results

Search found 35607 results on 1425 pages for 'oracle information'.

Page 463/1425

  • Java: how to avoid circular references when dumping object information with reflection?

    - by Tom
    I've modified an object dumping method to avoid circular references causing a StackOverflowError. This is what I ended up with:

        // returns all fields of the given object in a string
        public static String dumpFields(Object o, int callCount, ArrayList excludeList) {
            // add this object to the exclude list to avoid circular references in the future
            if (excludeList == null) excludeList = new ArrayList();
            excludeList.add(o);
            callCount++;
            StringBuffer tabs = new StringBuffer();
            for (int k = 0; k < callCount; k++) {
                tabs.append("\t");
            }
            StringBuffer buffer = new StringBuffer();
            Class oClass = o.getClass();
            if (oClass.isArray()) {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("[");
                for (int i = 0; i < Array.getLength(o); i++) {
                    if (i < 0) buffer.append(",");
                    Object value = Array.get(o, i);
                    if (value != null) {
                        if (excludeList.contains(value)) {
                            buffer.append("circular reference");
                        } else if (value.getClass().isPrimitive()
                                || value.getClass() == java.lang.Long.class
                                || value.getClass() == java.lang.String.class
                                || value.getClass() == java.lang.Integer.class
                                || value.getClass() == java.lang.Boolean.class) {
                            buffer.append(value);
                        } else {
                            buffer.append(dumpFields(value, callCount, excludeList));
                        }
                    }
                }
                buffer.append(tabs.toString());
                buffer.append("]\n");
            } else {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("{\n");
                while (oClass != null) {
                    Field[] fields = oClass.getDeclaredFields();
                    for (int i = 0; i < fields.length; i++) {
                        if (fields[i] == null) continue;
                        buffer.append(tabs.toString());
                        fields[i].setAccessible(true);
                        buffer.append(fields[i].getName());
                        buffer.append("=");
                        try {
                            Object value = fields[i].get(o);
                            if (value != null) {
                                if (excludeList.contains(value)) {
                                    buffer.append("circular reference");
                                } else if (value.getClass().isPrimitive()
                                        || value.getClass() == java.lang.Long.class
                                        || value.getClass() == java.lang.String.class
                                        || value.getClass() == java.lang.Integer.class
                                        || value.getClass() == java.lang.Boolean.class) {
                                    buffer.append(value);
                                } else {
                                    buffer.append(dumpFields(value, callCount, excludeList));
                                }
                            }
                        } catch (IllegalAccessException e) {
                            System.out.println("IllegalAccessException: " + e.getMessage());
                        }
                        buffer.append("\n");
                    }
                    oClass = oClass.getSuperclass();
                }
                buffer.append(tabs.toString());
                buffer.append("}\n");
            }
            return buffer.toString();
        }

    The method is initially called like this:

        System.out.println(dumpFields(obj, 0, null));

    So, basically I added an excludeList which contains all the previously checked objects. Now, if an object contains another object and that object links back to the original object, it should not follow that object further down the chain. However, my logic seems to have a flaw, as I still get stuck in an infinite loop. Does anyone know why this is happening?

  • Sum of distinct rows after a 1-many table join

    - by Lock
    I have 2 tables that I am joining. Table 1 has a 1-many relationship with table 2. That is, table 2 can return multiple rows for a single row of table 1. Because of this, the records of table 1 are duplicated for as many rows as are in table 2, which is expected. Now, I have a sum on one of the columns from table 1, but because of the multiple rows that get returned on the join, the sum is obviously multiplying. Is there a way to get this number back to its original value? I tried dividing by the count of rows from table 2, but this didn't quite give me the expected result. Are there any analytical functions that could do this? I almost want something like "if this row has not yet been counted in the sum, add it to the sum".
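
    One direction to look at (a sketch, not part of the original question): aggregate table 2 before joining, so each table 1 row appears exactly once and its sum can no longer multiply. All table and column names below are placeholders.

        -- pre-aggregate the many-side first, then join the single-row result back
        select sum(t1.amount) as total_amount
        from   table1 t1
        join   (select parent_id, count(*) as child_cnt
                from   table2
                group  by parent_id) t2
               on t2.parent_id = t1.id;

    An analytic alternative is to tag one row per table 1 key with row_number() in an inline view and add only the tagged rows into the sum, but pre-aggregating is usually the easier of the two to read.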

  • How to know on which column a sequence is applied?

    - by Vineet
    I have to fetch all sequences with their table name, along with the column name on which the sequence is applied. Somehow I managed to fetch the table name corresponding to each sequence, because in my database a sequence name starts with the name of its table, using the data dictionary views (all_sequences and all_tables). Please let me know how to fetch the corresponding column name as well, if possible.
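
    Background, not part of the original question: before Oracle 12c identity columns, a sequence is a standalone object with no declared link to any table or column, so the data dictionary alone cannot answer this. If the sequences are used from triggers (a common convention), one possible approach is to scan trigger source for NEXTVAL references; this is only a sketch that assumes that convention.

        -- list triggers whose source references a sequence, with the table they fire on
        select s.owner,
               s.name        as trigger_name,
               t.table_name,
               s.line,
               s.text
        from   all_source   s
        join   all_triggers t
               on  t.owner        = s.owner
               and t.trigger_name = s.name
        where  s.type = 'TRIGGER'
        and    upper(s.text) like '%.NEXTVAL%';

    The column that is actually populated still has to be read out of the trigger text (typically a line like :new.id := my_seq.nextval), because that mapping only exists in code.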

  • Reporting Services URL parameter problems

    - by GxG
    I have a URL to a location on the server where it can find the report. The report works just fine if I manually refresh it. I tried using rc:ClearSession=TRUE and I also tried sending a random parameter, but the report is still not being refreshed. Any ideas? The main scenario:

      1. User enters the page (with a grid view)
      2. User clicks on Export
      3. User sees the report
      4. User deletes an entry from the grid view
      5. User clicks on Export again
      6. User sees the exact same report

    P.S.: the report query returns the data that should be displayed, but the report shows the previous data.

  • ORA-30926 error

    - by user1331181
    I am trying to execute the following merge statement, but it keeps failing with an ORA-30926 error ("unable to get a stable set of rows in the source tables"):

        merge into test_output target_table
        using (select c.test_code,
                      c.v_report_id,
                      upper_score,
                      case
                        when c.test_code = 1 then b.mean_diff
                        when c.test_code = 2 then b.norm_dist
                        when c.test_code = 3 then b.ks_stats
                        when c.test_code = 4 then b.ginni
                        when c.test_code = 5 then b.auroc
                        when c.test_code = 6 then b.info_stats
                        when c.test_code = 7 then b.kl_stats
                      end val1
               from   combined_approach b
               inner join test_output c
                      on  b.v_report_id = c.v_report_id
                      and c.upper_score = b.band_code
               where  c.v_report_id = lv_report_id
               order  by c.test_code) source_table
        on (    target_table.v_report_id = source_table.v_report_id
            and target_table.v_report_id = lv_report_id)
        when matched then update
            set target_table.upper_value = source_table.val1;
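
    For context, and not part of the original post: ORA-30926 is raised when the USING query returns more than one source row for a single target row under the ON condition. In the statement above the ON clause matches on v_report_id alone, while the source produces one row per test_code/upper_score, so each target row is matched several times. A possible reshaping is sketched below; it assumes upper_score is the missing join key and that one row per (v_report_id, upper_score) is wanted, and it drops the ORDER BY because row order does not affect a MERGE.

        merge into test_output target_table
        using (select v_report_id, upper_score, val1
               from  (select c.v_report_id,
                             c.upper_score,
                             case
                               when c.test_code = 1 then b.mean_diff
                               when c.test_code = 2 then b.norm_dist
                               when c.test_code = 3 then b.ks_stats
                               when c.test_code = 4 then b.ginni
                               when c.test_code = 5 then b.auroc
                               when c.test_code = 6 then b.info_stats
                               when c.test_code = 7 then b.kl_stats
                             end val1,
                             row_number() over (partition by c.v_report_id, c.upper_score
                                                order by c.test_code) rn
                      from   combined_approach b
                      join   test_output c
                             on  b.v_report_id = c.v_report_id
                             and c.upper_score = b.band_code
                      where  c.v_report_id = lv_report_id)
               where rn = 1) source_table
        on (    target_table.v_report_id = source_table.v_report_id
            and target_table.upper_score = source_table.upper_score)
        when matched then update
            set target_table.upper_value = source_table.val1;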

  • How to filter rows on a complex filter

    - by dan
    I have these rows in a table:

        ID  Name    Price  Delivery
        ==  ======  =====  ========
        1   apple   1      1
        2   apple   3      2
        3   apple   6      3
        4   apple   9      4
        5   orange  4      6
        6   orange  5      7

    I want the price at the third delivery (Delivery = 3), or the last price if there is no third delivery. It would give me this:

        ID  Name    Price  Delivery
        ==  ======  =====  ========
        3   apple   6      3
        6   orange  5      7

    I don't necessarily want a full solution, but an idea of what to look for would be greatly appreciated.
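
    As a pointer rather than a full solution (and not part of the original question): rank each name's rows so that the third delivery wins when present and the latest delivery wins otherwise, then keep rank 1. The table name is a placeholder.

        select id, name, price, delivery
        from  (select t.*,
                      row_number() over (
                        partition by name
                        order by case when delivery = 3 then 0 else 1 end,
                                 delivery desc) rn
               from   deliveries t)
        where  rn = 1;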

  • ORA-00937 error during select subquery

    - by RedRaven
    I am attempting to write a query that returns the number of employees, the average salary, and the number of employees paid below the average. The query I have so far is:

        select trunc(avg(salary)) "Average Pay",
               count(salary)      "Total Employees",
               (select count(salary)
                from   employees
                where  salary < (select avg(salary) from employees)) UnderPaid
        from   employees;

    But when I run this I get the ORA-00937 error in the subquery. I had thought that maybe the COUNT function was causing the issue, but even a simpler subquery such as:

        select trunc(avg(salary)) "Average Pay",
               count(salary)      "Total Employees",
               (select avg(salary) from employees) UnderPaid
        from   employees;

    still returns the same error. As both AVG and COUNT are aggregate functions, I'm not sure why I'm getting the error? Thanks
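
    Background, not from the original question: ORA-00937 ("not a single-group group function") is typically raised here because the scalar subquery in the select list counts as a non-aggregated expression, and it is mixed with AVG/COUNT without a GROUP BY. One way around it, a sketch assuming the EMPLOYEES(SALARY) table from the question, is to compute the average per row with an analytic function in an inline view and aggregate outside:

        select trunc(avg(salary))                                "Average Pay",
               count(salary)                                     "Total Employees",
               count(case when salary < avg_sal then salary end) "UnderPaid"
        from  (select salary,
                      avg(salary) over () as avg_sal
               from   employees);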

  • Performance issue: difference between select s.* and select *

    - by kamil
    Recently I had a problem with the performance of my query. The issue is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I finally discovered that a query with a select prefix like:

        select sth.* from Something as sth ...

    is 300 times slower than a query started this way:

        select * from Something as sth ...

    Could somebody help me and answer why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name. The only association is to table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM, with no association to any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were:

        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

        select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

    Same query, both use a left join, and the only difference is the select list.

  • mysql stored routine vs. mysql-alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) total. Our searches on the 'names' field are done through an autocomplete function in PHP. We have the table indexed, but still feel that the searching is a bit sluggish (a few full seconds vs. something like Google Finance with near-instant response). We came up with 2 possibilities, but wanted to get more insight: Can we create a bunch (many thousands or more) of stored procedures to speed up searches, or will creating that many stored procedures bog down the db? Is there a faster alternative to MySQL for "select" statements (speed on inserting & updating rows isn't too important, so we can sacrifice that if necessary)? I've vaguely heard of BigTable & others that don't support JOIN statements... we need JOIN statements for some of our other queries. Thanks
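
    A side note, not from the original post: before reaching for stored procedures, it is worth checking that the autocomplete query itself can use an ordinary B-tree index, which only happens when the LIKE pattern is anchored at the start; a stored procedure does not change the execution plan of the SELECT it contains. A sketch with placeholder table and column names:

        -- a B-tree index on the searched column
        alter table people add index idx_people_name (name);

        -- prefix search: LIKE 'abc%' can use the index, LIKE '%abc%' cannot
        select name
        from   people
        where  name like concat('abc', '%')
        order  by name
        limit  10;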

  • Database: Pipelined Functions

    - by Rachel
    I am new to the concept of pipelined functions. I have some questions, from a database point of view:

      1. What actually is a pipelined function?
      2. What is the advantage of using a pipelined function?
      3. What challenges are solved using pipelined functions?
      4. Are there any optimization advantages to using pipelined functions?

    Thanks.
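
    For context, and not part of the original question: in Oracle, a pipelined table function returns rows one at a time with PIPE ROW instead of building the whole collection in memory first, so the caller can start consuming results immediately. A minimal sketch, with all object names made up for illustration:

        -- collection type returned by the function
        create type t_number_tab as table of number;
        /

        create or replace function first_n_numbers(p_limit in number)
          return t_number_tab pipelined
        as
        begin
          for i in 1 .. p_limit loop
            pipe row (i);          -- each row is handed to the caller immediately
          end loop;
          return;                  -- a pipelined function ends with a bare RETURN
        end;
        /

        -- consume it like a table
        select * from table(first_n_numbers(5));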

  • Performance: Subquery or Join

    - by Auro
    Hello, I have a little question about the performance of a subquery vs. joining another table:

        INSERT INTO Original.Person (PID, Name, Surname, SID)
        SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
        FROM   Copy.Person TBL,
               original.MATabelle MA
        WHERE  TBL.PID = p_PID_old
        AND    TBL.PID = MA.PID_old;

    This is my SQL; now this thing runs around 1 million times or more. My question is: what would be faster? If I change TBL.SID to (select new from helptable where old = tbl.sid), or if I add helptable to the FROM clause and do the joining in the WHERE? Greets, Auro
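
    For comparison, and not from the original post, this is roughly what the join form of the described lookup would look like. The helptable columns are renamed old_sid/new_sid here purely for readability, so adjust them to the real names; in the end only measuring both variants against the real data will show which is faster.

        INSERT INTO Original.Person (PID, Name, Surname, SID)
        SELECT ma.PID_new,
               tbl.Name,
               ma.Surname,
               h.new_sid                               -- looked-up value instead of TBL.SID
        FROM   Copy.Person        tbl
        JOIN   original.MATabelle ma ON ma.PID_old = tbl.PID
        LEFT JOIN helptable       h  ON h.old_sid  = tbl.SID
        WHERE  tbl.PID = p_PID_old;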

  • How to remove all text nodes and only preserve structure information of an HTML page with Nokogiri

    - by user58948
    I want to remove all text from an HTML page that I load with Nokogiri. For example, if a page has the following:

        <body><script>var x = 10;</script><div>Hello</div><div><h1>Hi</h1></div></body>

    I want to process it with Nokogiri and return HTML like the following after stripping the text:

        <body><script>var x = 10;</script><div></div><div><h1></h1></div></body>

    (That is, remove the actual h1 text, text between divs, text in p elements etc., but keep the tags. Also, don't remove the text in the script tags.) How can I do that?

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I’m facing an issue with NHibernate performance and would appreciate suggestions for optimizations. Below is a small summary of my application architecture. I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received xml snippet, and saves the message to the DB (uses NH). There is a WPF UI with a read-only connection to the DB; on refresh, the UI displays the objects on the screen. While the UI does a refresh, it retrieves the xml and deserializes it, from which the object’s properties are derived and bound to the screen. For example, assume an xml XXX is received by the service: it deserializes the xml, creates the book object and saves it to the DB, and a property/column SCHEMA contains the xml snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the xml which was saved (yes, the xml is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it’s due to the fairly large size of the xml snippet and its deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it’s possible that all objects have been updated by some downstream systems, or maybe only one of them has. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike

  • Can I stop SQL*Plus from displaying "connected" when I have a "connect" in a script?

    - by René Nyffenegger
    I have a few SQL scripts that I need to run via SQL*Plus. These scripts connect several times as different users with a connect user_01/pass_01@db_01. Now, each time the script does such a connect, it confirms the successful connection with a "Connected." message. This is distracting and I want to turn it off. I can achieve what I want with:

        set termout off
        connect user_01/pass_01@db_01
        set termout on

    Is there a more elegant solution to my problem? Note, it doesn't help to permanently set termout off at the start of the script, since I need to know if a command didn't run successfully.

  • When SET SCAN ON is used after END, it throws an error

    - by Karthik
    Hi, I'm trying to use SET SCAN ON after a PL/SQL block, as follows:

        SET SCAN OFF;
        DECLARE
           -- declared a variable
        BEGIN
           -- update statement
        END;
        SET SCAN ON;

    The use of SET SCAN ON; is causing an error when I try to run the script. The error captured:

        ORA-06550: line 16, column 1:
        PLS-00103: Encountered the symbol "SET"
        06550. 00000 - "line %s, column %s:\n%s"
        *Cause:    Usually a PL/SQL compilation error.
        *Action:
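
    Background, not part of the original question: in a SQL*Plus script a PL/SQL block is not executed at END;. The block has to be terminated with a slash on a line of its own; without it, the following SET line is sent to the server as part of the block, which matches the PLS-00103 "Encountered the symbol SET" message shown above. A sketch of the same script with the terminator (the block body here is a placeholder):

        SET SCAN OFF

        DECLARE
           -- declared a variable
        BEGIN
           -- update statement
           NULL;
        END;
        /

        SET SCAN ON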

  • How can I declare a pointer with filled information in C++?

    - by chacham15
        typedef struct Pair_s {
            char *first;
            char *second;
        } Pair;

        Pair pairs[] = {
            {"foo", "bar"},   // this is fine
            {"bar", "baz"}
        };

        typedef struct PairOfPairs_s {
            Pair *first;
            Pair *second;
        } PairOfPairs;

        PairOfPairs pops[] = {
            {{"foo", "bar"}, {"bar", "baz"}},  // How can I create an equivalent of this NEATLY?
            {&pairs[0], &pairs[1]}             // this is not considered neat (imagine trying to read a list of 30 of these)
        };

    How can I achieve the above style of declaration semantics?

  • Why is a function's length information from another shared lib in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. With "change" I mean absolute binary identity (the checksum over the file is compared; a different checksum means a different version, according to the policy). (I should mention that the whole project is always built at once, no matter whether any code has changed in a given library or not.) Usually this can be achieved by hiding the private parts of the included header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager) changed.

        // TableManager.h:
        class TableManager
        {
        public:
            TableManager();
            ~TableManager();
        private:
            int* myPtr;
        };

        // TableManager.cpp:
        TableManager::~TableManager()
        {
            doSomeCleanup();
            delete myPtr;   // this delete has been added
        }

    By inspecting libB.so with readelf --all libB.so and looking at the .dynsym section, it turned out that the length of all functions, even the dynamically used ones from other libraries, is stored in libB! It looks like this (the length is the 668 in the 3rd column):

        527: 00000000   668 FUNC    GLOBAL DEFAULT  UND _ZN12TableManagerD1Ev

    So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking libB.so (a kind of "stripping")? We would really like to reduce this degree of dependency...

  • Getting information about where c++ exceptions are thrown inside of catch block?

    - by tfinniga
    I've got a C++ app that wraps large parts of code in try blocks. When I catch exceptions I can return the user to a stable state, which is nice. But I'm no longer receiving crash dumps. I'd really like to figure out where in the code the exception is taking place, so I can log it and fix it. Being able to get a dump without halting the application would be ideal, but I'm not sure that's possible. Is there some way I can figure out where the exception was thrown from within the catch block? If it's useful, I'm using native MSVC++ on Windows XP and higher. My plan is to simply log the crashes to a file on the various users' machines, and then upload the crash logs once they reach a certain size.

  • “Function” calling inside a stored procedure

    - by idimba
    Hi, I have a big stored procedure that contains a lot of INSERTs. Many of the INSERTs are almost identical; they differ only by some parameter(s) (all INSERTs are into the same table). Is there a way to create a function/method, to which I'll pass the above parameter(s), and which will generate the concrete INSERTs? Thanks
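
    One possible shape for this (a sketch, not from the original post, and assuming Oracle PL/SQL since that is this page's topic; the procedure, table and column names are placeholders): a local procedure declared inside the stored procedure can wrap the repeated INSERT so that only the varying values are passed in.

        create or replace procedure load_static_rows
        as
          -- local helper: wraps the repeated INSERT, only the varying values differ
          procedure ins_row(p_code in varchar2, p_qty in number) is
          begin
            insert into target_table (code, qty, created_at)
            values (p_code, p_qty, sysdate);
          end ins_row;
        begin
          ins_row('A', 10);
          ins_row('B', 20);
          ins_row('C', 30);
        end load_static_rows;
        /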
