Search Results

Search found 74621 results on 2985 pages for 'oracle platform migration data migration'.

  • ExtJs store, how to load data using a MemoryProxy

    - by Miau
    Hi there, I'm trying to load a JSON store using a MemoryProxy (I need to use a proxy because I use different sources depending on the scenario). It kinda looks like this:

        var data = Ext.decode(gridArrayData);
        var proxy = new Ext.data.MemoryProxy(data);
        var store = new Ext.data.GroupingStore({
            proxy: proxy
        });

    However, when I inspect this I can see that the proxy has 10 rows of data, but not the store. I'm lost as to why. Any pointers?

  • JQuery getJSON Callback Returning Null Data

    - by user338828
    I have a getJSON call whose callback fires correctly, but the data variable is null. The Python code posted below is executed by the getJSON call to demandURL. Any ideas?

    JavaScript:

        var demandURL = "/demand/washington/";
        $.getJSON(demandURL, function(data) {
            console.log(data);
        });

    Python:

        data = {"demand_count": "one"}
        json = simplejson.dumps(data)
        return HttpResponse(json, mimetype="application/json")

  • Devart Oracle Cross Apply Exception

    - by Murilo Amaru Gomes
    I'm running into a problem where the code works on one machine and not on another. Apparently we're using the same Devart dotConnect for Oracle version (6.80.325.0). The problem is that when we have a subquery in the LINQ, we get "Cross Apply Not Supported for Oracle".

        public IQueryable<GE_MENUAPLICACAO> RetornaMenusNegadosParaUsuario2(int seqUsuario, int nroEmpresa)
        {
            return from usuarioPerm in entidadesConsinco.GE_USUARIOPERMISSAO
                   from menu in usuarioPerm.GE_ITENSAPP.GE_APLICACAO.GE_MENUAPLICACAOs
                   select menu;
        }

    I read a lot about it, and about subqueries, but I really can't understand why it's OK on some machines and not on others. Did I miss some fix in the installation? Thanks.

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ):

        SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member,
               mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn,
               mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches,
               mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
               mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd,
               mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd,
               mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name,
               mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
               mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins,
               mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn
        FROM (SELECT /*+ FIRST_ROWS(1) */
                     mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member,
                     mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn,
                     mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches,
                     mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
                     mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd,
                     mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd,
                     mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name,
                     mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
                     mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins,
                     mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn,
                     ROWNUM AS ora_rn
              FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt,
                           mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind,
                           mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member,
                           mbr.employment_start_dt AS mbr_employment_start_dt,
                           mbr.employment_term_dt AS mbr_employment_term_dt,
                           mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn,
                           mbr.general_health_status_code AS mbr_general_health_status_code,
                           mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet,
                           mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level,
                           mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id,
                           mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin,
                           mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip,
                           mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name,
                           mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt,
                           mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired,
                           mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd,
                           mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type,
                           mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name,
                           mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name,
                           mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id,
                           mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time,
                           mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn,
                           mbr.rep_name AS mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id,
                           mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd,
                           mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn,
                           mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn
                    FROM mbr
                    JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
                    WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
                      AND mbr_identfn.identfd_type = :identfd_type_1
                      AND mbr_identfn.identfd_number = :identfd_number_1
                      AND mbr_identfn.entity_active = :entity_active_1)
              WHERE ROWNUM <= :ROWNUM_1)
        WHERE ora_rn > :ora_rn_1

        call     count       cpu    elapsed       disk      query    current       rows
        ------- ------  -------- ---------- ---------- ---------- ---------- ----------
        Parse     9936      0.46       0.49          0          0          0          0
        Execute   9936      0.60       0.59          0          0          0          0
        Fetch     9936    329.87     404.00          0  136966922          0          0
        ------- ------  -------- ---------- ---------- ---------- ---------- ----------
        total    29808    330.94     405.09          0  136966922          0          0

        Misses in library cache during parse: 0
        Optimizer mode: FIRST_ROWS
        Parsing user id: 36 (JIVA_DEV)

        Rows     Row Source Operation
        -------  ---------------------------------------------------
              0  VIEW (cr=102 pr=0 pw=0 time=2180 us)
              0   COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
              0    NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us)
              0     INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053)
              0     TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
              0      INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044)

        Rows     Execution Plan
        -------  ---------------------------------------------------
              0  SELECT STATEMENT   MODE: HINT: FIRST_ROWS
              0   VIEW
              0    COUNT (STOPKEY)
              0     NESTED LOOPS
              0      INDEX   MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
              0      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
              0       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

    Based on my reading of Oracle's documentation on skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first column of this index is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query?

    EDIT: I should also point out that the query's WHERE clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
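    For what it's worth, the usual first steps in a case like this are to refresh the statistics and, if the leading column of IDX_MBR_IDENTFN is actually selective, to hint a plain index scan instead of the skip scan. A hedged sketch only: the owner name comes from the tkprof header above, and whether the hint helps depends on the real index definition.

        -- Re-gather statistics so the optimizer can cost IDX_MBR_IDENTFN properly.
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'JIVA_DEV',
                                        tabname => 'MBR_IDENTFN',
                                        cascade => TRUE);  -- also gathers index stats
        END;
        /

        -- Probe with an explicit hint to force an ordinary index scan
        -- rather than INDEX SKIP SCAN (simplified: pagination wrapper omitted):
        SELECT /*+ INDEX(mi IDX_MBR_IDENTFN) */ mbr.mbr_idn
        FROM   mbr
        JOIN   mbr_identfn mi ON mbr.mbr_idn = mi.mbr_idn
        WHERE  mi.identfd_type   = :identfd_type_1
        AND    mi.identfd_number = :identfd_number_1
        AND    mi.entity_active  = :entity_active_1;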

  • 3 tier with ASP.NET, C# and Oracle 11g

    - by ephraim
    Hi, I am new to .NET and want to follow an N-tier approach using Visual Studio 2010 (ASP.NET MVC + C#), IIS and Oracle 11g (installed on a remote Linux machine). I need to have Presentation, Business Logic, Data Access and Data tiers. Can anyone give me a site/example application with the following functionality: insert, retrieve, delete and update/modify data in the database? A step-by-step guide would be highly appreciated. Thank you in advance.

  • How do I gain control of a row in a tabular layout in Oracle

    - by DotNetDan
    This might be simple, but I am new to Oracle. I am using Oracle 10g and have a form that lists information from a linked table in a tabular layout. The last column is a "list item" whose element list is Enabled (T) and Disabled (F). What I need is: when a user changes this dropdown to Disabled, ONLY that row should have some of its columns disabled, not the entire column. On form load, it should likewise disable or enable rows depending on the values pulled from the EnabledDisabled column in the database. Thanks for the help!
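    The terminology (tabular layout, list-item) suggests Oracle Forms. Assuming so, a minimal sketch of the usual approach: a WHEN-LIST-CHANGED trigger (with the same logic reused in POST-QUERY for form load) that uses the SET_ITEM_INSTANCE_PROPERTY built-in to restrict only the current record. The block and item names below are hypothetical.

        -- WHEN-LIST-CHANGED trigger on MY_BLOCK.STATUS_LIST (names are hypothetical)
        BEGIN
          IF :MY_BLOCK.STATUS_LIST = 'F' THEN  -- Disabled
            SET_ITEM_INSTANCE_PROPERTY('MY_BLOCK.SOME_COLUMN', CURRENT_RECORD,
                                       UPDATE_ALLOWED, PROPERTY_FALSE);
          ELSE
            SET_ITEM_INSTANCE_PROPERTY('MY_BLOCK.SOME_COLUMN', CURRENT_RECORD,
                                       UPDATE_ALLOWED, PROPERTY_TRUE);
          END IF;
        END;

    Note that, as far as instance-level properties go, items cannot be greyed out per record; toggling UPDATE_ALLOWED (or NAVIGABLE) per record is the closest Forms offers at the row level.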

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing the database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done per row. That means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would keep the old and new values of every column entry from T1.

    The DBA advised against this method, saying that a separate schema means an extra I/O for every operation. The AUDIT schema would be used only to do some analysis and to enter values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find any justification. It appears logical to me, as the AUDIT data should not be tampered with, hence a separate schema.

    Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up onto tape or deleted after sufficient time.

    A few stats: some tables (close to 20-30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it for more than 3 months.

    Questions: Given all this, can you suggest any improvements? Does a separate schema affect disk I/O (one extra I/O for every schema)? Any general suggestions?

    Figure:

        +-------------------+           +-------------------+
        | MYSYS             |           | MYSYS_AUDIT       |
        |                   |           |                   |
        | 1. T1             |           | 1. T1_AUD         |
        | 2. T2             |           | 2. T2_AUD         |
        | 3. T3             |---------->| 3. T3_AUD         |
        | 4. T4             | (SELECT,  | 4. T4_AUD         |
        | .                 |  INSERT)  | .                 |
        | .                 |           | .                 |
        | .                 |           | .                 |
        | 100. T100         |           | 50. T50_AUD       |
        +-------------------+           +-------------------+
                 |
                 | (INSERT)
                 |
                 v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        | 1. T1_ARC         |
        | 2. T2_ARC         |
        | 3. T3_ARC         |
        | 4. T4_ARC         |
        | .                 |
        | .                 |
        | .                 |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with only read-only rights, but they are mainly for ad-hoc purposes and we don't mind the performance on them.
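    For reference, the kind of per-row auditing described above is typically captured with an AFTER UPDATE trigger. A minimal sketch, assuming hypothetical columns ID and VAL on T1 and a matching T1_AUD (the real tables would carry old and new values for every audited column):

        -- Sketch only: MYSYS needs an INSERT grant on MYSYS_AUDIT.T1_AUD.
        CREATE OR REPLACE TRIGGER mysys.t1_audit
          AFTER UPDATE ON mysys.t1
          FOR EACH ROW
        BEGIN
          INSERT INTO mysys_audit.t1_aud (id, old_val, new_val, changed_at)
          VALUES (:OLD.id, :OLD.val, :NEW.val, SYSTIMESTAMP);
        END;
        /

    As for the I/O question: a schema in Oracle is a logical namespace; physical I/O is governed by the tablespaces and datafiles the segments live in, not by which schema owns them, so the cross-schema insert costs no extra I/O beyond the audit write itself, which any design would pay.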

  • How can I convert data encoded in WE8MSWIN1252 to utf8 for use in Python scripts?

    - by James Dean
    This data comes from an Oracle database and is extracted to flat files in the encoding 'WE8MSWIN1252'. I want to parse the data and do some analysis. I want to see the text fields but do not need to publish the results to any other system, so if some characters do not get converted perfectly I do not have a problem with that. I just do not want my parsing to fail with a decode error, which is what I get if I use:

        inputFile = codecs.open(dataFileName, "r", "utf-8")

  • Variable declared with the VARIABLE keyword in SQL*Plus (Oracle 9i)?

    - by Vineet
    I am trying to declare g_num as a NUMBER data type with a size; it gives an error, but in the case of VARCHAR2 or CHAR it does not:

        variable g_name varchar2(5);  -- correct, accepts a size for varchar2
        variable g_num number(23);    -- gives an error:

        VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
                     VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
                     NVARCHAR2 (n) | CLOB | NCLOB | REFCURSOR ] ]

    Please suggest!
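    The usage message quoted above is actually the answer: SQL*Plus accepts a length only for the character types, while NUMBER bind variables take no precision or scale at all. A minimal working sketch:

        VARIABLE g_name VARCHAR2(5)
        VARIABLE g_num  NUMBER      -- no (p,s) here: sizes apply only to the CHAR/VARCHAR2 family

        BEGIN
          :g_num := 42;             -- assign to the bind variable from a PL/SQL block
        END;
        /

        PRINT g_num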

  • Winforms for Mono on Mac, Linux and PC (Redux)

    - by yar
    (I asked this question in another way, and got some interesting responses but I'm not too convinced.) Is Mono's GtkSharp truly cross-platform? It seems to be Gnome based... how can that work with PC and Mac? Can someone give me examples of a working Mac/PC/Linux app that is written with a single codebase in Microsoft .Net?

  • Visual Studio 2010 and WinCE 5.0

    - by koloko
    Is it possible to use a Platform Builder 5.0 SDK in Visual Studio 2010 for a C++ project? I want to compile code for a specific ARM WinCE 5.0 environment, and I have VS2010 at the moment. The Microsoft website recommends Visual Studio 2005. I'm currently downloading the VS2005 evaluation, but I'm also a bit worried about installing it on a machine that already has VS2010 installed. Any advice would be gratefully received.

  • Game engine with python scripting?

    - by Kayle
    Looking to put together a 3D side-scrolling action platformer. Since this is my first time trying to put together a non-simple adventure game, I'm at a loss as to which engine to consider. I would prefer one that supports scripting in Python, since that's my primary language. Without tight controls the game will suck, so speed is a priority. Cross-platform support is also important to me. Any suggestions?

  • Beginner problem with posting data table to JsonResult

    - by ognjenb
    With this script I get data from a JsonResult (GetDevicesTable) and put it into a table (id="OrderDevices"):

        <script type="text/javascript">
            $(document).ready(function() {
                $("#getDevices a").click(function() {
                    var Id = $(this).attr("rel");
                    var rowToMove = $(this).closest("tr");
                    $.post("/ImportXML/GetDevicesTable", { Id: Id }, function(data) {
                        if (data.success) {
                            // remove row
                            rowToMove.fadeOut(function() { $(this).remove() });
                            // add row to other table
                            $("#OrderDevices").append("<tr><td>" + data.DeviceId + "</td><td>"
                                + data.Id + "</td><td>" + data.OrderId + "</td></tr>");
                        } else {
                            alert(data.ErrorMsg);
                        }
                    }, "json");
                });
            });
        </script>

        <% using (Html.BeginForm()) {%>
        <table id="OrderDevices" class="data-table">
            <tr>
                <th>DeviceId</th>
                <th>Id</th>
                <th>OrderId</th>
            </tr>
        </table>
        <p>
            <input type="submit" value="Submit" />
        </p>
        <% } %>

    When I click on Submit I need something like this:

        $(document).ready(function() {
            $("#submit").click(function() {
                var Id = $(this).attr("rel");
                var DeviceId = $(this).attr(???);
                var OrderId = $(this).attr(???);
                var rowToMove = $(this).closest("tr");
                $.post("/ImportXML/DevicesActions",
                    { Id: Id, DeviceId: DeviceId, OrderId: OrderId },
                    function(data) { }, "json");
            });
        });

    I have a problem with this script because I do not know how to post data to this JsonResult:

        public JsonResult DevicesActions(int Id, int DeviceId, int OrderId)
        {
            // (as posted, this method is missing a return statement)
            orderdevice ord = new orderdevice();
            ord.Id = Id;
            ord.OrderId = DeviceId;
            ord.DeviceId = OrderId;
            DBEntities.AddToorderdevice(ord);
            DBEntities.SaveChanges();
        }

  • How can I integrate Java with .Net?

    - by Luke
    I have one SDK that is available in Java and another SDK that is available for .NET, and I would like to write a single application that interfaces with both of them. I imagine I will need a cross-platform communication framework that can support named pipes (or other in-memory communication); what is the best choice? After some more research I found Hessian -- does anyone know anything about the maturity of this project?

  • Is there an automatic way to generate a rollback script when inserting data with LINQ2SQL?

    - by Jedidja
    Let's assume we have a bunch of LINQ2SQL InsertOnSubmit statements against a given DataContext. If the SubmitChanges call is successful, is there any way to automatically generate a list of SQL commands (or even LINQ2SQL statements) that would automatically undo everything that was submitted? It's like executing a rollback even though everything worked as expected. Note: The destination database will either be Oracle or SQL Server, so if there is specific functionality for both databases that will achieve this, I'm happy to use that as well.

  • Oracle - correlated subquery problems

    - by FrustratedWithFormsDesigner
    I have this query:

        select acc_num
        from (select distinct ac_outer.acc_num, ac_outer.owner
              from ac_tab ac_outer
              where (ac_outer.owner = '1234567')
                and ac_outer.owner =
                    (select sq.owner
                     from (select a1.owner
                           from ac_tab a1
                           where a1.acc_num = ac_outer.acc_num
                           order by a1.a_date desc, a1.b_date desc, a1.c_date desc) sq
                     where rownum = 1)
              order by dbms_random.value()) subq
        order by acc_num;

    The idea is to get all acc_nums (not a primary key) from ac_tab that have an owner of 1234567. Since an acc_num in ac_tab could have changed owners over time, I am trying to use the inner correlated subqueries to ensure that an acc_num is returned ONLY if its most recent owner is 1234567. Naturally, it doesn't work (or I wouldn't be posting here ;) ). Oracle gives me an error: ORA-00904: ac_outer.acc_num is an invalid identifier. I thought that ac_outer should be visible to the correlated subqueries, but for some reason it's not. Is there a way to fix the query, or do I have to resort to PL/SQL to solve this? (Oracle version is 10g)
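    For what it's worth, the usual explanation for the ORA-00904 here is that a correlation name like ac_outer is visible only one subquery level deep, and sq is nested two levels down. A sketch of a rewrite that sidesteps the nesting entirely by ranking rows with an analytic function, assuming "most recent" means the a_date/b_date/c_date ordering used above:

        SELECT DISTINCT acc_num
        FROM  (SELECT acc_num, owner,
                      ROW_NUMBER() OVER (PARTITION BY acc_num
                                         ORDER BY a_date DESC, b_date DESC, c_date DESC) AS rn
               FROM ac_tab)
        WHERE rn = 1                 -- keep only the most recent row per acc_num
          AND owner = '1234567';     -- then filter on its owner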

  • How to check if an internal storage file has any data

    - by user3720291
    public class Save extends Activity {

        int levels = 2;
        int data_block = 1024;
        //char[] data = new char[] {'0', '0'};
        String blankval = "0";
        String targetval = "0";
        String temp;
        String tempwrite;
        String string = "null";
        TextView tex1;
        TextView tex2;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.save);
            Intent intent = getIntent();
            Bundle b = intent.getExtras();
            tex1 = (TextView) findViewById(R.id.textView1);
            tex2 = (TextView) findViewById(R.id.textView2);
            if (b != null) {
                string = (String) b.get("string");
            }
            loadprev();
            save();
        }

        public void save() {
            if (string.equals("Blank")) blankval = "1";
            if (string.equals("Target")) targetval = "1";
            temp = blankval + targetval;
            try {
                FileOutputStream fos = openFileOutput("data.gds", MODE_PRIVATE);
                fos.write(temp.getBytes());
                fos.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            tex1.setText(blankval);
            tex2.setText(targetval);
        }

        public void loadprev() {
            String final_data = "";
            try {
                FileInputStream fis = openFileInput("data.gds");
                InputStreamReader isr = new InputStreamReader(fis);
                char[] data = new char[data_block];
                int size;
                while ((size = isr.read(data)) > 0) {
                    String read_data = String.copyValueOf(data, 0, size);
                    final_data += read_data;
                    data = new char[data_block];
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            char[] tempread = final_data.toCharArray();
            // crashes here when the file is missing or empty: tempread has no elements
            blankval = "" + tempread[0];
            targetval = "" + tempread[1];
        }
    }

    After much tinkering I have finally managed to get my save/load function to work, but it still has an error. It worked until I did a fresh reinstall, deleting data.gds; after that, the save/load function crashes because the data.gds file has no previous values. Can I use an if statement to check whether data.gds has any values in it? If so, how do I do that, and if not, what could I use instead?

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.

    My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes it possible to implement temporal databases almost for free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise only to realize it isn't really a good way to do it.

    Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, and I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that could happen. Then again, commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

  • How do I write an Oracle SQL query for this tricky question...

    - by atrueguy
    Here is the table data, with the column name Ships:

        +-----------------+
        | Ships           |
        +-----------------+
        | Duke of north   |
        +-----------------+
        | Prince of Wales |
        +-----------------+
        | Baltic          |
        +-----------------+

    In the Outcomes table, transform the names of the ships containing more than one space, as follows: replace all characters between the first and the last spaces (excluding these spaces) by asterisks (*). The number of asterisks must be equal to the number of characters replaced.
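    A sketch of one way to do this in plain Oracle SQL, assuming the relevant column in Outcomes is named ship: find the first and last spaces with INSTR, fill the gap with RPAD, and leave names with fewer than two spaces untouched. Untested against the real table, so treat it as a starting point:

        SELECT CASE
                 WHEN LENGTH(ship) - LENGTH(REPLACE(ship, ' ')) > 1 THEN
                   SUBSTR(ship, 1, INSTR(ship, ' '))                              -- up to and including the first space
                   || RPAD('*', INSTR(ship, ' ', -1) - INSTR(ship, ' ') - 1, '*') -- one * per replaced character
                   || SUBSTR(ship, INSTR(ship, ' ', -1))                          -- from the last space onward
                 ELSE ship
               END AS ship
        FROM   outcomes;

    For example, 'Prince of Wales' (first space at 7, last at 10) becomes 'Prince ** Wales', while 'Baltic' passes through unchanged.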
