Search Results

Search found 32453 results on 1299 pages for 'osr wls oracle service re'.


  • Conditional subquery

    - by Tobias Schulte
    I have a table foo and a table bar, where each foo might have a bar (and a bar might belong to multiple foos). Now I need to select all foos with a bar. My SQL looks like this:

        SELECT *
          FROM foo f
         WHERE [...]
           AND ($param IS NULL
                OR (SELECT ((COUNT(*)) > 0) FROM bar b WHERE f.bar = b.id))

    with $param being replaced at runtime. The question is: will the subquery be executed even if $param is null, or will the DBMS optimize the subquery out?
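    Whether the optimizer short-circuits the OR is not guaranteed and varies by DBMS and version; the execution plan is the only reliable answer. A rewrite that avoids relying on it, and that optimizers generally handle well, is an EXISTS test (a sketch, keeping the $param placeholder and the f.bar = b.id relationship from the original query):

        SELECT f.*
          FROM foo f
         WHERE $param IS NULL
            OR EXISTS (SELECT 1 FROM bar b WHERE b.id = f.bar);

    EXISTS stops at the first matching row, so it is usually at least as cheap as the COUNT(*) > 0 form.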

    Read the article

  • CXF code first service, WSDL generation; soap:address changes?

    - by jcalvert
    I have a simple Java interface/implementation I am exposing via CXF. I have a jaxws element in my Spring configuration file like this:

        <jaxws:endpoint id="managementServiceJaxws"
                        implementor="#managementService"
                        address="/jaxws/ManagementService">
        </jaxws:endpoint>

    It generates the WSDL from my annotated interface and exposes the service. Then when I hit http://myhostname/cxf/jaxws/ManagementService?wsdl I get a lovely WSDL. At the bottom, in the wsdl:service element, I'll see:

        <soap:address location="http://myhostname/cxf/jaxws/ManagementService"/>

    However, some time (a day or so) later, with no application restart, hitting that same URL produces a soap:address pointing at localhost instead. This causes a number of problems, but what I really want is to fix it. Right now, there's a particular client of the web service that sets the endpoint to localhost, because it runs on the same machine. Is it possible the WSDL is getting regenerated and cached, and then the 'localhost' version is exposed?

    In part I don't know the exact mechanism by which one goes from a ?wsdl request in CXF to the response. It seems almost certain that it's retrieving some cached version, given that it's supposed to be determining the address by asking the servlet container (Jetty). For reference, I know a stopgap solution is using the hostname on the client and making sure an alias is in place so that it goes over the loopback.

    EDIT: For reference, I confirmed that if I bring my application up and first hit it over localhost, then querying for the WSDL via the hostname shows the address as localhost. Conversely, first hitting it over the hostname causes localhost requests to show the hostname. So obviously something is getting cached here.

    Read the article

  • odp.net SQL query: retrieve a set of rows from two input arrays

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column.

        SELECT pkt1.id, pkt1.id2, ...
          FROM PrimaryKeyTable pkt1, TABLE(:1) t1, TABLE(:2) t2
         WHERE pkt1.id = t1.column_value
           AND pkt1.id2 = t2.column_value

    I then bind the values with two int[] in odp.net. This returns all different combinations of my resulting rows, so if I am expecting 13 rows I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked: value t1[4] should be used with t2[4], not with all the different values in t2.

    Using DISTINCT solves my problem, but I'm wondering if my approach is wrong. Anyone have any pointers on how to solve this the best way? One way might be to use a for-loop accessing each index in t1 and t2 sequentially, but I wonder what will be more efficient.

    Edit: actually DISTINCT won't solve my problem; it only appeared to because of my input values (all values in t2 = 0).
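    The cross join happens because the two collections are unrelated in the FROM list. A common workaround is to pair the elements by position before joining (a sketch; note that Oracle does not formally guarantee ROWNUM ordering over TABLE() collections, although it is widely relied on for this kind of positional pairing):

        SELECT pkt1.id, pkt1.id2, ...
          FROM PrimaryKeyTable pkt1
         WHERE (pkt1.id, pkt1.id2) IN (
                 SELECT t1.column_value, t2.column_value
                   FROM (SELECT column_value, ROWNUM rn FROM TABLE(:1)) t1
                   JOIN (SELECT column_value, ROWNUM rn FROM TABLE(:2)) t2
                     ON t1.rn = t2.rn);

    An alternative that avoids the ordering question entirely is to bind a single array of a user-defined object type holding both key columns.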

    Read the article

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        SELECT xmlelement("rootNode",
                 (CASE WHEN XH.ID IS NOT NULL
                       THEN xmlelement("xhID", XH.ID)
                       ELSE xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
                  END),
                 (CASE WHEN XH.SER_NUM IS NOT NULL
                       THEN xmlelement("serialNumber", XH.SER_NUM)
                       ELSE xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
                  END),
                 /* repeat this pattern for many more columns from the same table... */
          FROM XH
         WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy as this is mostly a generated query, but my concern now is performance. I am wondering how much of an overhead there is for all of these CASE expressions. To see if there was any difference, I wrote another version of this query as:

        SELECT xmlelement("rootNode",
                 xmlforest(XH.ID, XH.SER_NUM, ...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the CASE-WHEN will cost? Also, I could write the CASE-WHEN using the DECODE function instead. Would that perform better than CASE expressions?
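    One way to collapse each CASE pair into a single call relies on the fact that XMLAttributes omits any attribute whose value expression evaluates to NULL. A sketch of the idea for one column:

        SELECT xmlelement("rootNode",
                 xmlelement("xhID",
                   xmlattributes(CASE WHEN XH.ID IS NULL THEN 'true' END AS "xsi:nil"),
                   XH.ID))
          FROM XH
         WHERE XH.ID = 'SOMETHINGOROTHER';

    When XH.ID is not null the CASE yields NULL, the xsi:nil attribute is dropped, and the element is emitted normally; when it is null the attribute appears. As for measuring the two variants: since expression evaluation cost does not show up in the plan, running each with SQL trace / TKPROF, or timing repeated executions, is more informative than comparing execution plans.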

    Read the article

  • Persist data when the table was not mapped (JPA EclipseLink)

    - by enrique
    Hi everybody, I need some help persisting data into a table that has not been mapped. The issue is that our database has a table, "category", all of whose columns are foreign keys, so when the whole database was mapped, every table was mapped correctly except that one. The way we can browse the data is by passing through "category" via the @JoinTable annotation, which the tooling set on the other tables that "category" has relations with, so we can use the collections and perform queries. But the problem comes when I want to persist data into that table, because there's no entity for it. We tried to persist through the collections, but no luck. Then I tried creating the entity, its PK class, and a facade all by hand. However, when I try to persist using the merge method, the system performs an INSERT when it is supposed to perform an UPDATE, so it returns an error. Does anybody have an idea about this situation? Thanks.

    Read the article

  • Refactoring PL/SQL triggers - extract procedures

    - by Juraj
    Hello, we have an application where the database contains large parts of the business logic in triggers, with an update subsequently firing triggers on several other tables. I want to refactor the mess and wanted to start by extracting procedures from the triggers, but I can't find any reliable tool to do this. Using "Extract procedure" in both SQL Developer and Toad failed to properly handle the :new and :old trigger variables. If you had a similar problem with triggers, did you find a way around it?

    EDIT: Ideally, only columns that are referenced by the extracted code would be passed as IN/OUT parameters. Example of original code to be extracted from the trigger:

        .....
        IF :new.col1 = some_var THEN
          :new.col1 := :old.col1;
        END IF;
        .....

    would become:

        PROCEDURE proc(old_col1 IN VARCHAR2,
                       new_col1 IN OUT VARCHAR2,
                       some_var IN VARCHAR2) IS
        BEGIN
          IF new_col1 = some_var THEN
            new_col1 := old_col1;
          END IF;
        END;
        ......
        proc(:old.col1, :new.col1, some_var);

    Read the article

  • 600 tables in DDBB

    - by Michael
    Hi all, I'm a very young software architect. I'm now working at a very large bank, and I have to lead a group of developers in rewriting the entire mortgage system. I'm looking at the database tables and I realize that there is no data model and no documentation. The worst part is that there are about 1000 tables in the dev environment and about 600 in production. I trust the production environment more, but either way, what can I do? Short of despairing: is there any good reverse-engineering tool, so that at least I could get the schema definition with the relations between tables and the comments extracted from the fields? Can you advise me on something? Thanks in advance.
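    If this is an Oracle database (as the surrounding questions suggest), the data dictionary already holds the relationships and comments that a reverse-engineering tool would draw from; a starting sketch for pulling the foreign-key links and table comments:

        SELECT c.table_name,
               c.constraint_name,
               r.table_name AS referenced_table
          FROM user_constraints c
          JOIN user_constraints r
            ON c.r_constraint_name = r.constraint_name
         WHERE c.constraint_type = 'R';

        -- table and column comments live in separate dictionary views
        SELECT table_name, comments FROM user_tab_comments;
        SELECT table_name, column_name, comments FROM user_col_comments;

    Running this (or a modeling tool's import) against production is sensible, given it is the trusted environment.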

    Read the article

  • Fill in missing values in a SELECT statement

    - by benjamin button
    I have a table with two fields, customer_id and order_id. Let's say I have in total the order IDs 1, 2, 3, 4, and every customer can have all four orders, like below:

        1234  1
        1234  2
        1234  3
        1234  4
        3245  3
        3245  4
        5436  2
        5436  4

    You can see above that customer 3245 doesn't have order IDs 1 and 2. How could I print the missing rows in the query output, like:

        3245  1
        3245  2
        5436  1
        5436  3

    EDIT: I don't have an orders table, but I have the list of order IDs, so we can hard-code it in the query (1, 2, 3, 4).
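    A common approach is to build the full customer x order-ID grid with a cross join, then subtract the rows that already exist. A sketch (the table name cust_orders is assumed; CONNECT BY generates the hard-coded list 1-4 on Oracle):

        SELECT c.customer_id, o.order_id
          FROM (SELECT DISTINCT customer_id FROM cust_orders) c
         CROSS JOIN (SELECT LEVEL AS order_id FROM dual CONNECT BY LEVEL <= 4) o
        MINUS
        SELECT customer_id, order_id FROM cust_orders;

    On a DBMS without CONNECT BY, the generated list can be written as a UNION ALL of the four constants, and MINUS becomes EXCEPT outside Oracle.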

    Read the article

  • drop-down combo box-like functionality for queries.

    - by Frank Developer
    Is it possible to provide the following type of functionality with the Informix client tools? As the user types the first two characters of a name, the drop-down list is empty. At the third character, the list fills with just the names beginning with those three characters. At the fourth character, MS Access completes the first matching name (assuming the combo's AutoExpand is on). Once enough characters are typed to identify the customer, the user tabs to the next field. The time taken to load the combo between keystrokes is minimal, and this occurs only once for each entry, unless the user backspaces through the first three characters again. If the list still contains too many records, you can reduce them by another order of magnitude by changing the value of the constant conSuburbMin from 3 to 4.

    Read the article

  • Counting a cell up per Objects

    - by Auro
    Hey, I've got a problem once again :D A little info first: I'm trying to copy data from one table to another table (the structure is the same). One cell needs to be incremented, beginning at 1 per group (just like a history). I have these tables (My_Test and My_Test2 share this structure):

        create table My_Test (
          my_Id   Number(8,0),
          my_Num  Number(6,0),
          my_Data Varchar2(100));

    (my_Id, my_Num form a composite PK.) If I want to insert a new row, I need to check whether the value in my_Id already exists; if it does, I need to use the next my_Num for that id. I have this in my table:

        My_Id  My_Num  My_Data
        1      1       'test1'
        1      2       'test2'
        2      1       'test3'

    If I now add a row for my_Id 1, the row would look like this:

        My_Id  My_Num  My_Data
        1      3       'test4'

    This sounds pretty easy. Now I need to do it in SQL; on SQL Server I had the same problem and I used this:

        Insert Into My_Test (My_Id, My_Num, My_Data)
        SELECT my_Id,
               (SELECT CASE (CASE MAX(a.my_Num) WHEN NULL THEN 0
                             Else Max(A.My_Num) END) + b.My_Num
                       WHEN NULL THEN 1
                       ELSE (CASE MAX(a.My_Num) WHEN NULL THEN 0
                             Else Max(A.My_Num) END) + b.My_Num
                       END
                  From My_Test A
                 where my_id = 1),
               My_Data
          From My_Test2 B
         where my_id = 1;

    This SELECT gives NULL back if no rows are found in the subselect. Is there a way I could use MAX in the CASE, and if it gives NULL back, use 0 or 1? Greets, Auro
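    Note that CASE x WHEN NULL never matches, because the simple CASE compares with equality and NULL = NULL is not true; that is why the query misbehaves. NVL (COALESCE on other systems) handles the empty-group case directly. A sketch:

        INSERT INTO My_Test (my_Id, my_Num, my_Data)
        SELECT b.my_Id,
               -- MAX over zero rows returns NULL; NVL turns that into 0
               NVL((SELECT MAX(a.my_Num)
                      FROM My_Test a
                     WHERE a.my_Id = b.my_Id), 0) + b.my_Num,
               b.my_Data
          FROM My_Test2 b
         WHERE b.my_Id = 1;

    The source row's my_Num supplies the offset on top of the current maximum, matching the intent of the original query.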

    Read the article

  • Do you catch expected exceptions in the controller or business service of your asp.net mvc application

    - by Pascal
    I am developing an asp.net mvc application where user1 could delete data records which were just loaded by user2. User2 then either changes this non-existent record (update) or does an insert with this data in another table, so that a foreign-key constraint is violated. Where do you catch such expected exceptions: in the controller of your asp.net mvc application, or in the business service?

    Just a side note: I only catch the SqlException here if it's a foreign-key constraint exception, to tell the user that another user has deleted a certain parent record and that he therefore cannot create the test plan. But this code is not fully implemented yet!

    Controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            bool success = false;
            string error = string.Empty;
            try
            {
                success = testplanService.CreateTestplan(testplan);
            }
            catch (SqlException ex)
            {
                error = ex.Message;
            }
            return Json(new { success = success, error = error }, JsonRequestBehavior.AllowGet);
        }

    OR business service:

        public Result CreateTestplan(Testplan testplan)
        {
            Result result = new Result();
            try
            {
                using (var con = new SqlConnection(_connectionString))
                using (var trans = new TransactionScope())
                {
                    con.Open();
                    _testplanDataProvider.AddTestplan(testplan);
                    _testplanDataProvider.CreateTeststepsForTestplan(testplan.Id, testplan.TemplateId);
                    trans.Complete();
                    result.Success = true;
                }
            }
            catch (SqlException e)
            {
                result.Error = e.Message;
            }
            return result;
        }

    and then in the controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            Result result = testplanService.CreateTestplan(testplan);
            return Json(new { success = result.Success, error = result.Error }, JsonRequestBehavior.AllowGet);
        }

    Read the article

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. The problem is, I need to apply those same changes to the test environment, and I'm wondering how others handle this. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment, I usually just blow everything away, export the DDL scripts from dev, and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL manually rather than using the GUI?
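    Hand-written migration scripts kept in version control are the usual answer, but the dictionary can generate the baseline for you; a sketch using DBMS_METADATA to dump the current dev DDL for diffing against test:

        SELECT dbms_metadata.get_ddl('TABLE', table_name)
          FROM user_tables;

    Comparing the two dumps (or using SQL Developer's own Database Diff tool under the Tools menu) yields the ALTER statements instead of you writing them twice by hand.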

    Read the article

  • Determining a database's OS with a SQL query?

    - by KaizenSoze
    I'm writing a tool to gather customer configuration information. One of the questions I want to answer is: what OS is the customer's database running on? I haven't found a generic way to find the OS with SQL, and I can't create stored procedures on the customer's database. If there is a way, it's probably vendor specific. Suggestions? Thanks in advance.
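    It is indeed vendor specific; a few plain-SELECT probes that need no stored procedures (a sketch; availability depends on version and privileges):

        -- Oracle 10g+: platform of the server hosting the database
        SELECT platform_name FROM v$database;
        -- Oracle, older releases: the port string encodes OS/architecture
        SELECT dbms_utility.port_string FROM dual;
        -- SQL Server: @@VERSION includes the host OS details
        SELECT @@VERSION;

    The tool can try each probe in turn and use whichever succeeds to identify the vendor and the OS together.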

    Read the article

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an oData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    The sample oData feed (markup stripped in this excerpt) is a ProductSet at http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/, with entries keyed like ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') and properties such as "Product 0", "Type 1", "Active".

    Sample web.config entry:
    Sample service:

        [EnableClientAccess()]
        public class ProductService : DomainService
        {
            [Query(IsDefault = true)]
            public IQueryable<Product> GetProducts()
            {
                IList<Product> products = new List<Product>();
                for (int i = 0; i < 90; i++)
                {
                    Product product = new Product
                    {
                        Id = Guid.NewGuid(),
                        Name = "Product " + i.ToString(),
                        ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                        Status = i % 2 == 0 ? "Active" : "NotActive"
                    };
                    products.Add(product);
                }
                return products.AsQueryable();
            }
        }

    If I provide the URL http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') to my web browser, I receive the following XML: "Requests that attempt to access a single element using key values from a result set are not supported." I'm new to RIA and oData. Could this be something as simple as my web browsers not supporting this type of querying on the result set, or something else? Thanks ahead! Corey

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.Net Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");
                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName, string password)
                    : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....

    Read the article

  • How can I store large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1, "Turning ResultSets into XML"), and XMLWriter for Java from http://www.megginson.com/downloads/. Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and numbers:

        // Iterate over the set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                    } else if (ob instanceof BigDecimal) {
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that gets the results uses the to_number function (e.g. to_number(sif.ID) ID) and the to_date function (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers: for some reason they are in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML would be unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) Can I get the date in the format I want and skip the additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size category.
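    For (a), having the database hand back strings already in the target format removes both instanceof branches; a sketch of the SELECT-side conversion (the sif alias and column names are taken from the question):

        SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') AS datum_do,
               -- strip leading zeroes from the varchar2 IDs; NVL guards all-zero values
               NVL(LTRIM(sif.id, '0'), '0') AS id
          FROM sif;

    TO_CHAR yields exactly dd.mm.yyyy with no time portion, and LTRIM strips the leading zeroes, so every column can then go through the plain ob.toString() path in the Java loop.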

    Read the article

  • how to optimize this query?

    - by jest
    Hi! The query is:

        SELECT employee_id,
               last_name,
               salary,
               ROUND((salary + (salary * 0.15)), 0) AS NewSalary,
               (ROUND((salary + (salary * 0.15)), 0) - salary) AS "IncreaseAmount"
          FROM employees;

    Can I optimize the ROUND((salary + (salary * 0.15)), 0) part in any way, so that it doesn't appear twice? I tried giving it an alias, but that didn't work :(
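    A column alias can't be referenced in the same SELECT list, which is why the alias attempt failed; wrapping the calculation in an inline view (or a CTE) lets the outer query reuse it. A sketch:

        SELECT employee_id,
               last_name,
               salary,
               new_salary,
               new_salary - salary AS increase_amount
          FROM (SELECT employee_id,
                       last_name,
                       salary,
                       ROUND(salary * 1.15, 0) AS new_salary
                  FROM employees);

    salary + salary * 0.15 simplifies to salary * 1.15, and the rounded expression is now written only once.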

    Read the article

  • How do I insert data into an object-relational table with multiple REFs in the schema?

    - by Yiling
    I have a table with a schema of Table(number, ref, ref, varchar2, varchar2, ...). How would I insert a row of data into this table? When I do:

        insert into table values (1,
          select ref(p), ref(d), '239 F.3d 1343', '35 USC § 283', ...
          from plaintiff p, defendant d
          where p.name='name1' and d.name='name2');

    I get a "missing expression" error. If I do:

        insert into table 1,
          select ref(p), ref(d), ...
          from plaintiff p, defendant d
          where p.name=...;

    I get a "missing keyword VALUES" error.
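    A SELECT can't be spliced into a VALUES list; when any part of the row comes from a query, the whole row has to come from the query. A sketch with the literals moved into the SELECT list (the table name case_table is illustrative):

        INSERT INTO case_table
        SELECT 1, REF(p), REF(d), '239 F.3d 1343', '35 USC § 283'
          FROM plaintiff p, defendant d
         WHERE p.name = 'name1'
           AND d.name = 'name2';

    Alternatively, VALUES can stay if each REF is fetched by its own scalar subquery, e.g. VALUES (1, (SELECT REF(p) FROM plaintiff p WHERE p.name = 'name1'), ...).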

    Read the article

  • What tool for managing Oracle DB do you suggest?

    - by Artic
    What tool for managing Oracle DB do you suggest? I need to execute scripts, manage data in tables, and develop some scripts and packages. I've tried SQL Developer and don't really like it. I want more features for development (debugging, code assist, integrated help, and so on).

    Read the article

  • Indexing XMLType columns

    - by Chris
    Hello, I am working with an XMLType column and currently experiencing significant performance issues, so I would like to add indexing on the column. Currently I am taking the approach of using the XMLTable() and XQuery functions to create a virtual table. I would like to use this virtual table to create a function-based index on the table containing the XMLType, but I am receiving this error:

        Error report:
        SQL Error: ORA-00907: missing right parenthesis
        00907. 00000 - "missing right parenthesis"
        *Cause:
        *Action:

    This is the index... any assistance would be greatly appreciated:

        CREATE INDEX indx_medicinalproduct ON d.ProductName
          XMLTable('for $i at $a in /safetyreport/patient//drug
                    for $j in $i/medicinalproduct
                    return element r { $i/medicinalproduct }'
                   PASSING s.safetyreport
                   COLUMNS ProductName varchar2(70) PATH 'medicinalproduct') d;
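    CREATE INDEX can't take an XMLTable expression as its target, which is what trips the parser here. The usual route on Oracle 11g+ is an XMLIndex on the XMLType column itself; a minimal sketch, assuming the column is safetyreport on a table s as the PASSING clause suggests (the structured XMLIndex variant, which projects specific paths into a side table, needs a fuller parameter clause):

        CREATE INDEX indx_medicinalproduct ON s (safetyreport)
          INDEXTYPE IS XDB.XMLINDEX
          PARAMETERS ('PATH TABLE safetyreport_path_tab');

    An unstructured XMLIndex like this accelerates arbitrary XPath predicates such as the //drug/medicinalproduct lookups; on older releases, a function-based index over extractvalue() on a single-valued path was the common alternative.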

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach:

        alter session set skip_unusable_indexes = true;
        alter index your_index unusable;
        [do import]
        alter index your_index rebuild;

    However, I get the following error on the first alter index statement:

        SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
        ORA-06512: [...]
        14048. 00000 - "a partition maintenance operation may not be combined with other operations"
        *Cause: ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or PCTFREE) which is illegal
        *Action: Ensure that a partition maintenance operation is the sole operation specified in ALTER TABLE or ALTER INDEX statement; operations other than those dealing with partitions, default attributes of partitioned tables/indices or specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will

    The problem index is defined so:

        CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
          INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
          PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.
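    Domain indexes like this one generally don't support ALTER INDEX ... UNUSABLE; a common workaround is to drop the index before the bulk operation and recreate it afterwards from its stored definition. A sketch as it would look inside PL/SQL, where DDL has to go through EXECUTE IMMEDIATE:

        BEGIN
          EXECUTE IMMEDIATE 'DROP INDEX a11_ix1';
          -- [do import]
          EXECUTE IMMEDIATE q'[CREATE INDEX a11_ix1 ON streets ("SHAPE")
            INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
            PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2')]';
        END;
        /

    Whether dropping an ST_SPATIAL_INDEX mid-load is safe is a question for the vendor's (ArcSDE) documentation, since a domain index of this kind may maintain side tables of its own.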

    Read the article

  • Performing Inner Join for Multiple Columns in the Same Table

    - by frankiefrank
    I have a scenario which I'm a bit stuck on. Let's say I have a survey about colors, and I have one table for the color data and another for people's answers.

        tbColors
        color_code , color_name
        1          , 'blue'
        2          , 'green'
        3          , 'yellow'
        4          , 'red'

        tbAnswers
        answer_id , favorite_color , least_favorite_color , color_im_allergic_to
        1         , 1              , 2                    , 3
        2         , 3              , 1                    , 4
        3         , 1              , 1                    , 2
        4         , 2              , 3                    , 4

    For display, I want to write a SELECT that presents the answers table but using the color_name column from tbColors. I understand the "most stupid" way to do it: naming tbColors three times in the FROM section, using a different alias for each column to replace. How would a non-stupid way look?
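    Joining the lookup table once per column is in fact the standard answer; with short aliases it stays perfectly readable. A sketch:

        SELECT a.answer_id,
               fav.color_name AS favorite_color,
               lst.color_name AS least_favorite_color,
               alg.color_name AS color_im_allergic_to
          FROM tbAnswers a
          JOIN tbColors fav ON fav.color_code = a.favorite_color
          JOIN tbColors lst ON lst.color_code = a.least_favorite_color
          JOIN tbColors alg ON alg.color_code = a.color_im_allergic_to;

    The optimizer treats each aliased copy independently, so against a four-row lookup table the cost is negligible; unpivoting the answers, joining once, and re-pivoting would be more code for no gain here.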

    Read the article

  • SQL query showing missing expression

    - by Ashok Dasari
    A query to pull the data between yesterday 6 AM and today 6 AM:

        SELECT lot_id,
               log_time,
               batch_no,
               eqp_id,
               station_id,
               EXTRACTVALUE(meta_data, '/lot_info/A3') AS A3,
               EXTRACTVALUE(meta_data, '/lot_info/A3Info') AS A3Info,
               EXTRACTVALUE(meta_data, '/lot_info/apc_status_info') AS apc_status_info
          FROM t_dlis_log_history
         WHERE ((EQP_ID = 'ALC4360') OR (EQP_ID = 'ALC4361') OR (EQP_ID = 'ALC1360')
             OR (EQP_ID = 'ALC1361') OR (EQP_ID = 'ALC1362') OR (EQP_ID = 'ALC1363')
             OR (EQP_ID = 'ALC1364') OR (EQP_ID = 'ALC1365') OR (EQP_ID = 'ALC355')
             OR (EQP_ID = 'ALC353') OR (EQP_ID = 'ALC4350') OR (EQP_ID = 'ALC354'))
           AND ((log_time >= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE(), 110)))
            AND (log_time <= DATEADD(HOUR, 6, CONVERT(VARCHAR(10), GETDATE() + 1, 110))))

    It is showing the error "missing expression" ...
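    EXTRACTVALUE is an Oracle function, while DATEADD, CONVERT, and GETDATE are SQL Server functions; on Oracle they are unknown identifiers, which is consistent with the "missing expression" error. The Oracle way to express "yesterday 06:00 up to today 06:00" is date arithmetic on SYSDATE; a sketch of the replacement predicate:

        AND log_time >= TRUNC(SYSDATE - 1) + 6/24
        AND log_time <  TRUNC(SYSDATE) + 6/24

    TRUNC(SYSDATE) is today at midnight, and adding 6/24 shifts it to 06:00; an IN list would also tidy the twelve EQP_ID comparisons.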

    Read the article
