Search Results

Search found 34207 results on 1369 pages for 'query output'.

Page 19 of 1369

  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example:

        Sql = "insert into TableX values (?,?)".
        Params = [{sql_integer, [Val1]},
                  {sql_float, [Val2]}].  % Val2 may be a float, or it may be the atom 'undefined'
        odbc:param_query(OdbcRef, Sql, Params).

    Now, of course, odbc:param_query/3 is going to complain if Val2 is undefined when it tries to match it against sql_float, but my question is: is it possible to use a parameterized query, such as

        Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".

    with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I have no guarantee that all of the data will be there. Concatenating SQL together for each query while checking for null values seems complex and the wrong way to do it, and having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!

    Read the article

  • Doctrine/symfony: getSqlQuery() output in phpMyAdmin/SQL tab

    - by user248959
    Hi, I have created this query, which works OK:

        $q1 = Doctrine_Query::create()
            ->from('Usuario u')
            ->leftJoin('u.AmigoUsuario a ON u.id = a.user2_id OR u.id = a.user1_id')
            ->where("a.user2_id = ? OR a.user1_id = ?", array($id, $id))
            ->andWhere("u.id <> ?", $id)
            ->andWhere("a.estado LIKE ?", 1);
        echo $q1->getSqlQuery();

    The call to getSqlQuery() outputs this clause:

        SELECT s.id AS s_id, s.username AS s_username, s.algorithm AS s_algorithm, s.salt AS s_salt, s.password AS s__password, s.is_active AS s__is_active, s.is_super_admin AS s__is_super_admin, s.last_login AS s__last_login, s.email_address AS s__email_address, s.nombre_apellidos AS s__nombre_apellidos, s.sexo AS s__sexo, s.fecha_nac AS s__fecha_nac, s.provincia AS s_provincia, s.localidad AS s_localidad, s.fotografia AS s_fotografia, s.avatar AS s_avatar, s.avatar_mensajes AS s__avatar_mensajes, s.created_at AS s__created_at, s.updated_at AS s__updated_at, a.id AS a__id, a.user1_id AS a__user1_id, a.user2_id AS a__user2_id, a.estado AS a__estado FROM sf_guard_user s LEFT JOIN amigo_usuario a ON ((s.id = a.user2_id OR s.id = a.user1_id)) WHERE ((a.user2_id = ? OR a.user1_id = ?) AND s.id <> ? AND a.estado LIKE ?)

    If I paste that clause into the phpMyAdmin SQL tab, I get this error:

        1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '? OR a.user1_id = ?) AND s.id <> ? AND a.estado LIKE ?) LIMIT 0, 30' at line 1

    Why am I getting this error? Regards, Javi
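
    The ? markers in the getSqlQuery() output are unbound placeholders, so phpMyAdmin cannot run the statement verbatim. A minimal way to test it by hand is to substitute literal values for each placeholder - a sketch, using 5 as a hypothetical stand-in for $id and keeping only a few columns:

        SELECT s.id, s.username, a.estado
        FROM sf_guard_user s
        LEFT JOIN amigo_usuario a ON (s.id = a.user2_id OR s.id = a.user1_id)
        WHERE (a.user2_id = 5 OR a.user1_id = 5)
          AND s.id <> 5
          AND a.estado LIKE 1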

    Read the article

  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know that duplicating records suggests the database structure is bad, but I did not design the database and am not in a position to redo it all - it's a Shopp ecommerce database running on WordPress.) Each product with a particular attribute needs a link to the same few images, so the product will need one row per image in a table - the database doesn't actually contain the image, just its filename. (The images are clipart for a customer to select from.)

    Based on the records returned by...

        SELECT * FROM `wp_shopp_spec` WHERE name = 'Can Be Personalised' AND content = 'Yes'

    ...I want to do something like this: for each record that matches that query, copy records 5134 - 5139 from wp_shopp_asset, but change the id so it's unique and set the cell in column 'parent' to the value of 'product' from the table wp_shopp_spec. This will mean 6 new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids and every other column copied from the original (i.e. records 5134-5139). Hope that's clear enough - any help greatly appreciated.
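
    A minimal sketch of one possible approach, assuming wp_shopp_asset has an auto-increment id and using the hypothetical placeholders name, value and datatype for its remaining columns (substitute the real column list):

        INSERT INTO wp_shopp_asset (parent, name, value, datatype)    -- id omitted so it auto-increments
        SELECT spec.product, asset.name, asset.value, asset.datatype  -- column names are placeholders
        FROM wp_shopp_spec AS spec
        JOIN wp_shopp_asset AS asset
          ON asset.id BETWEEN 5134 AND 5139
        WHERE spec.name = 'Can Be Personalised'
          AND spec.content = 'Yes';

    The join deliberately has no condition tying spec to asset, so each matching spec row is paired with all six template assets, producing the six copies per product.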

    Read the article

  • Trouble creating a SQL query

    - by JoBu1324
    I've been thinking about how to compose this SQL query for a while now, but after thinking about it for a few hours I thought I'd ask the SO community to see if they have any ideas. Here is a mock-up of the relevant portion of the tables:

        contracts: id, date, ar (yes/no), term
        payments:  contract_id, payment_date

    The object of the query is to determine, per month, how many payments we expect vs. how many payments we received.

    Conditions for expecting a payment: expected payments begin contracts.term months after contracts.date, if contracts.ar is "yes". Payments continue to be expected until the month after the first missed payment. There is one other complication to this: payments might be late, but they need to show up as if they were paid on the expected date.

    The data is all there, but I've been having trouble wrapping my head around the SQL query. I am not an SQL guru - I merely have a decent amount of experience handling simpler queries. I'd like to avoid filtering the results in code, if possible - but without your help that may be what I have to do.

    Expected output:

        Month      Expected Payments   Received Payments
        January    500                 450
        February   498                 478
        March      234                 211
        April      987                 789
        ...

    I've created an SQL Fiddle: http://sqlfiddle.com/#!2/a2c3f/2

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns in the table simply describe information about one of the columns; meaning, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have the content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    This kind of works; the only problem is that it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... Pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
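
    A minimal sketch of one common fix, using the hypothetical names book_rows for the 10,000-row table and extra_content for the table holding the missing 4,000 rows: restrict the UPDATE to rows that actually have a match, so the correlated subquery never returns NULL for the first 6,000 rows.

        UPDATE book_rows br                          -- hypothetical name for the 10,000-row table
        SET br.content_column = (SELECT ec.content
                                 FROM extra_content ec   -- hypothetical name for the second table
                                 WHERE ec.id = br.id)
        WHERE EXISTS (SELECT 1
                      FROM extra_content ec
                      WHERE ec.id = br.id);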

    Read the article

  • how to query an embedded entity by using a query builder

    - by user577719
    I've searched quite a while for an answer to this question. Here is the code in question:

        @Entity
        public class Person {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            protected Integer id;

            @Column(nullable = true, length = 50)
            @Size(max = 50)
            private String name;

            @Embedded
            @Valid
            protected Adress adress;

            public void setId(Integer id) { this.id = id; }
            public Integer getId() { return this.id; }
            public void setName(String name) { this.name = name; }
            public void getName() { return this.name; }
            public void setAdress(Adress adress) { this.adress = adress; }
            public void getAdress() { return this.adress; }
        }

        @Embeddable
        public class Adress {
            @Column(nullable = false, length = 50)
            @Size(max = 50)
            @NotNull
            private String place;

            public void setPlace(String place) { this.place = place; }
            public void getPlace() { return this.place; }
        }

        public class PersonDaoJpa {
            public List<Ort> findByPerson(final Person person) {
                CriteriaBuilder builder = this.entityManager.getCriteriaBuilder();
                CriteriaQuery<Person> query = builder.createQuery(Person.class);
                Root<Person> rootPerson = query.from(Person.class);
                List<Predicate> wherePredicates = new ArrayList<Predicate>();

                if (person.getName() != null) {
                    wherePredicates.add(
                        builder.like(builder.lower(rootPerson.<String>get("name")),
                                     ort.getName().toLowerCase())
                    );
                }
                Adresse adresse = ort.getAdresse();
                if (adresse != null) {
                    if (adresse.getPlace() != null) {
                        // this won't work
                        wherePredicates.add(
                            builder.like(builder.lower(rootPerson.<String>get("person.adress.place")),
                                         adresse.getPlace().toLowerCase())
                        );
                    }
                }
                Predicate whereClause = builder.and(wherePredicates.toArray(new Predicate[0]));
                query.where(whereClause);
                return this.entityManager.createQuery(query).getResultList();
            }
        }

    How can I access Adress.place through rootPerson? Neither rootPerson.get("place") nor rootPerson.get("adress.place") works...

    Read the article

  • Ultra-grand super acts_as_tree rails query

    - by Bloudermilk
    Right now I'm dealing with an issue regarding an intense acts_as_tree MySQL query via Rails. The model I am querying is Foo. A Foo can belong to any one City, State or Country. My goal is to query Foos based on their location. My locations table is set up like so: I have a table in my database called locations, and I use a combination of acts_as_tree and polymorphic associations to store each individual location as either a City, State or Country. (This means that my table consists of the columns id, name, parent_id, type.)

    Let's say, for instance, I want to query Foos in the state "California". Besides Foos that directly belong to "California", I should get all Foos that belong to every City in "California", like Foos in "Los Angeles" and "San Francisco". Not only that, but I should also get any Foos that belong to the Country that "California" is in, "United States". I've tried a few things with associations to no avail. I feel like I'm missing some super-helpful Rails-fu here. Any advice? (A rough SQL sketch of the idea follows below.)
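
    A minimal sketch of the kind of SQL the lookup ultimately has to produce, assuming a hypothetical location_id foreign key on foos (the real schema presumably uses a polymorphic column pair) and a placeholder :state_id for California's row in locations - the idea is simply to collect the state itself, its child cities, and its parent country into one id set:

        SELECT f.*
        FROM foos f
        WHERE f.location_id IN (                                        -- hypothetical FK column
              SELECT id FROM locations WHERE id = :state_id             -- the state itself
              UNION
              SELECT id FROM locations WHERE parent_id = :state_id      -- its cities
              UNION
              SELECT parent_id FROM locations WHERE id = :state_id      -- its country
        );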

    Read the article

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often used query. Using a CompiledQuery seemed to be the answer. But when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

        static class Queries
        {
            // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
            private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
                CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                    (entities, tag) => from trade in entities.TradeSet
                                       where trade.trade_tag == tag
                                       select trade);

            public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
                return tradeQuery.FirstOrDefault();
            }

            public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                               where trade.trade_tag == tag
                                               select trade;
                return tradeQuery.FirstOrDefault();
            }
        }

    Read the article

  • Entity framework 4.0 compiled query with Where() clause issue

    - by Andrey Salnikov
    Hello, I encountered some strange behavior of the System.Data.Objects.CompiledQuery.Compile function. Here is my code for compiling a simple query:

        private static readonly Func<DataContext, long, Product> productQuery =
            CompiledQuery.Compile((DataContext ctx, long id) =>
                ctx.Entities.OfType<Data.Product>()
                   .Where(p => p.Id == id)
                   .Select(p => new Product { Id = p.Id })
                   .SingleOrDefault());

    where DataContext inherits from ObjectContext and Product is a projection of the POCO Data.Product class. My data context in the first run contains Data.Product {Id == 1L} and in the second Data.Product {Id == 2L}. The first use of the compiled query, productQuery(dataContext, 1L), works perfectly - the result is Product {Id == 1L} - but the second run, productQuery(dataContext, 2L), always returns null, even though the context in the second run contains a single product with id == 2L. If I remove the Where clause I get the correct product (with id == 2L). It seems that the first id value is cached during the first run of productQuery, and therefore all further calls are valid only when dataContext contains Data.Product {id == 1L}. This issue can't be reproduced if I use a direct query instead of its precompiled version. All tests were performed on a test mdf database using SQL Server 2008 Express and Visual Studio 2010 final, from my ASP.NET application.

    Read the article

  • Help converting subquery to query with joins

    - by Tim
    I'm stuck on a query with a join. The client's site is running MySQL 4, so a subquery isn't an option. My attempts to rewrite using a join aren't going too well. I need to select all of the contractors listed in the contractors table who are not in the contractors2label table with a given label ID & county ID. Yet, they might be listed in contractors2label with other label and county IDs.

        Table: contractors
            cID (primary, autonumber)
            company (varchar)
            ...etc...

        Table: contractors2label
            cID
            labelID
            countyID
            psID

    This query with a subquery works:

        SELECT company, contractors.cID
        FROM contractors
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors.cID NOT IN (
              SELECT contractors2label.cID
              FROM contractors2label
              WHERE labelID <> 1
                AND countyID <> 1
          )

    I thought this query with a join would be the equivalent, but it returns no results. A manual scan of the data shows I should get 34 rows, which is what the subquery above returns.

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label ON contractors.cID = contractors2label.cID
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.labelID <> 1
          AND contractors2label.countyID <> 1
          AND contractors2label.cID IS NULL
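
    A sketch of the usual rewrite: for a LEFT JOIN anti-join to behave like the NOT IN subquery, the filter on the right-hand table has to live in the ON clause rather than the WHERE clause, otherwise the labelID/countyID tests throw away exactly the NULL rows the IS NULL check is looking for (this assumes cID is never NULL in contractors2label):

        SELECT c.company, c.cID
        FROM contractors c
        LEFT OUTER JOIN contractors2label c2l
               ON c.cID = c2l.cID
              AND c2l.labelID <> 1
              AND c2l.countyID <> 1
        WHERE c.complete = 1
          AND c.archived = 0
          AND c2l.cID IS NULL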

    Read the article

  • MySQL product/tag query optimisation - please help!

    - by Nige
    Hi there. I have an SQL query I am struggling to optimise. It is basically used to pull back products for a shopping cart. The products each have tags attached via a many-to-many table product_tag, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for the display (this is why I have the strange group-by/order-by clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id = product_tag.productid
        JOIN tags ON tags.id = product_tag.tagid
        JOIN stores ON products.cid = stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products) the query is taking 0.0034 seconds to run. Eventually I want to have about 2,000 products and 50 tags in this system (I'm guessing this will be very slow). Here is the EXPLAIN output:

        id|select_type|table      |type  |possible_keys  |key    |key_len|ref                           |rows|Extra
        1 |SIMPLE     |tags       |ALL   |PRIMARY        |NULL   |NULL   |NULL                          |4   |Using temporary; Using filesort
        1 |SIMPLE     |product_tag|ref   |tagid,productid|tagid  |4      |cs_final.tags.id              |2   |
        1 |SIMPLE     |products   |eq_ref|PRIMARY,cid    |PRIMARY|4      |cs_final.product_tag.productid|1   |Using where
        1 |SIMPLE     |stores     |ALL   |siteid         |NULL   |NULL   |NULL                          |7   |Using where; Using join buffer

    Can anyone help?
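
    One small, hedged suggestion based on the EXPLAIN output: the WHERE and ORDER BY both use products.dateadded, which has no index to lean on, so an index there is a cheap thing to try (whether the optimiser can use it for this grouped query depends on the rest of the schema):

        -- assumes products.dateadded exists exactly as used in the query above
        ALTER TABLE products ADD INDEX idx_products_dateadded (dateadded);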

    Read the article

  • Converting java language output to Joomla language output

    - by jax
    In Java, if I run:

        Locale.getDefault().toString()

    I get zh_tw. I am sending this to a Joomla site and setting the language like this:

        $lang = &JFactory::getLanguage();
        $lang->setLanguage( $_GET['lang'] );
        $lang->load();

    However, the site requires the following format: zh-TW. It appears that if it is not in that exact format, the language will not change. Is there a function somewhere in Java or PHP that will convert the format for me? I realise that I could write the method myself, like this:

        public static String convertLanguageToJoomlaFormat(String lang) {
            String[] parts = lang.split("_");
            if (parts.length == 2)
                return parts[0] + "-" + parts[1].toUpperCase();
            return lang;
        }

    but I am unsure whether there are any cases where the format changes for particular languages.

    Read the article

  • please suggest mysql query for this

    - by I Like PHP
    I have the two tables shown below:

        table_joining
        id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
        1   j_1          t_1              u_1      2010-06-05     2010-03-05
        2   j_2          t_2              u_3      2010-05-10     2010-03-10
        3   j_3          t_3              u_6      2010-04-10     2010-01-01
        4   j_5          NULL             u_3      NULL           2010-06-05
        5   j_6          NULL             u_4      NULL           2010-05-05

        table_transfer
        id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
        1   t_1              u_3         u_1           2010-06-05
        2   t_2              u_6         u_1           2010-05-10
        3   t_3              u_5         u_3           2010-04-10

    Now I want to know the total employee details (using join_id) for those currently working in unit u_3. That means I want only these join_ids:

        j_1 (has been transferred, but the effective_transfer_date is a future date, so right now still in u_3)
        j_2 (transferred and right now in u_3 because the effective_transfer_date has passed)
        j_6 (right now in u_3 and never transferred)

    As far as I know, these are the steps I need to take care of:

        1. First check from table_joining whether transfer_id is NULL or not.
        2. If transfer_id is NULL, then check unit_id = 'u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3).
        3. If transfer_id is NOT NULL, then go to table_transfer using transfer_id (foreign key reference).
        4. Now check the effective_transfer_date for that transfer_id: whether effective_transfer_date <= CURDATE().
        5. If the transfer date has passed (meaning the transfer has been done), then return futureUnitId, otherwise return pastUnitId.

    I used two separate queries, but I don't know how to join them. For steps 1 and 2:

        SELECT unit_id
        FROM table_joining
        WHERE joining_date <= CURDATE()
          AND transfer_id IS NULL
          AND unit_id = 'u_3'

    For step 5:

        SELECT IF(effective_transfer_date <= CURDATE(), futureUnitId, pastUnitId) AS currentUnitID
        FROM table_transfer
        -- here, how do we select only those rows which have currentUnitID = 'u_3'?

    Please guide me through the process. I'm just confused by the JOINs. I think using a LEFT JOIN can return the data I need, or maybe a subquery value in the main query? But I'm not getting how to implement it... Thanks for always helping me.
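
    A minimal sketch that simply encodes the five steps above in one statement via a LEFT JOIN (an assumption-level sketch of the described logic, not validated against the sample data):

        SELECT j.join_id
        FROM table_joining j
        LEFT JOIN table_transfer t ON t.transfer_id = j.transfer_id
        WHERE (j.transfer_id IS NULL                               -- steps 1-2: never transferred
               AND j.unit_id = 'u_3'
               AND j.joining_date <= CURDATE())
           OR (j.transfer_id IS NOT NULL                           -- steps 3-5: transferred
               AND IF(t.effective_transfer_date <= CURDATE(),
                      t.futureUnitId,
                      t.pastUnitId) = 'u_3');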

    Read the article

  • Query to bring count from comma-separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing the orders list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY,
                                  ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'),
               (5,'Product E'), (6,'Product F'), (7,'Product G'), (8,'Product H'),
               (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50),
                               CSVProductIds VARCHAR(50));

        SELECT * FROM OrderList;

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The order list stores the item ids as a comma-separated value for every customer who places an order. I have more than 40k records like this in my DB table. Now I am assigned the task of creating a report in which I should display items and the number of people who ordered each item, as shown below:

        ItemName   NoOfOrders
        Product A  4
        Product B  3
        Product C  1
        Product D  3
        Product E  4
        Product F  0
        Product G  2
        Product H  1
        Product I  2
        Product J  1

    I used a query like the one below in my PHP to bring the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId)
        FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query?
    2. Does using LIKE in a MySQL query slow down the DB when the table has more records, i.e. 40k rows?
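
    A sketch of one way to get the whole report in a single query, using MySQL's FIND_IN_SET to test membership in the comma-separated column (the LEFT JOIN keeps products with zero orders; note this still cannot use an index, so on 40k+ rows a proper order-items junction table would serve better):

        SELECT p.ProductName AS ItemName,
               COUNT(o.OrderId) AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
               ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;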

    Read the article

  • SQL Server - Determine which query is taking a long time to complete

    - by Neil Smith
    Here's a cool little trick to determine which SQL query is taking a long time to execute. First, while the offending query is running, from another machine do:

        EXEC sp_who2

    Locate the SPID responsible via the Login, DBName and ProgramName columns, then do:

        DBCC INPUTBUFFER (<SPID>)

    The offending query will be in the EventInfo column. This is a great little time saver for me; before I found out about this I used to split my concatenated query script into multiple SQL files until I located the problem query.

    Read the article

  • Lookup for data sources in a query

    - by DAXShekhar
        public static str lookupDatasourceOfQuery(Query _query)
        {
            Query                   query = _query;
            QueryBuildDataSource    qbds;
            int                     dsIterator;
            Map                     map = new Map(Types::String, Types::String);
            ;

            for (dsIterator = 1; dsIterator <= query.dataSourceCount(); dsIterator++)
            {
                qbds = query.dataSourceNo(dsIterator);
                map.insert(qbds.name(), qbds.name());
            }

            return pickList(map, "Data source", "Data sources");
        }

    Read the article

  • Can someone explain to me why my output is this? And how would I correct my output?

    - by user342231
    In this slice of code I get an output of:

        bbb 55 66 77 88
        aaa

    The output I expect and want is:

        bbb 55 66 77 88
        bbb

    because I reassign ss from log[0] to log[1]. So my question is: why is the output different from what I expect, and how do I change it to what I want?

        int w, x, y, z;
        stringstream ss (stringstream::in | stringstream::out);
        string word;
        string log[2];
        log[0] = "aaa 11 22 33 44";
        log[1] = "bbb 55 66 77 88";

        ss << log[0];
        ss >> word;
        int k = 0;
        ss >> w >> x >> y >> z;

        k++;
        ss << log[k];
        cout << log[k] << endl;
        ss >> word;
        cout << word << endl;
        return 0;

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008

    - by pinaldave
    After successfully delivering many corporate trainings as well as private training, Solid Quality Mentors, India is launching public training in Hyderabad for SQL Server 2008 and SharePoint 2010. This is going to be one of the most unique events in India where Solid Quality Mentors are offering public classes.

    I will be leading the training on Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning. This intensive, 3-day course intends to give attendees an in-depth look at Query Optimization and Performance Tuning in SQL Server 2005 and 2008. Designed to prepare SQL Server developers and administrators for a transition into SQL Server 2005 or 2008, the course covers the best practices for a variety of essential tasks in order to maximize performance. At the end of each day of the course, there will be discussions about your real-world problems to find appropriate solutions.

    Note: Scroll down for course fees, discount, dates and location. Do not forget to take advantage of the discount code 'SQLAuthority'.

    The training premises are very well-equipped, with 1:1 computers. Every participant will be provided with printed course materials. I will pick up your entire lunch tab, and we will have lots of SQL talk together. The best participant will receive a special gift at the end of the course.

    Even though the quality of the material to be delivered together with the course will be of an extremely high standard, the course fees are set at a very moderate rate. The fee for the course is INR 14,000/person for the whole 3-day convention. At the rate of 1 USD = 44 INR, this fee converts to less than USD 300. At this rate, it is entirely possible to fly from anywhere in the world to India, take the training, and still save handsome pocket money. It would be even better if you register using the discount code "SQLAuthority", as you will instantly get an INR 3,000 discount, reducing the total cost of the training to INR 11,000/person for the whole 3-day course. This is a one-time offer and will not be available in the future. Please note that there will be a 10.3% service tax on course fees.

    To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at [email protected] for any additional information and clarification.

    Training Date and Time: May 12-14, 2010, 10 AM - 6 PM.

    Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad - 500 081.

    The details of the course are listed below.

    Day 1: Strengthen the basics, along with SQL Server 2005/2008 new features
        Module 01: Subqueries, Ranking Functions, Joins and Set Operations
        Module 02: Table Expressions
        Module 03: TOP and APPLY
        Module 04: SQL Server 2008 Enhancements

    Day 2: Query Optimization & Performance Tuning 1
        Module 05: Logical Query Processing
        Module 06: Query Tuning
        Module 07: Introduction to the Query Processor
        Module 08: Review of common query coding which causes poor performance

    Day 3: Query Optimization & Performance Tuning 2
        Module 09: SQL Server Indexing and index maintenance
        Module 10: Plan Guides, query hints, UDFs, and Computed Columns
        Module 11: Understanding SQL Server Execution Plans
        Module 12: Real World Index and Optimization Tips

    Download the complete PDF brochure. We are also going to have SharePoint 2010 training by Joy Rathnayake on 10-11 May. All the details regarding the discount apply to that as well.
Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database.

    This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that could be sent along with a bug report that we can immediately replay on our machines to let us debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just the population of a single database.

    Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.

    Read the article

  • "Enumeration yielded no results" When using Query Syntax in C#

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure and what exactly is happening. Two tables are being used:

        DtMapGuestDepartment as Table 1
        DtDepartment as Table 2

        var dept_list = from map in DtMapGuestDepartment.AsEnumerable()
                        where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
                        join dept in DtDepartment.AsEnumerable()
                            on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
                        select dept.Field<string>("DEPARTMENT_ID");

    I am performing this query on DataTables and expect it to return a DataTable. Here I also want to select distinct departments from Table 1, which will be my next quest. Please answer that too if possible.

    Read the article

  • Using variable from Code-behind in SQL Query, asp.net C#

    - by Carl
    I am trying to use a variable from the code-behind page in a SqlDataSource WHERE statement. I am using Membership.GetUser().ProviderUserKey to obtain the UserId of the logged-in user, and I would like to only show records that have this UserId as a foreign key. How can I write the SQL for this? Thanks much, Carl

    Here is what I have so far.

    SQL query:

        SelectCommand="SELECT [UserID], [CarID], [Make], [Model], [Year] FROM [Vehicle]"

    Code-behind:

        String user2 = ((Guid)Membership.GetUser().ProviderUserKey).ToString();

    And I want to add WHERE UserID = user2 to the SQL query.
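
    A sketch of the usual pattern, assuming the value is supplied to the data source as a named parameter rather than concatenated into the string:

        SELECT [UserID], [CarID], [Make], [Model], [Year]
        FROM [Vehicle]
        WHERE [UserID] = @UserID

    The code-behind would then assign the ProviderUserKey value to @UserID before the select runs, for example from the SqlDataSource's Selecting event via e.Command.Parameters, with a matching parameter declared in SelectParameters (hedged - the exact wiring depends on how the page is set up).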

    Read the article

  • MySQL query returns different set of results on two identical databases

    - by 1nsane
    I exported a live MySQL database (running mysql 5.0.45) to a local copy (running mysql 5.1.33) with no errors upon import. There is a view in the database, that when executed locally, returns a different set of data than when executed remotely. It's returning 32 results instead of 63. When I execute the raw sql, the same problem occurs. I've inspected the data in all tables being joined, and the counts are the same. The query is simple and has no where conditions - but about 10 joins. Aside from the differences in mysql versions... I can't find any reason that this query would return different results between databases... since they are effectively exact copies. Has anyone experienced a problem like this before?

    Read the article

  • How to perform a Linq2Sql query on the following dataset

    - by Bas
    I have the following tables:

        Person(Id, FirstName, LastName)
        {
            (1, "John",  "Doe")
            (2, "Peter", "Svendson")
            (3, "Ola",   "Hansen")
            (4, "Mary",  "Pettersen")
        }

        Sports(Id, Name)
        {
            (1, "Tennis")
            (2, "Soccer")
            (3, "Hockey")
        }

        SportsPerPerson(Id, PersonId, SportsId)
        {
            (1, 1, 1)
            (2, 1, 3)
            (3, 2, 2)
            (4, 2, 3)
            (5, 3, 2)
            (6, 4, 1)
            (7, 4, 2)
            (8, 4, 3)
        }

    Looking at the tables, we can conclude the following facts:

        John plays Tennis
        John plays Hockey
        Peter plays Soccer
        Peter plays Hockey
        Ola plays Soccer
        Mary plays Tennis
        Mary plays Soccer
        Mary plays Hockey

    Now I would like to create a Linq2Sql query which retrieves the following: get all Persons who play Hockey and Soccer. Executing the query should return: Peter and Mary. Does anyone have any ideas on how to approach this in Linq2Sql?
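
    One way to reason about it before translating to LINQ is the equivalent SQL - a standard relational-division sketch that groups per person and requires both sport names to be present; the Linq2Sql version then boils down to a Where over two Any calls (or an All over the required sports):

        SELECT p.Id, p.FirstName, p.LastName
        FROM Person p
        JOIN SportsPerPerson spp ON spp.PersonId = p.Id
        JOIN Sports s           ON s.Id = spp.SportsId
        WHERE s.Name IN ('Hockey', 'Soccer')
        GROUP BY p.Id, p.FirstName, p.LastName
        HAVING COUNT(DISTINCT s.Name) = 2;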

    Read the article

  • Linq query challenge

    - by vdh_ant
    My table structure is as follows:

        Person 1-M PersonAddress
        Person 1-M PersonPhone
        Person 1-M PersonEmail
        Person 1-M Contract
        Contract M-M Program
        Contract M-1 Organization

    At the end of this query I need a populated object graph where each person has their PersonAddresses, PersonPhones, PersonEmails and Contracts - and each Contract has its respective Programs.

    Now, I had the following query, and I thought it was working great, but it has a couple of problems:

        from people in ctx.People.Include("PersonAddress")
                                 .Include("PersonLandline")
                                 .Include("PersonMobile")
                                 .Include("PersonEmail")
                                 .Include("Contract")
                                 .Include("Contract.Program")
        where people.Contract.Any(
                  contract => (param.OrganizationId == contract.OrganizationId) &&
                              contract.Program.Any(
                                  contractProgram => (param.ProgramId == contractProgram.ProgramId)))
        select people;

    The problem is that it filters the Person to the criteria, but not the Contracts or the Contracts' Programs. It brings back all Contracts that each person has, not just the ones with an OrganizationId of x, and the same goes for each of those Contracts' Programs respectively. What I want is only the people that have at least one contract with an OrgId of x, where that contract has a Program with an Id of y... and for the returned object graph to contain only the contracts that match, and only the programs within those contracts that match. I kind of understand why it's not working, but I don't know how to change it so that it does work... This is my attempt so far:

        from people in ctx.People.Include("PersonAddress")
                                 .Include("PersonLandline")
                                 .Include("PersonMobile")
                                 .Include("PersonEmail")
                                 .Include("Contract")
                                 .Include("Contract.Program")
        let currentContracts = from contract in people.Contract
                               where (param.OrganizationId == contract.OrganizationId)
                               select contract
        let currentContractPrograms = from contractProgram in currentContracts
                                      let temp = from x in contractProgram.Program
                                                 where (param.ProgramId == contractProgram.ProgramId)
                                                 select x
                                      where temp.Any()
                                      select temp
        where currentContracts.Any() && currentContractPrograms.Any()
        select new Person
        {
            PersonId = people.PersonId,
            FirstName = people.FirstName,
            ..., ....,
            MiddleName = people.MiddleName,
            Surname = people.Surname,
            ..., ....,
            Gender = people.Gender,
            DateOfBirth = people.DateOfBirth,
            ..., ....,
            Contract = currentContracts,
            ...
        }; // This doesn't work

    But this has several problems (where the Person type is an EF object):

        1. I am left to do the mapping by myself, which in this case is quite a lot to map.
        2. Whenever I try to map a list to a property (i.e. Scholarship = currentScholarships), it says I can't, because an IEnumerable is trying to be cast to an EntityCollection.
        3. Include doesn't work.

    Hence, how do I get this to work? Keep in mind that I am trying to do this as a compiled query, so I think that means anonymous types are out.

    Read the article

  • I can't make this query work with SUM function

    - by Mehper C. Palavuzlar
    This query gives an error:

        select ep,
               case
                 when ob is null and b2b_ob is null then 'a'
                 when ob is not null or b2b_ob is not null then 'b'
                 else null
               end as type,
               sum(b2b_d + b2b_t - b2b_i) as sales
        from table
        where ...
        group by ep, type

    Error:

        ORA-00904: "TYPE": invalid identifier

    When I run it with group by ep, the error message becomes:

        ORA-00979: not a GROUP BY expression

    The whole query works OK if I remove the lines sum(b2b_d + b2b_t - b2b_i) as sales and group by ..., so the problem should be related to the SUM and GROUP BY functions. How can I make this work? Thanks in advance for your help.
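
    A sketch of the usual Oracle fix: a column alias defined in the SELECT list cannot be referenced in GROUP BY, so the CASE expression has to be repeated there (or the query wrapped in an inline view that exposes type as a real column):

        select ep,
               case
                 when ob is null and b2b_ob is null then 'a'
                 when ob is not null or b2b_ob is not null then 'b'
                 else null
               end as type,
               sum(b2b_d + b2b_t - b2b_i) as sales
        from table
        where ...
        group by ep,
                 case
                   when ob is null and b2b_ob is null then 'a'
                   when ob is not null or b2b_ob is not null then 'b'
                   else null
                 end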

    Read the article
