Search Results

Search found 1369 results on 55 pages for 'where clause'.

Page 22/55 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • LGPL and Dual Licensing Ajax Library

    - by Thomas Hansen
    Hi guys, I'm the previous founder of Gaiaware and Gaia Ajax Widgets. When I worked there we had this argument (which I have confirmed with some very smart FOSS people is correct) that when using a GPL Ajax library you're basically "distributing" the JavaScript, which in turn makes the GPL's viral clause kick in and forces people to purchase a proprietary license if they're going to build closed-source software.

    Now I'm in the LGPL world with Ra-Ajax, which is an LGPL-licensed library, and I have no intention of creating a GPL-licensed library, since I strongly believe the LGPL is the "enabler" of the Open Web. But something interesting has happened which I think might still give me a "business model" here: the linking clause of the LGPL, which (paraphrased) goes something like "if you link to an LGPL-licensed work, there are no restrictions on your own derived works."

    We started creating something we call Ajax Starter-Kits, which are effectively "project kickstarters": you can download a finished project/solution with pre-done boilerplate code for problems such as Ajax DataGrids, Ajax calendar applications, Ajax TreeView applications, etc. The funny thing is that our users would NOT "link" to these; they would effectively BE our users' applications.

    So, to wrap up my question: would this force users of our LGPL-licensed Ajax Starter-Kits to LGPL-license their own work? If it does, we have a business model (and I get very happy); if not, I'd just have to hope people would still like to pay us those $29 for the Starter-Kits to support the project... ;) Help rewarded with extreme gratitude...

    Read the article

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance.

    [Data] has a predictable structure and a simple clustered index on the primary key:

        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

    My query joins the table to itself, looking for a certain kind of "overlapping" record:

        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range

    [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the WHERE clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan shows two parallel Clustered Index Scans, even though the clustered index is only on ID, which isn't part of the query's conditions, only its output.

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the WHERE clause makes no difference either. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning, but we all know it is not as optimized as it could be, and without a proper index I can expect performance to get much worse with additional data. What gives?
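    One thing commonly tried in this situation (a sketch only, not a confirmed fix for this table) is a narrower index keyed on the columns the WHERE clause actually filters on, with the join columns carried as included columns; the names below are just the placeholders from the question:

        -- Hypothetical index: seek on A plus the C range, cover the remaining join columns.
        CREATE INDEX [IDX_A_C_covering] ON [dbo].[Data] ([A], [C])
            INCLUDE ([B], [D], [E], [F], [ID]);

    Whether this beats the clustered index scan still has to be verified against the actual execution plan.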

    Read the article

  • LINQ Query Performance: comparing compiled vs. non-compiled queries

    - by AG.
    Hello guys, I was wondering: if I extract the common where clause into a shared expression, would it make my queries much faster when I have, say, something like 10 LINQ queries on a collection with exactly the same first part of the where clause? I have put together a small example to explain:

        public class Person
        {
            public string First { get; set; }
            public string Last { get; set; }
            public int Age { get; set; }
            public String Born { get; set; }
            public string Living { get; set; }
        }

        public sealed class PersonDetails : List<Person> { }

        PersonDetails d = new PersonDetails();
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Joe", Last = "Bloggs", Living = "London" });
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Foo", Last = "Bar", Living = "NewYork" });

        Expression<Func<Person, bool>> exp = (a) => a.Age == 29;
        Func<Person, bool> commonQuery = exp.Compile();

        var lx = from y in d
                 where commonQuery.Invoke(y) && y.Living == "London"
                 select y;

        var bx = from y in d
                 where y.Age == 29 && y.Living == "NewYork"
                 select y;

        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", lx.Single().Age, lx.Single().First, lx.Single().Last, lx.Single().Living, lx.Single().Born);
        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", bx.Single().Age, bx.Single().First, bx.Single().Last, bx.Single().Living, bx.Single().Born);

    So, can some of the gurus here give me some advice: is it good practice to write the query in the style of lx (shared compiled expression) or bx (inline condition)? Any input would be highly appreciated. Thanks, AG
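    For comparison, a minimal sketch reusing the Person type above: for LINQ to Objects a plain delegate is enough to share the common predicate, and building an Expression only to call Compile() just adds a runtime compilation step without speeding up the in-memory query:

        // Shared predicate as an ordinary delegate; no Expression/Compile needed for in-memory queries.
        Func<Person, bool> isTwentyNine = p => p.Age == 29;
        var london  = d.Where(p => isTwentyNine(p) && p.Living == "London");
        var newYork = d.Where(p => isTwentyNine(p) && p.Living == "NewYork");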

    Read the article

  • LINQ to SQL - Left Outer Join with multiple join conditions

    - by dan
    I have the following SQL which I am trying to translate to LINQ:

        SELECT f.value
        FROM period AS p
        LEFT OUTER JOIN facts AS f ON p.id = f.periodid AND f.otherid = 17
        WHERE p.companyid = 100

    I have seen the typical implementation of the left outer join (i.e. into x from y in x.DefaultIfEmpty() etc.) but am unsure how to introduce the other join condition (AND f.otherid = 17).

    EDIT: Why is the AND f.otherid = 17 condition part of the JOIN instead of the WHERE clause? Because f may not exist for some rows and I still want those rows to be included. If the condition is applied in the WHERE clause, after the JOIN, then I don't get the behaviour I want. Unfortunately this:

        from p in context.Periods
        join f in context.Facts on p.id equals f.periodid into fg
        from fgi in fg.DefaultIfEmpty()
        where p.companyid == 100 && fgi.otherid == 17
        select f.value

    seems to be equivalent to this:

        SELECT f.value
        FROM period AS p
        LEFT OUTER JOIN facts AS f ON p.id = f.periodid
        WHERE p.companyid = 100 AND f.otherid = 17

    which is not quite what I'm after.
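    A common workaround for this shape of query (a sketch only; the column types are assumed) is to filter the joined sequence before DefaultIfEmpty, so the extra condition stays part of the LEFT JOIN rather than moving into the WHERE clause:

        var q = from p in context.Periods
                where p.companyid == 100
                join f in context.Facts.Where(f => f.otherid == 17)
                    on p.id equals f.periodid into fg
                from fgi in fg.DefaultIfEmpty()
                select new { p.id, Fact = fgi };   // fgi is null where no matching fact exists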

    Read the article

  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using SQL Server 2008 full text search and I am having serious performance issues depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records, and there is a covering index on table1 with all the fields in the WHERE clause; I have simplified the statements, so forgive any syntax issues):

        -- Scenario 1: about 10 seconds (very slow)
        select * from table1 as t1
        where t1.field1 = 90 and t1.field2 = 'something'
        and exists (select top 1 * from containstable(table1, *, 'something') as t2
                    where t2.[KEY] = t1.id)

        -- Scenario 2: about 10 seconds (very slow)
        select * from table1 as t1
        join containstable(table1, *, 'something') as t2 on t2.[KEY] = t1.id
        where t1.field1 = 90 and t1.field2 = 'something'

        -- Scenario 3: a fraction of a second (super fast)
        declare @tbl table (id uniqueidentifier primary key)
        insert into @tbl select [KEY] from containstable(table1, *, 'something')

        select * from table1 as t1
        where t1.field1 = 90 and t1.field2 = 'something'
        and exists (select id from @tbl as tbl where id = t1.id)

    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or WHERE condition of a SELECT statement that also has other conditions, the performance is really bad, and in Profiler the number of reads from the database goes through the roof. But if I first do the full text search, put the results in a table variable, and use that variable everywhere, everything goes super fast, and the number of reads is also much lower. In the "bad" scenarios it somehow seems to get stuck in a loop which causes it to read many times from the database, but of course I don't understand why.

    So the first question is: why is that happening? And the second question is: how scalable are table variables? What if the search returns tens of thousands of records — will it still be fast? Any ideas? Thanks

    Read the article

  • Selenium RC: how to capture/handle error?

    - by KenBurnsFan1
    Hi, my test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run I can see about 10% of the calls produce "Proxy error: 502" ("Bad Gateway"); however, the errors are not captured by my catch-all "except Exception" clause — i.e., instead of writing 'error' in the appropriate row of output.csv, they get passed to the else clause and produce a short piece of HTML that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network path issue.

    Question: can the script be made to recognize the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative I can think of is to apply re.search("Proxy error: 502") after get_html_source as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and retry sel.open(row[0]) on the URL which produced the 502. Any advice would be much appreciated. Thanks!

        # python 2.6
        from selenium import selenium
        import unittest, time, re, csv, logging

        class Untitled(unittest.TestCase):
            def setUp(self):
                self.verificationErrors = []
                self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com")
                self.selenium.start()
                self.selenium.set_timeout("60000")

            def test_untitled(self):
                sel = self.selenium
                spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb'))
                for row in spamReader:
                    try:
                        sel.open(row[0])
                    except Exception:
                        ofile = open('output.csv', 'ab')
                        ofile.write("error" + '\n')
                        ofile.close()
                    else:
                        time.sleep(5)
                        html = sel.get_html_source()
                        ofile = open('output.csv', 'ab')
                        ofile.write(html.encode('utf-8') + '\n')
                        ofile.close()

            def tearDown(self):
                self.selenium.stop()
                self.assertEqual([], self.verificationErrors)

        if __name__ == "__main__":
            unittest.main()
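    One way to implement the retry idea from the question (a sketch only — the constant and the 502 check are assumptions, not tested against this proxy setup) is to wrap the body of the loop like this:

        # Hypothetical retry loop: re-open the same URL when the returned page looks like a
        # proxy 502, backing off for a minute between attempts before giving up on that row.
        MAX_RETRIES = 3
        for attempt in range(MAX_RETRIES):
            sel.open(row[0])
            time.sleep(5)
            html = sel.get_html_source()
            if not re.search(r"Proxy error: 502", html):
                break            # page loaded cleanly, keep this html
            time.sleep(60)       # sporadic network problem: wait, then retry the same URL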

    Read the article

  • Which LINQ expression is faster

    - by Vlad Bezden
    Hi all, in the following code:

        public class Person
        {
            public string Name { get; set; }
            public uint Age { get; set; }
            public Person(string name, uint age)
            {
                Name = name;
                Age = age;
            }
        }

        void Main()
        {
            var data = new List<Person>{
                new Person("Bill Gates", 55),
                new Person("Steve Ballmer", 54),
                new Person("Steve Jobs", 55),
                new Person("Scott Gu", 35)};

            // 1st approach
            data.Where (x => x.Age > 40).ToList().ForEach(x => x.Age++);

            // 2nd approach
            data.ForEach(x => { if (x.Age > 40) x.Age++; });

            data.ForEach(x => Console.WriteLine(x));
        }

    In my understanding the 2nd approach should be faster, since it iterates through each item once, while the first approach makes two passes: the Where clause, then ForEach on the subset of items from the Where clause. However, internally it might be that the compiler translates the 1st approach into the 2nd anyway, and they will have the same performance. Any suggestions or ideas? I could do profiling as suggested, but I want to understand what is going on at the compiler level — whether those two lines of code are the same to the compiler, or whether the compiler treats them literally. Thanks in advance for your help.
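    Since the honest answer usually comes from measuring, here is a minimal timing sketch (Stopwatch setup assumed, list size made up) rather than a claim about either form: in practice the compiler does not rewrite the first form into the second — Where stays a separate lazy iterator that ToList() then materializes before ForEach runs.

        // Hypothetical micro-benchmark: time both forms on the same data.
        var sw = System.Diagnostics.Stopwatch.StartNew();
        data.Where(x => x.Age > 40).ToList().ForEach(x => x.Age++);
        sw.Stop();
        Console.WriteLine("Filtered pass: {0} ticks", sw.ElapsedTicks);

        sw = System.Diagnostics.Stopwatch.StartNew();
        data.ForEach(x => { if (x.Age > 40) x.Age++; });
        sw.Stop();
        Console.WriteLine("Single pass:   {0} ticks", sw.ElapsedTicks);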

    Read the article

  • How can I factor out repeated expressions in an SQL query? Column aliases don't seem to be the ticket

    - by Weston C
    So, I've got a query that looks something like this: SELECT id, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%b %d %Y') as callDate, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='999999-abc-blahblahblah' AND CONVERT_TZ(callTime,'+0:00','-7:00') >= '2010-04-25' AND CONVERT_TZ(callTime,'+0:00','-7:00') <= '2010-05-25' If you're like me, you probably start thinking that maybe it would improve readability and possibly the performance of this query if I wasn't asking it to compute CONVERT_TZ(callTime,'+0:00','-7:00') four separate times. So I try to create a column alias for that expression and replace further occurances with that alias: SELECT id, CONVERT_TZ(callTime,'+0:00','-7:00') as callTimeZoned, DATE_FORMAT(callTimeZoned,'%b %d %Y') as callDate, DATE_FORMAT(callTimeZoned,'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='5999999-abc-blahblahblah' AND callTimeZoned >= '2010-04-25' AND callTimeZoned <= '2010-05-25' This is when I learned, to quote the MySQL manual: Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is evaluated, the column value may not yet have been determined. So, that approach would seem to be dead in the water. How is someone writing queries with recurring expressions like this supposed to deal with it?
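    The standard workaround (a sketch, reusing the column names above) is to compute the expression once in a derived table and filter on the alias in the outer query, since the outer WHERE is evaluated against the inner SELECT's output:

        SELECT id,
               DATE_FORMAT(callTimeZoned, '%b %d %Y') AS callDate,
               DATE_FORMAT(callTimeZoned, '%H:%i')    AS callTimeOfDay,
               SEC_TO_TIME(callLength)                AS callLength
        FROM (
            SELECT id, callLength, customerCode,
                   CONVERT_TZ(callTime, '+0:00', '-7:00') AS callTimeZoned
            FROM cs_calldata
        ) AS c
        WHERE customerCode = '999999-abc-blahblahblah'
          AND callTimeZoned >= '2010-04-25'
          AND callTimeZoned <= '2010-05-25';

    Note that hiding the conversion inside a derived table can also change how any index on callTime is used, so the performance effect is worth checking separately.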

    Read the article

  • Writing a catch block with cleanup operations in Java

    - by kedarmhaswade
    I was not able to find any advice on catch blocks in Java that involve cleanup operations which themselves could throw exceptions. The classic example is stream.close(), which we usually call in the finally clause; if that throws an exception, we either ignore it by wrapping it in a try-catch block or declare it to be rethrown. But in general, how do I handle cases like:

        public void doIt() throws ApiException { // ApiException is my "higher level" exception
            try {
                doLower();
            } catch (Exception le) {
                doCleanup(); // this throws exceptions too, which I can't communicate to the caller
                throw new ApiException(le);
            }
        }

    I could do:

        catch (Exception le) {
            try {
                doCleanup();
            } catch (Exception e) {
                // ignore? log?
            }
            throw new ApiException(le); // I must throw le
        }

    But that means I will have to do some log analysis to understand why the cleanup failed. If I did:

        catch (Exception le) {
            try {
                doCleanup();
            } catch (Exception e) {
                throw new ApiException(e);
            }
        }

    it results in losing the le that got me into the catch block in the first place. What are some of the idioms people use here? Declare the lower-level exceptions in the throws clause? Ignore the exceptions during the cleanup operation?
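    If a Java 7+ runtime is an option (an assumption — the question does not say), the cleanup failure can travel with the primary failure as a suppressed exception instead of being logged and dropped; a sketch using the same names as above:

        catch (Exception le) {
            ApiException api = new ApiException(le);   // the primary failure stays the cause
            try {
                doCleanup();
            } catch (Exception cleanupFailure) {
                api.addSuppressed(cleanupFailure);      // cleanup failure rides along in the stack trace
            }
            throw api;
        }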

    Read the article

  • JPA2 Criteria API creates invalid SQL when using groupBy

    - by Stephan
    JPA2 with the Criteria API seems to generate invalid SQL for PostgreSQL. For this code: Root<DBObjectAccessCounter> from = query.from(DBObjectAccessCounter.class); Path<DBObject> object = from.get(DBObjectAccessCounter_.object); Expression<Long> sum = builder.sumAsLong(from.get(DBObjectAccessCounter_.count)); query.multiselect(object, sum).groupBy(object); I get the following exception: ERROR: column "dbobject1_.id" must appear in the GROUP BY clause or be used in an aggregate function The generated SQL is: select dbobjectac0_.object_id as col_0_0_, sum(dbobjectac0_.count) as col_1_0_, dbobject1_.id as id1001_, dbobject1_.name as name1013_, dbobject1_.lastChanged as lastChan2_1013_, dbobject1_.type_id as type3_1013_ from DBObjectAccessCounter dbobjectac0_ inner join DBObject dbobject1_ on dbobjectac0_.object_id=dbobject1_.id group by dbobjectac0_.object_id Obviously, the first item of the select statement (dbobjectac0_.object_id) does not match the group by clause. Simplified example It does not even work for this simple example: Root<DBObjectAccessCounter> from = query.from(DBObjectAccessCounter.class); Path<DBObject> object = from.get(DBObjectAccessCounter_.object); query.select(object).groupBy(object); which returns select dbobject1_.id as id924_, dbobject1_.name as name933_, dbobject1_.lastChanged as lastChan2_933_, dbobject1_.type_id as type3_933_ from DBObjectAccessCounter dbobjectac0_ inner join DBObject dbobject1_ on dbobjectac0_.object_id=dbobject1_.id group by dbobjectac0_.object_id Does anyone know how to fix this?
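    One workaround that is sometimes suggested for this behaviour (a sketch only — DBObject_.id is an assumed metamodel attribute, and the entities would then have to be fetched in a second step) is to group and select by the id path rather than the whole entity, so the SELECT list and the GROUP BY line up:

        Path<Long> objectId = from.get(DBObjectAccessCounter_.object).get(DBObject_.id);
        query.multiselect(objectId, builder.sumAsLong(from.get(DBObjectAccessCounter_.count)))
             .groupBy(objectId);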

    Read the article

  • Non-Latin character ordering in databases with ORDER BY

    - by nybon
    I just found some strange behavior in the database's ORDER BY clause. In string comparison, I expected characters such as '[' and '_' to be greater than Latin characters such as 'i', considering their order in the ASCII table. However, the sorting results from the database's ORDER BY clause differ from my expectation. Here's my test:

        SQLite version 3.6.23
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> create table products(name varchar(10));
        sqlite> insert into products values('ipod');
        sqlite> insert into products values('iphone');
        sqlite> insert into products values('[apple]');
        sqlite> insert into products values('_ipad');
        sqlite> select * from products order by name asc;
        [apple]
        _ipad
        iphone
        ipod

    This behavior is different from Java's string comparison (which cost me some time to track down). I can verify this in both SQLite 3.6.23 and Microsoft SQL Server 2005. I did some web searching but cannot find any related documentation. Could someone shed some light on it? Is it a SQL standard? Where can I find some information about this? Thanks in advance.
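    For what it's worth, the ordering each engine produces is governed by its collation rather than by raw character codes; a sketch of how to request a plain byte-wise comparison in SQL Server (the collation name here is just one example):

        SELECT name FROM products ORDER BY name COLLATE Latin1_General_BIN;

    SQLite's default collation is already BINARY (byte order), which is why the list above comes back [apple], _ipad, iphone, ipod — exactly the byte values 0x5B, 0x5F, 0x69.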

    Read the article

  • How to preserve order of temp table rows when inner joined with another table?

    - by Triynko
    Does an SQL Server "join" preserve any kind of row order consistently (i.e. that of the left table or that of the right table)?

    Pseudocode:

        create table #p (personid bigint);
        foreach (id in personid_list)
            insert into #p (personid) values (id)
        select id from users inner join #p on users.personid = #p.id

    Suppose I have a list of IDs that correspond to person entries. Each of those IDs may correspond to zero or more user accounts (since each person can have multiple accounts). To quickly select columns from the users table, I populate a temp table with person ids, then inner join it with the users table. I'm looking for an efficient way to ensure that the order of the results in the join matches the order of the ids as they were inserted into the temp table, so that the user list that's returned is in the same order as the person list as it was entered. I've considered the following alternatives:

        1. using "#p inner join users", in case the left table's order is preserved
        2. using "#p left join users where id is not null", in case a left join preserves order and the inner join doesn't
        3. using "create table (rownum int, personid bigint)", inserting an incrementing row number as the temp table is populated, so the results can be ordered by rownum in the join
        4. using an SQL Server equivalent of the "order by order of [tablename]" clause available in DB2

    I'm currently using option 3, and it works... but I hate the idea of using an order by clause for something that's already ordered. I just don't know if the temp table preserves the order in which the rows were inserted, or how the join operates and what order the results come out in.
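    For reference, a sketch of the option-3 approach described above (table and column names assumed) — without an ORDER BY a join's output order is simply not guaranteed, so the row number is what carries the ordering:

        CREATE TABLE #p (rownum int IDENTITY(1,1), personid bigint);

        INSERT INTO #p (personid) VALUES (@id);   -- repeated once per id, in list order

        SELECT u.id
        FROM users AS u
        INNER JOIN #p ON u.personid = #p.personid
        ORDER BY #p.rownum;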

    Read the article

  • Ruby on Rails - Primary and Foreign key

    - by Eef
    Hey, I am creating a site in Ruby on Rails. I have two models, a User model and a Transaction model. These models both belong to an account, so they both have a field called account_id. I am trying to set up an association between them like so:

        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user
        end

    I am using these associations like so:

        user = User.find(1)
        transactions = user.transactions

    At the moment the application is trying to find the transactions with the user_id; here is the SQL it generates:

        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 1)

    This is incorrect, as I would like to find the transactions via the account_id. I have tried setting up the associations like so:

        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions, :primary_key => :account_id, :class_name => "Transaction"
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user, :foreign_key => :account_id, :class_name => "User"
        end

    This almost achieves what I am looking for and generates the following SQL:

        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 104)

    The number 104 is the correct account_id, but it is still querying the transaction table for a user_id field. Could someone give me some advice on how to set up the associations so that they query the transaction table for the account_id instead of the user_id, resulting in SQL like this:

        SELECT * FROM `transactions` WHERE (`transactions`.account_id = 104)

    Cheers, Eef
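    A sketch of one way this is usually wired up (an assumption, not a verified fix for this app): tell both associations which local column to read and which foreign column to match, so the generated WHERE clause uses account_id on both sides:

        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions, :primary_key => :account_id, :foreign_key => :account_id
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user, :primary_key => :account_id, :foreign_key => :account_id
        end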

    Read the article

  • Using conditionals in LINQ programmatically

    - by Mike B
    I was just reading a recent question on using conditionals in LINQ, and it reminded me of an issue I have not been able to resolve. When building LINQ to SQL queries programmatically, how can this be done when the number of conditionals is not known until runtime? For instance, in the code below the first clause creates an IQueryable that, if executed, would select all the tasks (called issues) in the database; the second clause refines that to just issues assigned to one department, if one has been selected in a combobox (which has its selected item bound to the departmentToShow property). How could I do this using the SelectedItems collection instead?

        IQueryable<Issue> issuesQuery;

        // Will select all tasks
        issuesQuery = from i in db.Issues
                      orderby i.IssDueDate, i.IssUrgency
                      select i;

        // Filters out all other departments if one is selected
        if (departmentToShow != "All")
        {
            issuesQuery = from i in issuesQuery
                          where i.IssDepartment == departmentToShow
                          select i;
        }

    By the way, the above code is simplified; in the actual code there are about a dozen clauses that refine the query based on the user's search and filter settings.
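    A sketch of extending the same pattern to a multi-select list (the departmentListBox and selectedDepartments names are made up here): Contains over a local collection is translated by LINQ to SQL into an IN (...) clause, so the filter can still be stacked onto issuesQuery conditionally:

        var selectedDepartments = departmentListBox.SelectedItems.Cast<string>().ToList();
        if (selectedDepartments.Count > 0 && !selectedDepartments.Contains("All"))
        {
            issuesQuery = from i in issuesQuery
                          where selectedDepartments.Contains(i.IssDepartment)
                          select i;
        }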

    Read the article

  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial in (select 2 as trial_id from dual
                                union select 3 from dual
                                union select 4 from dual)
        and my_view.location like ('123-%')

    When I execute this query it returns results which do not conform to the my_view.location like ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL and it returns the same results, despite that field being not-nullable. I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a WITH clause (the results of that query are what the selects-from-dual inline view stands in for). I can modify the query like so and it returns the expected results:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial in (2, 3, 4)
        and my_view.location like ('123-%')

    Unfortunately I do not know the trial values up front (they are queried for in a WITH clause), so I cannot structure my query this way. What am I doing wrong? I will say that the my_view view is composed of 3 other views whose results are combined with UNION ALL, and each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.

    Read the article

  • Doctrine/symfony: getSqlQuery() output in phpMyAdmin/SQL tab

    - by user248959
    Hi, I have created this query, which works OK:

        $q1 = Doctrine_Query::create()
            ->from('Usuario u')
            ->leftJoin('u.AmigoUsuario a ON u.id = a.user2_id OR u.id = a.user1_id')
            ->where("a.user2_id = ? OR a.user1_id = ?", array($id, $id))
            ->andWhere("u.id <> ?", $id)
            ->andWhere("a.estado LIKE ?", 1);
        echo $q1->getSqlQuery();

    The call to getSqlQuery() outputs this clause:

        SELECT s.id AS s_id, s.username AS s_username, s.algorithm AS s_algorithm, s.salt AS s_salt, s.password AS s__password, s.is_active AS s__is_active, s.is_super_admin AS s__is_super_admin, s.last_login AS s__last_login, s.email_address AS s__email_address, s.nombre_apellidos AS s__nombre_apellidos, s.sexo AS s__sexo, s.fecha_nac AS s__fecha_nac, s.provincia AS s_provincia, s.localidad AS s_localidad, s.fotografia AS s_fotografia, s.avatar AS s_avatar, s.avatar_mensajes AS s__avatar_mensajes, s.created_at AS s__created_at, s.updated_at AS s__updated_at, a.id AS a__id, a.user1_id AS a__user1_id, a.user2_id AS a__user2_id, a.estado AS a__estado FROM sf_guard_user s LEFT JOIN amigo_usuario a ON ((s.id = a.user2_id OR s.id = a.user1_id)) WHERE ((a.user2_id = ? OR a.user1_id = ?) AND s.id < ? AND a.estado LIKE ?)

    If I take that clause to the phpMyAdmin SQL tab, I get this error:

        1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '? OR a.user1_id = ?) AND s.id < ? AND a.estado LIKE ?) LIMIT 0, 30' at line 1

    Why am I getting this error? Regards, Javi

    Read the article

  • How to write this function as a PL/pgSQL function?

    - by morpheous
    I am trying to implement some business logic in a PL/pgSQL function. I have hacked together some pseudocode that explains the type of business logic I want to include in the function. Note: this function returns a table, so I can use it in a query like:

        SELECT A.col1, B.col1
        FROM (SELECT * from some_table_returning_func(1, 1, 2, 3)) as A, tbl2 as B;

    The pseudocode of the PL/pgSQL function is below:

        CREATE FUNCTION some_table_returning_func(uid int, type_id int, filter_type_id int, filter_id int)
        RETURNS TABLE AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' + uid;
            ret TABLE;
        BEGIN
            switch (filter_type_id) {
                case 1:
                    switch (filter_id) {
                        case 1:
                            where_clause += ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = filter_id';
                            break;
                        // other cases follow ...
                    }
                    break;
                // other cases follow ...
            }
            // where clause has been built, now run query based on the type
            ret = SELECT [COL1, ... COLN] WHERE where_clause;
            IF (type_id <> 1) THEN
                return ret;
            ELSE
                return select * from another_table_returning_func(ret, 123);
            ENDIF;
        END;
        $$ LANGUAGE plpgsql;

    I have the following questions:

        1. How can I write the function correctly, i.e. EXECUTE the query with the generated WHERE clause and return a table?
        2. How can I write a PL/pgSQL function that accepts a table and an integer and returns a table (another_table_returning_func)?
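    A minimal sketch of the dynamic part of question 1, assuming PostgreSQL 8.4 or later (the table and column names are placeholders): RETURNS TABLE declares the output shape, and RETURN QUERY EXECUTE ... USING runs a dynamically built WHERE clause with the parameter bound safely:

        CREATE OR REPLACE FUNCTION some_table_returning_func(uid int)
        RETURNS TABLE(col1 int, col2 text) AS $$
        DECLARE
            where_clause text := 'tbl1.id = $1';   -- extended with further AND ... pieces as needed
        BEGIN
            RETURN QUERY EXECUTE
                'SELECT t.col1, t.col2 FROM tbl1 t WHERE ' || where_clause
                USING uid;
        END;
        $$ LANGUAGE plpgsql;

    For question 2, a PL/pgSQL function cannot take a table (a result set) as an argument directly; the usual approaches are passing an array of a composite type, or having the second function read from a regular or temporary table.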

    Read the article

  • How do I deconstruct COUNT()?

    - by user151841
    I have a view with some joins in it. I'm doing a select from that view with COUNT(*) as one of the columns of the select, and I'm surprised by the number it's returning. Note that there is no GROUP BY nor aggregate column in the source view that the query is drawing from. How can I take it apart to see how it arrives at this number? I have three columns in the GROUP BY clause:

        SELECT column1, column2, column3, COUNT(*)
        FROM View
        GROUP BY column1, column2, column3

    I get a result like:

        +---------+---------+---------+----------+
        | column1 | column2 | column3 | COUNT(*) |
        +---------+---------+---------+----------+
        | value1  | valueA  | value_a |      103 |
        +---------+---------+---------+----------+
        | value2  | valueB  | value_b |       56 |
        +---------+---------+---------+----------+
        etc.

    I'd like to see how it arrives at that 103, 56, etc. In other words, I want to run a query that returns 103 rows of something, so that I know I've expressed the query properly. I'm double-checking my work. I'm not saying that I think COUNT(*) doesn't work (I know that "SELECT is not broken"); what I want to double-check is exactly what I'm expressing in my query, because I think I've expressed the wrong thing, which would be why I'm getting unexpected values. I need to see more of what I'm actually directing MySQL to count. So should I take them one by one, and try out each value in a WHERE clause? In other words, should I do

        SELECT column1 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column1 FROM View WHERE column1 = 'second_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'second_grouped_value'

    and see whether the row count returned matches the COUNT(*) value in the grouped results? Because of confidentiality, I won't be able to post any of the query or database structure. All I'm asking for is a general technique to see what COUNT(*) is actually counting.
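    A sketch of the drill-down being described: filter on all three grouped columns at once (not just column1), and the number of rows returned should match that group's COUNT(*):

        SELECT *
        FROM View
        WHERE column1 = 'value1'
          AND column2 = 'valueA'
          AND column3 = 'value_a';
        -- expected: 103 rows, matching the first group in the result above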

    Read the article

  • NHibernate query against the key field of a dictionary (map)

    - by Carl Raymond
    I have an object model where a Calendar object has an IDictionary<MembershipUser, Perms> called UserPermissions, where MembershipUser is an object, and Perms is a simple enumeration. This is in the mapping file for Calendar as <map name="UserPermissions" table="CalendarUserPermissions" lazy="true" cascade="all"> <key column="CalendarID"/> <index-many-to-many class="MembershipUser" column="UserGUID" /> <element column="Permissions" type="CalendarPermission" not-null="true" /> </map> Now I want to execute a query to find all calendars for which a given user has some permission defined. The permission is irrelevant; I just want a list of the calendars where a given user is present as a key in the UserPermissions dictionary. I have the username property, not a MembershipUser object. How do I build that using QBC (or HQL)? Here's what I've tried: ISession session = SessionManager.CurrentSession; ICriteria calCrit = session.CreateCriteria<Calendar>(); ICriteria userCrit = calCrit.CreateCriteria("UserPermissions.indices"); userCrit.Add(Expression.Eq("Username", username)); return calCrit.List<Calendar>(); This constructed invalid SQL -- the WHERE clause contained WHERE membership1_.Username = @p0 as expected, but the FROM clause didn't include the MemberhipUsers table. Also, I really had to struggle to learn about the .indices notation. I found it by digging through the NHibernate source code, and saw that there's also .elements and some other dotted notations. Where's a reference to the allowed syntax of an association path? I feel like what's above is very close, and just missing something simple.

    Read the article

  • Entity SQL GROUP BY problem, please help

    - by Zviadi
    Hello, help me please with this simple Entity SQL query:

        var qStr = "SELECT SqlServer.Month(o.DatePaid) as month, SqlServer.Sum(o.PaidMoney) as PaidMoney FROM XACCModel.OrdersIncomes as o group by SqlServer.Month(o.DatePaid)";

    Here's what I have: a simple entity called OrdersIncomes with ID, PaidMoney, DatePaid, and Order_ID properties. I want to select the month and the summed PaidMoney, like this:

        month   PaidMoney
        1       500
        2       700
        3       1200

    The T-SQL looks like this and works fine:

        select MONTH(o.DatePaid), SUM(o.PaidMoney)
        from OrdersIncomes as o
        group by MONTH(o.DatePaid)

    results:

        3   31.0000
        4   127.0000
        5   20.0000
        (3 row(s) affected)

    But the Entity SQL does not work and I don't know what to do. Here is my Entity SQL, which needs refactoring:

        var qStr = "SELECT SqlServer.Month(o.DatePaid) as month, SqlServer.Sum(o.PaidMoney) as PaidMoney FROM XACCModel.OrdersIncomes as o group by SqlServer.Month(o.DatePaid)";

    There's an exception:

        ErrorDescription = "The identifier 'o' is not valid because it is not contained either in an aggregate function or in the GROUP BY clause."

    If I include o in the group by clause, like FROM XACCModel.OrdersIncomes as o group by o, then I don't get summed and aggregated results. Is this a bug? Or what am I doing wrong? Here is the LINQ to Entities query, and it works too:

        var incomeResult = from ic in _context.OrdersIncomes
                           group ic by ic.DatePaid.Month into gr
                           select new { Month = gr.Key, PaidMoney = gr.Sum(i => i.PaidMoney) };

    Read the article

  • ORDER BY in a SQL Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use it for different things, and want it differently ordered. This view however is used for a VERY SPECIFIC use-case which demands a certain order. (It is team standings for a soccer league.) The database is Sql Server 2008 Express, v.10.0.1763.0 on a Windows Server 2003 R2 box. The view is defined as such: CREATE VIEW season.CurrentStandingsOrdered AS SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING FROM season.CurrentStandings ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS, GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the order by clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
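    For context, SQL Server 2005 and later ignore an ORDER BY inside a view defined with TOP 100 PERCENT, so a view's results are never guaranteed to come back sorted; the usual fix (a sketch reusing the view and columns above) is to apply the ordering at query time:

        SELECT *
        FROM season.CurrentStandingsOrdered
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;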

    Read the article

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition: In my code I need to loop through a series of values (about five in a row), and for each value, I need to insert the value into a table and dynamically create a new view using that value as part of the where clause and the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly but only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view named MyView_x5 that has the correct WHERE clause. At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would create the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB, and did the insert and CREATE VIEW statements from my DLL. Same result every time. Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created. Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My express edition is 9.00.1399.06. Not sure about the Enterprise edition but think it is SP2.

    Read the article

  • Questions on Juval Lowy's IDesign C# Coding Standard

    - by Jan
    We are trying to use the IDesign C# Coding Standard. Unfortunately, I found no comprehensive document that explains all the rules it gives, and the book does not always help either. Here are the open questions that remain for me (from chapter 2, Coding Practices):

        No. 26: Avoid providing explicit values for enums unless they are integer powers of 2
        No. 34: Always explicitly initialize an array of reference types using a for loop
        No. 50: Avoid events as interface members
        No. 52: Expose interfaces on class hierarchies
        No. 73: Do not define method-specific constraints in interfaces
        No. 74: Do not define constraints in delegates

    Here's what I think about those:

        No. 26: I thought that providing explicit values would be especially useful when adding new enum members at a later point in time. If these members are added between other already existing members, I would provide explicit values to make sure the integer representation of existing members does not change.
        No. 34: No idea why I would want to do this. I'd say this totally depends on the logic of my program.
        No. 50: I see that there is the alternative of providing "sink interfaces" (simply providing all the "OnXxxHappened" methods up front), but what is the reason to prefer one over the other?
        No. 52: Unsure what he means here. Could this mean "When implementing an interface explicitly in a non-sealed class, consider providing the implementation in a protected virtual method that can be overridden"? (See Programming .NET Components, 2nd Edition, end of the chapter "Interfaces and Class Hierarchies".)
        No. 73: I suppose this is about providing a "where" clause when using generics, but why is this bad on an interface?
        No. 74: I suppose this is about providing a "where" clause when using generics, but why is this bad on a delegate?

    Read the article

  • Why can't I handle a KeyboardInterrupt in Python?

    - by Josh
    I'm writing Python 2.6.6 code on Windows that looks like this:

        try:
            dostuff()
        except KeyboardInterrupt:
            print "Interrupted!"
        except:
            print "Some other exception?"
        finally:
            print "cleaning up...."
        print "done."

    dostuff() is a function that loops forever, reading a line at a time from an input stream and acting on it. I want to be able to stop it and clean up when I hit ctrl-c. What's happening instead is that the code under except KeyboardInterrupt: isn't running at all. The only thing that gets printed is "cleaning up...", and then a traceback is printed that looks like this:

        Traceback (most recent call last):
          File "filename.py", line 119, in <module>
            print 'cleaning up...'
        KeyboardInterrupt

    So, the exception handling code is NOT running, and the traceback claims that a KeyboardInterrupt occurred during the finally clause, which doesn't make sense because hitting ctrl-c is what caused that part to run in the first place! Even the generic except: clause isn't running.

    EDIT: Based on the comments, I replaced the contents of the try: block with sys.stdin.read(). The problem still occurs exactly as described, with the first line of the finally: block running and then printing the same traceback.

    Read the article

  • Oracle select query: index on multiple columns

    - by CC
    Hello. I'm working on an SQL query and trying to optimise it, because it takes too long to execute. I have a few SELECTs with UNION between them. Every SELECT is on the same table but with a different condition in the WHERE clause. Basically I always have something like:

        select * from A where field1 < "toto" and field2 IN (...)
        UNION
        select * from A where field1 > "toto2" and field2 = (...)
        UNION
        ...

    I have an index on field1 (it is a date field, and field2 is a number). Now, when I do the select with only WHERE field1 < '12/12/2010', it does not use the index. I'm using Toad to see the explain plan, and it says:

        SELECT STATEMENT Optimiser Mode = CHOOSE
          TABLE ACCESS FULL

    It is a huge table, and the index on this column is there. Any idea about this optimiser, and why it does not use the index? Another question: if I have a WHERE clause on field1 and field2, should I create only one index, or one index for each field? Thanks a lot.
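    Two things commonly checked in this situation (a sketch under assumptions, not a diagnosis of this particular table): comparing a DATE column with a string relies on implicit conversion and the session's NLS date format, so an explicit TO_DATE with a format mask is safer; and when field1 and field2 are always filtered together, a single composite index is the usual choice rather than one index per column:

        SELECT *
        FROM   A
        WHERE  field1 < TO_DATE('12/12/2010', 'DD/MM/YYYY')
        AND    field2 IN (1, 2, 3);

        CREATE INDEX a_f1_f2_idx ON A (field1, field2);

    Even then, the optimizer may still prefer a full scan if the predicate matches a large share of the rows, which is worth confirming with up-to-date statistics.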

    Read the article
