Search Results

Search found 18542 results on 742 pages for 'nhibernate query'.


  • MySQL - GROUP BY: avoid using a temporary table

    - by jwzk
    The goal of this query is to get a total of unique records (by IP) per ref ID:

        SELECT COUNT(DISTINCT ip), GROUP_CONCAT(ref.id)
        FROM `sess` sess
        JOIN `ref` USING (row_id)
        WHERE sess.time BETWEEN '2010-04-21 00:00:00' AND '2010-04-21 23:59:59'
        GROUP BY ref.id
        ORDER BY sess.time DESC

    The query works fine, but it's using a temporary table. Any ideas?
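
    A likely cause, worth confirming with EXPLAIN: MySQL materializes a temporary table when the ORDER BY expression differs from the GROUP BY expression, as it does here (grouping by ref.id, ordering by sess.time, which is ambiguous under GROUP BY anyway). A minimal sketch of a rewrite that avoids this, assuming the row order can follow the grouped column instead:

        -- Ordering by the grouped column lets MySQL satisfy the sort from the
        -- grouping pass instead of building a temporary table.
        SELECT COUNT(DISTINCT ip), GROUP_CONCAT(ref.id)
        FROM `sess` sess
        JOIN `ref` USING (row_id)
        WHERE sess.time BETWEEN '2010-04-21 00:00:00' AND '2010-04-21 23:59:59'
        GROUP BY ref.id
        ORDER BY ref.id DESC;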


  • NUL character gets inserted at random locations in the XML

    - by Racs
    I am using Oracle 11g, and when I query the data from the database manually it looks OK. But when I view it in Notepad++, there is a special character inserted somewhere in the produced XML. Please see the image below. I tried everything: I deleted some parts of the XML, updated the DB and queried again, but the invalid character just differs in location. Any idea is much appreciated, thanks! Best regards, Racs
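
    If the stray byte really is a NUL (CHR(0) in Oracle) stored in the column, one way to confirm and scrub it, sketched here with a hypothetical table DOC and column XML_DATA:

        -- Find rows containing a NUL character, then strip it.
        SELECT id FROM doc WHERE INSTR(xml_data, CHR(0)) > 0;

        UPDATE doc
        SET xml_data = REPLACE(xml_data, CHR(0))
        WHERE INSTR(xml_data, CHR(0)) > 0;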


  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like to get your ideas on how I can do what I am aiming for. My goal is to create a database with information about the people who work at a place, depending on their profession and skills, and to keep control of salary and projects (how much a project would cost, summing all the hours of work). I have 3 categories which can have subcategories:

        Outsourcing
        Technician: welder, turner, assistant
        Administrative: supervisor, manager

    So each person has their information and the projects they are working on; also, one person may do several jobs... I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION) but I guess there is a better way of doing this.

        create table Employee (
            PRIMARY KEY [Person_ID] int(10),
            [Name] varchar(30),
            [sex] varchar(10),
            [address] varchar(10),
            [profession] varchar(10),
            [Skills_ID] int(10),
            [Proyect_ID] int(10),
            [Salary_ID] int(10),
            [Salary] float
        )

        create table Skills (
            PRIMARY KEY [Skills_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Skills_pay] float(10),
            [Comments] varchar(50)
        )

        create table Proyects (
            PRIMARY KEY [Proyect_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Proyect_name] varchar(10),
            [working_Hours] float(10),
            [Comments] varchar(50)
        )

        create table Salary (
            PRIMARY KEY [Salary_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Proyect_name] varchar(10),
            [working_Hours] float(10),
            [Comments] varchar(50)
        )

    So to get the total cost of a project I would just sum the working hours of each employee involved, plus some extra costs, in an aggregate query. Is there a way to do this more efficiently? What should be added to or removed from this small model? I guess I am missing something in the salary part; maybe I need another table for that?
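
    As a point of comparison for the design above: the two many-to-many relationships in the description (one person may do several jobs; several people work on one project) are usually modeled with junction tables rather than Skills_ID/Proyect_ID columns on the Employee table. A minimal sketch with hypothetical names:

        CREATE TABLE employee (
            employee_id INT PRIMARY KEY,
            name        VARCHAR(30),
            address     VARCHAR(50)
        );

        CREATE TABLE profession (
            profession_id INT PRIMARY KEY,
            name          VARCHAR(30),  -- e.g. welder, turner, supervisor
            category      VARCHAR(30)   -- e.g. Technician, Administrative
        );

        -- Junction table: one employee can hold several professions.
        CREATE TABLE employee_profession (
            employee_id   INT,
            profession_id INT,
            PRIMARY KEY (employee_id, profession_id),
            FOREIGN KEY (employee_id)   REFERENCES employee(employee_id),
            FOREIGN KEY (profession_id) REFERENCES profession(profession_id)
        );

        CREATE TABLE project (
            project_id INT PRIMARY KEY,
            name       VARCHAR(50)
        );

        -- Junction table: hours an employee worked on a project;
        -- project cost = SUM(hours * hourly_rate) over this table.
        CREATE TABLE project_assignment (
            employee_id INT,
            project_id  INT,
            hours       DECIMAL(8,2),
            hourly_rate DECIMAL(8,2),
            PRIMARY KEY (employee_id, project_id),
            FOREIGN KEY (employee_id) REFERENCES employee(employee_id),
            FOREIGN KEY (project_id)  REFERENCES project(project_id)
        );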


  • Trying to Set Up SSH Tunneling To MySQL Server for MySQL Query Browser

    - by Teno
    I'm trying to set up SSH tunneling on a remote web server to another MySQL server so that the database can be browsed easily with MySQL Query Browser. I'm following this page but cannot connect to the MySQL server: http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/

    What I've done:

    1. Logged in to the web server with PuTTY via SSH.

    2. Typed ssh -L 33060:[database]:3306 [myusername]@[webserver_address], where the [...] parts are replaced by the actual information. I was asked for a password, typed it, and got the following message, so it seems the login was successful:

        socket: Protocol not supported
        Last login: .... 2012 from ....
        Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
        The Regents of the University of California. All rights reserved.
        FreeBSD 7.1-RELEASE....
        Welcome to FreeBSD!

    3. Opened MySQL Query Browser in Windows and entered:

        Server Host: localhost
        Port: 33060
        Username: myusername
        Password: mypassword

    And it says:

        Could not connect to the specified instance.
        MySQL Error Number 2003
        Can't connect to MySQL Server on 'localhost' (10061)

    Sorry if this is too basic. Thanks for your information.


  • DNS: something is wrong?

    - by Nickolas R.
    Hello, I am configuring bind9 on a server with two network interfaces; one is connected to the LAN and the other is connected to the Internet through NAT, so bind is not facing the Internet directly. Everything seems to work fine, and clients can do both forward and reverse lookups, but something seems strange. On the server, if I try to ping www.google.com one time, a great amount of network activity is generated, a lot more than one would expect, so I decided to sniff the traffic with tcpdump. When loading the dump into Wireshark I can see about 250 entries with "Standard query A" and "Standard query response". Here are some of the entries from the dump:

        DNS Standard query A www.google.com
        DNS Standard query A blackhole-1.iana.org
        DNS Standard query A blackhole-2.iana.org
        DNS Standard query response
        DNS Standard query A ns2.isc-sns.com
        DNS Standard query A ns1.isc-sns.net
        DNS Standard query A ns3.isc-sns.info
        DNS Standard query response PTR b.iana-servers.net RRSIG
        DNS Standard query A auth2.dns.cogentco.com
        DNS Standard query A ns1.crsnic.net
        DNS Standard query A ns2.nsiregistry.net
        DNS Standard query A ns3.verisign-grs.net
        DNS Standard query A ns4.verisign-grs.net
        DNS Standard query PTR 79.52.19.199.in-addr.arpa

    I do not have much experience with DNS yet, but I am pretty sure that something is wrong. Does anybody have an idea of what is going on?


  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server, from websites being really slow to being unable to load them at all. The server is a dedicated server that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to connect with 50 simultaneous connections to one of our websites that runs a fairly big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The 3 big peaks are when I run a test with loadUI: the first one was 15 simultaneous connections, which still let me load the website, just really slowly; the second one was 40 simultaneous connections, which already made it impossible to load; and the third one was with 100 connections, which also didn't load anymore. Another thing that worries me is that JetProfiler says all the queries that get used are full table scans; could this be the problem? The website I run as a test runs 3 queries: one for a menu that outputs around 1000 rows, one for the ads that has around 560 rows, and a big one to get posts that has around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I make a lot of connections with loadUI the CPU stays low. I can't seem to figure out the main cause of the websites being unable to load when there is a high amount of traffic. If anyone has other suggestions for testing, or ideas about what might cause the problem, please let me know.
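
    Full table scans, multiplied by many concurrent connections, are a plausible bottleneck here. A first diagnostic pass, sketched with a hypothetical table name since the schema isn't shown:

        -- How many connections are allowed vs. actually open?
        SHOW GLOBAL VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Threads_connected';

        -- What are the open connections doing while the site hangs?
        SHOW FULL PROCESSLIST;

        -- Check each of the three page queries for missing indexes
        -- (type ALL in the output means a full table scan).
        EXPLAIN SELECT * FROM posts WHERE category_id = 1;  -- hypothetical query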


  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledge base for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledge-base articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or stores its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, and then query and process that XML on demand? Thanks in advance.


  • Passing integer lists in a SQL query: best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and trying to decide which of them is best in which situation, what the benefits of each are, and what the pitfalls are that should be avoided :) Right now I know of 3 ways that we currently use in our application.

    1) Table-valued parameter. Create a new table-valued parameter type in SQL Server:

        CREATE TYPE [dbo].[TVP_INT] AS TABLE(
            [ID] [int] NOT NULL
        )

    Then run the query against it:

        using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
        {
            var comm = conn.CreateCommand();
            comm.CommandType = CommandType.Text;
            comm.CommandText = @"
                UPDATE DA
                SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
                FROM [Account] DA
                JOIN @values IDs ON DA.ID = IDs.ID";
            comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" });
            conn.Open();
            comm.ExecuteScalar();
        }

    The major disadvantage of this method is the fact that LINQ doesn't support table-valued parameters (if you create an SP with a TVP parameter, LINQ won't be able to run it) :(

    2) Convert the list to binary and use it in LINQ! This is a bit better: create an SP, and you can run it within LINQ :) To do this, the SP will have an IMAGE parameter, and we'll be using a user-defined function (UDF) to convert it to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable to a byte array.

    The extension method:

        public static Byte[] ToBinary(this IEnumerable<Int32> intList)
        {
            return ToBinaryEnum(intList).ToArray();
        }

        private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
        {
            IEnumerator<Int32> marker = intList.GetEnumerator();
            while (marker.MoveNext())
            {
                Byte[] result = BitConverter.GetBytes(marker.Current);
                Array.Reverse(result);
                foreach (byte b in result)
                    yield return b;
            }
        }

    The SP:

        CREATE PROCEDURE [Accounts-UpdateImportAttempts]
            @values IMAGE
        AS
        BEGIN
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
        END

    And we can use it by running the SP directly, or in any LINQ query we need:

        using (var db = new DataContext())
        {
            db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
            // or
            var accounts = db.Accounts
                .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                    .Select(i => i.Value4)
                    .Contains(a.ID));
        }

    This method has the benefit of using compiled queries in LINQ (which will have the same SQL definition, and query plan, so will also be cached), and can be used in SPs as well. Both of these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple LINQ .Contains(). This is a simpler approach, and is perfect in simple scenarios, but is of course limited by this:

        using (var db = new DataContext())
        {
            var accounts = db.Accounts
                .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
        }

    The biggest drawback of this method is that each integer in the downloadResults variable will be passed as a separate int parameter. In this case, the query is limited by SQL Server (the maximum number of parameters allowed in a SQL query, which is a couple of thousand, if I remember right). So I'd like to ask: which of these do you think is best, and what other methods and approaches have I missed?
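
    A side note on option 1, sketched here since the original only shows the ad-hoc text command: the same TVP can also be wrapped in a stored procedure, with the caveat that table-valued parameters must be declared READONLY. The procedure name is hypothetical:

        CREATE PROCEDURE [Accounts-UpdateImportAttemptsTvp]
            @values dbo.TVP_INT READONLY
        AS
        BEGIN
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN @values IDs ON DA.ID = IDs.ID;
        END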


  • Query doesn't use a covering-index when applicable

    - by Dor
    I've downloaded the employees database and executed some queries for benchmarking purposes. Then I noticed that one query didn't use a covering index, although there was a corresponding index that I had created earlier. Only when I added a FORCE INDEX clause to the query did it use a covering index. I've uploaded two files: one is the executed SQL queries and the other is the results. Can you tell why the query uses a covering index only when a FORCE INDEX clause is added? The EXPLAIN shows that in both cases the index dept_no_from_date_idx is being used anyway. To adapt myself to the standards of SO, I'm also writing the content of the two files here.

    The SQL queries:

        USE employees;

        /* Creating an index for an index-covered query */
        CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date);

        /* Show `dept_emp` table structure, indexes and generic data */
        SHOW TABLE STATUS LIKE "dept_emp";
        DESCRIBE dept_emp;
        SHOW KEYS IN dept_emp;

        /* The EXPLAIN shows that the subquery doesn't use a covering-index */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp
        INNER JOIN (
            /* The subquery should use a covering index, but isn't */
            SELECT SQL_NO_CACHE emp_no, dept_no
            FROM dept_emp
            WHERE dept_no="d001"
            ORDER BY from_date DESC
            LIMIT 20000,50
        ) AS `der` USING (`emp_no`, `dept_no`);

        /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp
        INNER JOIN (
            /* The subquery use a covering index */
            SELECT SQL_NO_CACHE emp_no, dept_no
            FROM dept_emp FORCE INDEX(dept_no_from_date_idx)
            WHERE dept_no="d001"
            ORDER BY from_date DESC
            LIMIT 20000,50
        ) AS `der` USING (`emp_no`, `dept_no`);

    The results:

        --------------
        /* Creating an index for an index-covered query */
        CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date)
        --------------
        Query OK, 331603 rows affected (33.95 sec)
        Records: 331603  Duplicates: 0  Warnings: 0

        --------------
        /* Show `dept_emp` table structure, indexes and generic data */
        SHOW TABLE STATUS LIKE "dept_emp"
        --------------
        | Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
        | dept_emp | InnoDB | 10 | Compact | 331883 | 36 | 12075008 | 0 | 21544960 | 29360128 | NULL | 2010-05-04 13:07:49 | NULL | NULL | utf8_general_ci | NULL | | |
        1 row in set (0.47 sec)

        --------------
        DESCRIBE dept_emp
        --------------
        | Field     | Type    | Null | Key | Default | Extra |
        | emp_no    | int(11) | NO   | PRI | NULL    |       |
        | dept_no   | char(4) | NO   | PRI | NULL    |       |
        | from_date | date    | NO   |     | NULL    |       |
        | to_date   | date    | NO   |     | NULL    |       |
        4 rows in set (0.05 sec)

        --------------
        SHOW KEYS IN dept_emp
        --------------
        | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        | dept_emp | 0 | PRIMARY | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | |
        | dept_emp | 0 | PRIMARY | 2 | dept_no | A | 331883 | NULL | NULL | | BTREE | |
        | dept_emp | 1 | emp_no | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | |
        | dept_emp | 1 | dept_no | 1 | dept_no | A | 7 | NULL | NULL | | BTREE | |
        | dept_emp | 1 | dept_no_from_date_idx | 1 | dept_no | A | 13 | NULL | NULL | | BTREE | |
        | dept_emp | 1 | dept_no_from_date_idx | 2 | from_date | A | 165941 | NULL | NULL | | BTREE | |
        6 rows in set (0.23 sec)

        --------------
        /* The EXPLAIN shows that the subquery doesn't use a covering-index */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( ... ) AS `der` USING (`emp_no`, `dept_no`)
        --------------
        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | |
        | 1 | PRIMARY | dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | |
        | 2 | DERIVED | dept_emp | ref | dept_no,dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 21402 | Using where |
        3 rows in set (0.09 sec)

        --------------
        /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( ... FORCE INDEX(dept_no_from_date_idx) ... ) AS `der` USING (`emp_no`, `dept_no`)
        --------------
        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | |
        | 1 | PRIMARY | dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | |
        | 2 | DERIVED | dept_emp | ref | dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 37468 | Using where; Using index |
        3 rows in set (0.05 sec)

        Bye


  • MySQL - help me optimize this query (improved question)

    - by sandeepan-nath
    About the system:

    - There are tutors who create classes and packs.
    - A tags-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details please check the section "How tags work in this system?" below.

    Following is the concerned query (note: the comparison operators restored below match the IF() conditions in the SELECT list):

        SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) AS key_1_total_matches,
            SUM(DISTINCT( t.tag LIKE "%democracy%" )) AS key_2_total_matches,
            COUNT(DISTINCT( od.id_od )) AS tutor_popularity,
            CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published',
            CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published',
            td.*, u.*
        FROM tutor_details AS td
        JOIN users AS u ON u.id_user = td.id_user
        LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat
        LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent
        LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp )
        LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
        LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc
        LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat
        LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent
        LEFT JOIN order_details AS od ON td.id_tutor = od.id_author
        LEFT JOIN orders AS o ON od.id_order = o.id_order
        LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor
        JOIN tags AS t ON ( t.id_tag = ttagrels.id_tag ) OR ( t.id_tag = lptagrels.id_tag ) OR ( t.id_tag = wtagrels.id_tag )
        WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) )
        AND CASE WHEN ( ( t.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END
        AND CASE WHEN ( ( t.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END
        AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END
        GROUP BY td.id_tutor
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC
        LIMIT 0, 20

    The problem: the results returned by the above query are correct (the AND logic works as expected), but the time taken by the query rises alarmingly for heavier data. For the current data I have, it is around 25 seconds, as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. It is possible that some of the delay is caused because not all of the relevant fields have been indexed yet. The tag field of the tags table is indexed. Is there something faulty with the query? What can be the reason behind 20+ seconds of execution time?

    How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details like name, surname, etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details like pack name, description, etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs are stored in learning_packs_tag_relations. All individual tags are stored in the tags table.

    The explain query output: please see this screenshot - http://www.test.examvillage.com/Explain_query.jpg


  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing.

    I have two entity classes, Customer and CustomerGroup. The relation between customers and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class:

        @Entity
        public class Customer {
            ...
            @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY)
            public Set<CustomerGroup> getCustomerGroups() { ... }
            ...
            public String getUuid() { return uuid; }
            ...
        }

    The customer reference in the customer group class is annotated in the following way:

        @Entity
        public class CustomerGroup {
            ...
            @ManyToMany
            public Set<Customer> getCustomers() { ... }
            ...
            public String getUuid() { return uuid; }
            ...
        }

    Note that both the CustomerGroup and Customer classes also have a UUID field. The UUID is a unique string (uniqueness is not enforced in the data model; as you can see, it is handled as any other normal string). What I'm trying to do is fetch all customers which do not belong to any customer group OR whose customer group is a "valid group". The validity of a customer group is defined by a list of valid UUIDs. I've created the following criteria query:

        Criteria criteria = getSession().createCriteria(Customer.class);
        criteria.setProjection(Projections.countDistinct("uuid"));
        criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN);

        List<String> uuids = getValidUUIDs();
        Criterion criterion = Restrictions.isNull("groups.uuid");
        if (uuids != null && uuids.size() > 0) {
            criterion = Restrictions.or(criterion, Restrictions.in("groups.uuid", uuids));
        }
        criteria.add(criterion);

    When executing the query, it results in the following SQL:

        select count(*) as y0_
        from Customer this_
        left outer join CustomerGroup_Customer customergr3_ on this_.id = customergr3_.customers_id
        left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id = groups1_.id
        where groups1_.uuid is null or groups1_.uuid in ( ?, ? )

    The query is exactly what I wanted, with one exception: since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup results in duplicated Customer rows, and hence count(*) gives a false value, as it only counts how many result rows there are. I need the number of unique customers, which I expected to get with the Projections.countDistinct("uuid") projection. For some reason, as you can see, the projection still results in a count(*) query instead of the expected count(distinct uuid). Replacing the countDistinct projection with just count("uuid") results in exactly the same query. Am I doing something wrong, or is this a bug?

    === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that that branch was being executed. That branch used rowCount() instead of countDistinct().
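
    For reference, the SQL the countDistinct projection was expected to produce is identical to the generated query except for the aggregate, which collapses the duplicate rows introduced by the left join:

        select count(distinct this_.uuid) as y0_
        from Customer this_
        left outer join CustomerGroup_Customer customergr3_ on this_.id = customergr3_.customers_id
        left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id = groups1_.id
        where groups1_.uuid is null or groups1_.uuid in ( ?, ? )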


  • Please help fix and optimize this query

    - by user607217
    I am working on a system to find potential duplicates in our customers table (SQL 2005). I am using the built-in SOUNDEX value that our software computes when customers are added/updated, but I have also implemented the double metaphone algorithm for better matching. This is the most-nested query I have created, and I can't help but think there is a better way to do it, and I'd like to learn. In the innermost query I join the customer table to the metaphone table I created, then find customers that have identical pKey (primary phonetic key) values. I take that, union it with customers that have matching SOUNDEX values, and then proceed to score those matches with various text-similarity functions. This is currently working, but I would also like to add a union of customers whose aKey (alternate phonetic key) values match. This would be identical to "QUERY A" except substituting on (c1Akey = c2Akey) for the join. However, when I attempt to include that, I get errors when I try to execute my query. Here is the code:

        --Create aggregate ranking
        select c1Name, c2Name, nDiff, c1Addr, c2Addr, aDiff, c1SSN, c2SSN, sDiff, c1DOB, c2DOB, dDiff,
            nDiff+aDiff+dDiff+sDiff as Score,
            (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff*.5 + nDiff*.5 as [Rank]
        FROM
        (
            --Create match scores for different fields
            SELECT c1Name, c2Name, c1Addr, c2Addr, c1SSN, c2SSN, c1LTD, c2LTD, c1DOB, c2DOB,
                dbo.Jaro(c1name, c2name) AS nDiff,
                dbo.JaroWinkler(c1addr, c2addr) AS aDiff,
                CASE WHEN c1dob = '1901-01-01' OR c2dob = '1901-01-01' OR c1dob = '1800-01-01' OR c2dob = '1800-01-01'
                    THEN .5 ELSE dbo.SmithWaterman(c1dob, c2dob) END AS dDiff,
                CASE WHEN c1ssn = '000-00-0000' OR c2ssn = '000-00-0000'
                    THEN .5 ELSE dbo.Jaro(c1ssn, c2ssn) END AS sDiff
            FROM
            -- Generate list of possible matches based on multiple phonetic matching fields
            (
                select * from
                -- List of similar names from pKey field of ##Metaphone table
                --QUERY A BEGIN
                (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN,
                    lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey
                from Customers WITH (nolock)
                join ##Metaphone on customers.custno = ##Metaphone.custno) as c1
                JOIN
                (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN,
                    lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey
                from Customers with (nolock)
                join ##Metaphone on customers.custno = ##Metaphone.custno) as c2
                on (c1Pkey = c2Pkey) and (c1Custno < c2Custno)
                WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex
                --QUERY A END
                union
                --List of similar names from pregenerated SOUNDEX field
                (select t1.custno, t1.name, t1.haddr, t1.ssn, t1.lasttripdate, t1.dob, t1.[soundex], 0, 0,
                    t2.custno, t2.name, t2.haddr, t2.ssn, t2.lasttripdate, t2.dob, t2.[soundex], 0, 0
                from Customers t1 WITH (nolock)
                join customers t2 with (nolock) on t1.[soundex] = t2.[soundex] and t1.custno < t2.custno
                where (t1.name <> 'PARENT, GUARDIAN'))
            ) as a
        ) as b
        where (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff*.5 + nDiff*.5 >= 7.5
        order by [rank] desc, score desc

    Previously, I was using joins such as

        on c1.pkey = c2.pkey or c1.akey = c2.akey or c1.soundex = c2.soundex

    but the performance was horrendous, and using unions seems to work a lot better. Out of 103K customers, it is currently generating a list of 8.5M potential matches (based on the phonetic codes) in 2.25 minutes, and then taking another 2 minutes to score, rank and filter those down to about 3000. So I am happy with the performance; I just can't help but think there is a better way to structure this, and I need help adding the extra union condition. Thanks!
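
    For what it's worth, the aKey branch the poster is after would mirror QUERY A exactly, with only the join condition swapped; a sketch of the extra union branch, built from the same two subselects (the UNION will de-duplicate any pairs already matched on pKey):

        union
        --List of similar names from aKey field of ##Metaphone table
        select * from
        (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN,
            lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey
        from Customers WITH (nolock)
        join ##Metaphone on customers.custno = ##Metaphone.custno) as c1
        JOIN
        (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN,
            lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey
        from Customers with (nolock)
        join ##Metaphone on customers.custno = ##Metaphone.custno) as c2
        on (c1Akey = c2Akey) and (c1Custno < c2Custno)
        WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex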


  • Grails - ElasticSearch - QueryParsingException[[index] No query registered for [query]]; with elasticSearchHelper; JSON via curl works fine though

    - by v1p
    I have been working on a Grails project, clubbed with ElasticSearch (v 20.6), with a custom build of the elasticsearch-grails-plugin (to support geo_point indexing: v.20.6). I have been trying to do a filtered search while using script_fields (to calculate distance). Following are the closure and the JSON generated from the GXContentBuilder.

    Closure:

        records = Domain.search(searchType:'dfs_query_and_fetch'){
            query {
                filtered = {
                    query = {
                        if(queryTxt){
                            query_string(query: queryTxt)
                        }else{
                            match_all {}
                        }
                    }
                    filter = {
                        geo_distance = {
                            distance = "${userDistance}km"
                            "location"{
                                lat = latlon[0]?:0.00
                                lon = latlon[1]?:0.00
                            }
                        }
                    }
                }
            }
            script_fields = {
                distance = {
                    script = "doc['location'].arcDistanceInKm($latlon)"
                }
            }
            fields = ["_source"]
        }

    GXContentBuilder generated query JSON:

        {
            "query": {
                "filtered": {
                    "query": { "match_all": {} },
                    "filter": {
                        "geo_distance": {
                            "distance": "5km",
                            "location": { "lat": "37.752258", "lon": "-121.949886" }
                        }
                    }
                }
            },
            "script_fields": {
                "distance": { "script": "doc['location'].arcDistanceInKm(37.752258, -121.949886)" }
            },
            "fields": ["_source"]
        }

    The JSON query, used the curl way, works perfectly fine. But when I try to execute it from Groovy code, I mean with this:

        elasticSearchHelper.withElasticSearch { Client client ->
            def response = client.search(request).actionGet()
        }

    it throws the following error:

        Failed to execute phase [dfs], total failure; shardFailures {[1][index][3]: SearchParseException[[index][3]: from[0],size[60]: Parse Failure [Failed to parse source [{"from":0,"size":60,"query_binary":"eyJxdWVyeSI6eyJmaWx0ZXJlZCI6eyJxdWVyeSI6eyJtYXRjaF9hbGwiOnt9fSwiZmlsdGVyIjp7Imdlb19kaXN0YW5jZSI6eyJkaXN0YW5jZSI6IjVrbSIsImNvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiI6eyJsYXQiOiIzNy43NTIyNTgiLCJsb24iOiItMTIxLjk0OTg4NiJ9fX19fSwic2NyaXB0X2ZpZWxkcyI6eyJkaXN0YW5jZSI6eyJzY3JpcHQiOiJkb2NbJ2NvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiddLmFyY0Rpc3RhbmNlSW5LbSgzNy43NTIyNTgsIC0xMjEuOTQ5ODg2KSJ9fSwiZmllbGRzIjpbIl9zb3VyY2UiXX0=","explain":true}]]]; nested: QueryParsingException[[index] No query registered for [query]]; }

    The above closure works if I only use

        filtered = { ... }
        script_fields = { ... }

    but then it doesn't return the calculated distance. Has anyone had a similar problem? Thanks in advance :) It's possible that I might have been too dim to spot the obvious here :P


  • How to create a simple ADF dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application that supports persistent user personalization. You can read the entire article by clicking here. The idea in that article is to extend the dashboard application. My idea here is to create a similar dashboard application, but with EJB 3.0 instead of an ADF BC model layer. There is just one small trick here and I'll show you. I'm using the usual HR Oracle schema. The steps are:

    1. Create an ADF Fusion Application with EJB as the model layer.

    2. Generate the entities from tables (I'm using Departments and Employees only).

    3. Create a new session bean. I called it HRSessionEJB.

    4. Create a new method like this:

        public List getAllDepartmentsHavingEmployees(){
            JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate();
            Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees");
            JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class);
            return query.getResultList();
        }

    5. In the Departments entity, create a new native query annotation:

        @Entity
        @NamedQueries({
            @NamedQuery(name = "Departments.findAll", query = "select o from Departments o")
        })
        @NamedNativeQueries({
            @NamedNativeQuery(name = "Departments.allDepartmentsHavingEmployees",
                query = "select e.department_id, d.department_name, sum(e.salary), avg(e.salary), max(e.salary), min(e.salary) from departments d, employees e where d.department_id = e.department_id group by e.department_id, d.department_name")})
        public class Departments implements Serializable {...}

    6. Create a new POJO called AggregatedDepartment:

        package oramag.sample.dashboard.model;

        import java.io.Serializable;
        import java.math.BigDecimal;

        public class AggregatedDepartment implements Serializable {
            @SuppressWarnings("compatibility:5167698678781240729")
            private static final long serialVersionUID = 1L;
            private BigDecimal departmentId;
            private String departmentName;
            private BigDecimal sum;
            private BigDecimal avg;
            private BigDecimal max;
            private BigDecimal min;

            public AggregatedDepartment() {
                super();
            }

            public AggregatedDepartment(BigDecimal departmentId, String departmentName,
                    BigDecimal sum, BigDecimal avg, BigDecimal max, BigDecimal min) {
                super();
                this.departmentId = departmentId;
                this.departmentName = departmentName;
                this.sum = sum;
                this.avg = avg;
                this.max = max;
                this.min = min;
            }

            public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
            public BigDecimal getDepartmentId() { return departmentId; }
            public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
            public String getDepartmentName() { return departmentName; }
            public void setSum(BigDecimal sum) { this.sum = sum; }
            public BigDecimal getSum() { return sum; }
            public void setAvg(BigDecimal avg) { this.avg = avg; }
            public BigDecimal getAvg() { return avg; }
            public void setMax(BigDecimal max) { this.max = max; }
            public BigDecimal getMax() { return max; }
            public void setMin(BigDecimal min) { this.min = min; }
            public BigDecimal getMin() { return min; }
        }

    7. Create the utility class called JavaBeanResult. The purpose of this class is to configure a native SQL query to return POJOs in a single line of code. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html

        package oramag.sample.dashboard.model.util;

        /*******************************************************************************
         * Copyright (c) 2010 Oracle. All rights reserved.
         * This program and the accompanying materials are made available under the
         * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0
         * which accompanies this distribution.
         * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html
         * and the Eclipse Distribution License is available at
         * http://www.eclipse.org/org/documents/edl-v10.php.
         *
         * @author shsmith
         ******************************************************************************/

        import java.lang.reflect.Constructor;
        import java.lang.reflect.InvocationTargetException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.Query;
        import org.eclipse.persistence.exceptions.ConversionException;
        import org.eclipse.persistence.internal.helper.ConversionManager;
        import org.eclipse.persistence.internal.sessions.AbstractRecord;
        import org.eclipse.persistence.internal.sessions.AbstractSession;
        import org.eclipse.persistence.jpa.JpaHelper;
        import org.eclipse.persistence.queries.DatabaseQuery;
        import org.eclipse.persistence.queries.QueryRedirector;
        import org.eclipse.persistence.sessions.Record;
        import org.eclipse.persistence.sessions.Session;

        /***
         * This class is a simple query redirector that intercepts the result of a
         * native query and builds an instance of the specified JavaBean class from each
         * result row. The order of the selected columns must match the JavaBean class
         * constructor arguments order.
         *
         * To configure a JavaBeanResult on a native SQL query use:
         * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class);
         * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery.
         *
         * @author shsmith
         */
        public final class JavaBeanResult implements QueryRedirector {

            private static final long serialVersionUID = 3025874987115503731L;
            protected Class resultClass;

            public static void setQueryResultClass(Query query, Class resultClass) {
                JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass);
                DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query);
                databaseQuery.setRedirector(javaBeanResult);
            }

            public static void setQueryResultClass(DatabaseQuery query, Class resultClass) {
                JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass);
                query.setRedirector(javaBeanResult);
            }

            protected JavaBeanResult(Class resultClass) {
                this.resultClass = resultClass;
            }

            @SuppressWarnings("unchecked")
            public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) {
                List results = new ArrayList();
                try {
                    Constructor[] constructors = resultClass.getDeclaredConstructors();
                    Constructor javaBeanClassConstructor = null;
                    Class[] constructorParameterTypes = null;
                    List<Object[]> rows = (List<Object[]>) query.execute(
                            (AbstractSession) session, (AbstractRecord) arguments);
                    for (Object[] columns : rows) {
                        // pick the constructor whose arity matches the column count
                        boolean found = false;
                        for (Constructor constructor : constructors) {
                            javaBeanClassConstructor = constructor;
                            constructorParameterTypes = javaBeanClassConstructor.getParameterTypes();
                            if (columns.length == constructorParameterTypes.length) {
                                found = true;
                                break;
                            }
                        }
                        if (!found)
                            throw new ColumnParameterNumberMismatchException(resultClass);
                        Object[] constructorArgs = new Object[constructorParameterTypes.length];
                        for (int j = 0; j < columns.length; j++) {
                            Object columnValue = columns[j];
                            Class parameterType = constructorParameterTypes[j];
                            // convert the column value to the correct type--if possible
                            constructorArgs[j] = ConversionManager.getDefaultManager()
                                    .convertObject(columnValue, parameterType);
                        }
                        results.add(javaBeanClassConstructor.newInstance(constructorArgs));
                    }
                } catch (ConversionException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (IllegalArgumentException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (InstantiationException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (IllegalAccessException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (InvocationTargetException e) {
                    throw new ColumnParameterMismatchException(e);
                }
                return results;
            }

            public final class ColumnParameterMismatchException extends RuntimeException {
                private static final long serialVersionUID = 4752000720859502868L;
                public ColumnParameterMismatchException(Throwable t) {
                    super("Exception while processing query results-ensure column order matches constructor parameter order", t);
                }
            }

            public final class ColumnParameterNumberMismatchException extends RuntimeException {
                private static final long serialVersionUID = 1776794744797667755L;
                public ColumnParameterNumberMismatchException(Class clazz) {
                    super("Number of selected columns does not match number of constructor arguments for: " + clazz.getName());
                }
            }
        }

    8. Create the data control and a JSF or JSPX page.

    9. Drag allDepartmentsHavingEmployees from the data control and drop it on your page.

    10. Choose Graph > Type: Bar (Normal) > any layout.

    11. In the wizard screen, in the Bars label, add: sum, avg, max, min. In the X Axis label, add: departmentName, and click the OK button.

    12. Run the page; the result is shown below.

    You can download the workspace here. It was built using the latest JDeveloper version, 11.1.2.2.


  • Geolocation SQL query not finding exact location

    - by Iridium52
    I have been testing my geolocation query for some time now and I hadn't found any issues with it until now. I am trying to search for all cities within a given radius, often searching for the cities surrounding a city using that city's coords, but recently I tried searching around a city and found that the city itself was not returned. I have these cities as an excerpt in my database:

        city           latitude    longitude
        Saint-Mathieu  45.316708   -73.516253
        Saint-Édouard  45.233374   -73.516254
        Saint-Michel   45.233374   -73.566256
        Saint-Rémi     45.266708   -73.616257

    But when I run my query around the city of Saint-Rémi, with the following query...

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
            truncate((degrees(acos(
                sin(radians(tblcity.latitude)) * sin(radians(45.266708))
                + cos(radians(tblcity.latitude)) * cos(radians(45.266708))
                * cos(radians(tblcity.longitude - -73.616257))
            )) * 69.09*1.6),1) as distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    I get these results:

        city           latitude    longitude    distance
        Saint-Mathieu  45.316708   -73.516253   9.5
        Saint-Édouard  45.233374   -73.516254   8.6
        Saint-Michel   45.233374   -73.566256   5.3

    The town of Saint-Rémi is missing from the search. So I tried a modified query, hoping for a better result:

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
            truncate(( 6371 * acos(
                cos( radians( 45.266708 ) ) * cos( radians( tblcity.latitude ) )
                * cos( radians( tblcity.longitude ) - radians( -73.616257 ) )
                + sin( radians( 45.266708 ) ) * sin( radians( tblcity.latitude ) )
            ) ),1) AS distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    But I get the same result... However, if I modify Saint-Rémi's coords slightly, by changing the last digit of the lat or long by 1, both queries will return Saint-Rémi. Also, if I center the query on any of the other cities above, the searched city is returned in the results. Can anyone shed some light on what may be causing my queries above to not return the searched city of Saint-Rémi? I have added a sample of the table (with extra fields removed) below. I'm using MySQL 5.0.45. Thanks in advance.

        CREATE TABLE `tblcity` (
            `IDCity` int(1) NOT NULL auto_increment,
            `City` varchar(155) NOT NULL default '',
            `Latitude` decimal(9,6) NOT NULL default '0.000000',
            `Longitude` decimal(9,6) NOT NULL default '0.000000',
            PRIMARY KEY (`IDCity`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=52743;

        INSERT INTO `tblcity` (`city`, `latitude`, `longitude`) VALUES
        ('Saint-Mathieu', 45.316708, -73.516253),
        ('Saint-Édouard', 45.233374, -73.516254),
        ('Saint-Michel', 45.233374, -73.566256),
        ('Saint-Rémi', 45.266708, -73.616257);
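
    One plausible culprit, worth verifying against this data: when the searched point coincides with a stored point, floating-point rounding can push the acos() argument slightly above 1, and MySQL's ACOS() returns NULL for arguments outside [-1, 1], so the row is silently dropped by the HAVING clause. Clamping the argument guards against this; a sketch of the second query with the guard added:

        SELECT city, latitude, longitude,
            TRUNCATE(6371 * ACOS(LEAST(1.0,
                COS(RADIANS(45.266708)) * COS(RADIANS(latitude))
                * COS(RADIANS(longitude) - RADIANS(-73.616257))
                + SIN(RADIANS(45.266708)) * SIN(RADIANS(latitude)))), 1) AS distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance DESC;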


  • Advanced Django query with subselects and custom JOINS

    - by Bryan Ward
    I have been investigating this number-theoretic function (found in the Height model) and I need to query for things based on the prime factorization of the primary key, or id. I have created a model for the factors of the id which maintains all of the prime factors:

        class Height(models.Model):
            b = models.IntegerField(null=True, blank=True)
            c = models.IntegerField(null=True, blank=True)
            d = models.FloatField(null=True, blank=True)

        class Factors(models.Model):
            height = models.ForeignKey(Height, null=True, blank=True)
            factor = models.IntegerField(null=True, blank=True)
            degree = models.IntegerField(null=True, blank=True)
            prime_id = models.IntegerField(null=True, blank=True)

    For example, if id=24, then the associated entries in the factors table would be:

        height_id=24, factor=2, degree=3, prime_id=0
        height_id=24, factor=3, degree=1, prime_id=1

    The prime_id keeps track of the relative order of the primes. Now let p < q < r < s all be prime numbers and a, b, c, d be positive integers. Then I want to be able to query for all Heights of the form id=(p**a)*(q**b)*(r**c)*(s**d). This is simple in the case that all of p,q,r,s,a,b,c,d are known, in that I can just run

        Height.objects.get(id=(p**a)*(q**b)*(r**c)*(s**d))

    But I need to be able to query for something like (2**a)*(3**2)*(r**c)*(s**d), where r,s,a,d are unknown, and have all Heights of such form returned. Furthermore, not all of the rows in Height will have exactly four prime factors, so I need to make sure that I am not matching rows of the form id=(p**a)*(q**b)*(r**c)*(s**d)*(t**e)... From what I can tell, the following MySQL query accomplishes this, but I would like to do it through the Django ORM. I also don't know if this MySQL query is the proper way to go about doing things.

        SELECT h.*, count(f.height_id) AS factorsCount
        FROM height AS h
        LEFT JOIN factors AS f ON (
            f.height_id = h.id
            AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=1 AND factor=2 AND degree=1)
            AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=2 AND factor=3 AND degree=2)
            AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=3 AND factor=5 AND degree=1)
            AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=4 AND factor=7 AND degree=1)
        )
        GROUP BY h.id
        HAVING factorsCount=4
        ORDER BY h.id;

    Any ideas or suggestions for things to try?
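
    As an aside on the raw-SQL side (leaving the Django ORM question open): relational division like the query above is often written by aggregating over the factor rows directly, which makes the "exactly these factors and no others" requirement explicit. A sketch against the same tables, using MySQL's boolean-sum idiom and assuming each (height_id, prime_id) pair is unique:

        SELECT h.*
        FROM height AS h
        JOIN factors AS f ON f.height_id = h.id
        GROUP BY h.id
        HAVING COUNT(*) = 4  -- exactly four prime factors, no extras
           AND SUM(f.prime_id=1 AND f.factor=2 AND f.degree=1) = 1
           AND SUM(f.prime_id=2 AND f.factor=3 AND f.degree=2) = 1
           AND SUM(f.prime_id=3 AND f.factor=5 AND f.degree=1) = 1
           AND SUM(f.prime_id=4 AND f.factor=7 AND f.degree=1) = 1
        ORDER BY h.id;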


  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts, and that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. Using PostgreSQL on Windows with a Java client, I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information. Table and index:

        CREATE TABLE icl_contacts
        (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
             Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
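
    Where the question mentions forced vacuuming and rebuilds, the less-blocking variants are worth knowing about; a sketch, with the caveat that lock behavior differs across PostgreSQL versions:

        -- Reclaims dead-tuple space and refreshes statistics without an exclusive lock.
        VACUUM ANALYZE icl_contacts;

        -- REINDEX takes an exclusive lock; an alternative that keeps queries running
        -- is to build a replacement index concurrently, then swap it in.
        CREATE INDEX CONCURRENTLY icl_contacts_idx_new
            ON icl_contacts (xmlscheduledtime, currentstate, campaignfqname);
        DROP INDEX icl_contacts_idx;
        ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;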


  • Rails scalar query

    - by Craig
    I need to display a UI element (e.g. a star or checkmark) for employees that are 'favorites' of the current user (another employee). The Employee model has the following relationship defined to support this:

        has_and_belongs_to_many :favorites,
            :class_name => "Employee",
            :join_table => "favorites",
            :association_foreign_key => "favorite_id",
            :foreign_key => "employee_id"

    The favorites table has two fields: employee_id and favorite_id. If I were to write SQL, the following query would give me the results that I want:

        SELECT id, account,
            IF( ( SELECT favorite_id FROM favorites
                  WHERE favorite_id = p.id AND employee_id = ? ) IS NULL, FALSE, TRUE) isFavorite
        FROM employees p

    where the ? would be replaced by session[:user_id]. How do I represent the isFavorite scalar query in Rails? Another approach would use a query like this:

        SELECT id, account, IF(favorite_id IS NULL, FALSE, TRUE) isFavorite
        FROM employees e
        LEFT OUTER JOIN favorites f ON e.id = f.favorite_id AND employee_id = ?

    Again, the ? is replaced by the session[:user_id] value. I've had some success writing this in Rails:

        ee = Employee.find(:all,
            :joins => "LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=1",
            :select => "employees.*,favorites.favorite_id")

    Unfortunately, when I try to make this query 'dynamic' by replacing the '1' with a '?', I get errors:

        ee = Employee.find(:all,
            :joins => ["LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?", 1],
            :select => "employees.*,favorites.favorite_id")

    Obviously, I have the syntax wrong, but can :joins expressions be 'dynamic'? Is this a case for a lambda expression? I do hope to add other filters to this query and use it with will_paginate and acts_as_taggable_on, if that makes a difference.

    edit: errors from trying to make :joins dynamic:

        ActiveRecord::ConfigurationError: Association named 'LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?' was not found; perhaps you misspelled it?
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1906:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1911:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `each'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1830:in `initialize'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `new'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `add_joins!'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1686:in `construct_finder_sql'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1548:in `find_every'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:615:in `find'


  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. The two sample queries below show different queries to this view:

        var la = Runtime.OmsEntityContext.Positions
            .Where(p => p.AccountNumber == 12345678)
            .ToList();

        var deDa = Runtime.OmsEntityContext.Positions
            .Where(p => p.AccountNumber == 12345678)
            .Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP })
            .ToList();

    The first one should hand back a List. The second one will be a list of anonymous objects. When I run these queries in Entity Framework, the first one hands me back a list of results that are all exactly the same, while the second query hands me back data where the account number is the one that I queried and the other values differ. This seems to happen on a per-account-number basis: if I were to query for one account number or another, all the Position objects for one account would have the same values (those of the first one in that account's list of Positions), and the second account would have a set of Position objects that all had the same values (again, the first one in its list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. Both come back with results (say four) that show the correct data: one account number with different security numbers. Why does this happen??? Is there something I could be doing wrong such that, if I had four results for the first query above, the first record's data also appears in the 2nd-4th objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial-class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view-model-type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique, given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto-select what would be unique. Any help would give a haggard dev some hope!


  • LINQ to XML query returning a list of strings

    - by objectsbinding
    Hi all, I have the following class:

        public class CountrySpecificPIIEntity
        {
            public string Country { get; set; }
            public string CreditCardType { get; set; }
            public String Language { get; set; }
            public List<String> PIIData { get; set; }
        }

    I have the following XML that I want to query using LINQ to XML and shape into the class above:

        <?xml version="1.0" encoding="utf-8" ?>
        <piisettings>
          <piifilter country="DE" creditcardype="37" language="ALL">
            <filters>
              <filter>FIRSTNAME</filter>
              <filter>SURNAME</filter>
              <filter>STREET</filter>
              <filter>ADDITIONALADDRESSINFO</filter>
              <filter>ZIP</filter>
              <filter>CITY</filter>
              <filter>STATE</filter>
              <filter>EMAIL</filter>
            </filters>
          </piifilter>
          <piifilter country="DE" creditcardype="37" Language="en">
            <filters>
              <filter>EMAIL</filter>
            </filters>
          </piifilter>
        </piisettings>

    I'm using the following query, but I'm having trouble with the last property, i.e. PIIList:

        var query = from pii in xmlDoc.Descendants("piifilter")
                    select new CountrySpecificPIIEntity
                    {
                        Country = pii.Attribute("country").Value,
                        CreditCardType = pii.Attribute("creditcardype").Value,
                        Language = pii.Attribute("Language").Value,
                        PIIList = (List<string>)pii.Elements("filters")
                    };

        foreach (var entity in query)
        {
            Debug.Write(entity.Country);
            Debug.Write(entity.CreditCardType);
            Debug.Write(entity.Language);
            Debug.Write(entity.PIIList);
        }

    How can I get the query to return a List<string> to be hydrated into the PIIList property?

    Read the article

  • MySQL Database Query - Codeigniter

    - by user2450349
    I am building an application with CodeIgniter and need some help with a DB query. I have a table called users with the following fields: user_id, user_name, user_password, user_email, user_role, user_manager_id. In my app, I pull all records from the users table using the following: function get_clients() { $this->db->select('*'); $this->db->where('user_role', 'client'); $this->db->order_by("user_name", "Asc"); $query = $this->db->get("users"); return $query->result_array(); } This works as expected; however, when I display the results in the view, I also want to display a new column called Manager, which should show the manager's user_name field. The user_manager_id is the id of another user from the same table. I'm guessing you can create an outer join on the same table, but I'm not sure. In the view, I am displaying the returned info as follows: <table class="table table-striped" id="zero-configuration"> <thead> <tr> <th>Name</th> <th>Email</th> <th>Manager</th> </tr> </thead> <tbody> <?php foreach($clients as $row) { ?> <tr> <td><?php echo $row['user_name']; ?> (<?php echo $row['user_username']; ?>)</td> <td><?php echo $row['user_email']; ?></td> <td><?php echo $row['???']; ?></td> </tr> <?php } ?> </tbody> </table> Any idea of how I can form the query and display the manager name in the view? Example:
    user_id user_name user_password user_email user_role user_manager_id
    1 Ollie adjjk34jcd [email protected] client null
    2 James djklsdfsdjk [email protected] client 1
    When I query the database, I want to display results like this:
    Ollie [email protected]
    James [email protected] Ollie
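    A left self-join on users should do it: alias the table twice and expose the manager's name under a new column. A sketch of the model method using CodeIgniter's Active Record, where manager_name is an invented alias:

        function get_clients()
        {
            // u = the client row, m = the same table joined again as the manager
            $this->db->select('u.*, m.user_name AS manager_name');
            $this->db->from('users u');
            $this->db->join('users m', 'm.user_id = u.user_manager_id', 'left');
            $this->db->where('u.user_role', 'client');
            $this->db->order_by('u.user_name', 'asc');
            return $this->db->get()->result_array();
        }

    The view's third cell then becomes echo $row['manager_name'], which will be NULL for clients with no manager (such as Ollie above).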

    Read the article

  • Sharp Architecture, FluentNHibernate, AutoMapper, DTO, 1:m persistence question

    - by csetzkorn
    Let us say we have a class A which has a reference to another class B (1:m): public class A { public virtual B B { get; set; } } I reflect this using FluentNHibernate within Sharp Architecture and also manage to 'initialise' A and its B via a DTO and AutoMapper. The DTO contains A's values and B (just B's id value/A's foreign key initialised). I was hoping that I could persist A by 'just' using its 'out of the box' repository, without requiring B's repository, using: SaveOrUpdate(A); (A has been mapped using AutoMapper and contains B with its Id initialised.) Was my assumption too naive? Can I achieve this somehow (without ever requiring B's repository)? Or do I have to use A's and B's repositories in, for example, the controller or some other service layer? Thanks. Best wishes, Christian
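    It should work without B's repository if the session can hand back a proxy for B: NHibernate's ISession.Load<B>(id) returns an uninitialized proxy without querying the database, which is all the foreign key needs at save time. A rough sketch, with the DTO and member names invented for illustration:

        // aDto.BId carries the foreign key; Session is the current NHibernate ISession
        var a = Mapper.Map<ADto, A>(aDto);
        a.B = Session.Load<B>(aDto.BId);   // proxy only, no SELECT is issued
        aRepository.SaveOrUpdate(a);

    Letting AutoMapper construct B from just an id, by contrast, produces a transient B, and SaveOrUpdate(A) will only cascade to it if the mapping says so, which is usually not what you want here.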

    Read the article

  • Difference between FQL query and Graph API object access

    - by jwynveen
    What's the difference between accessing user data with the Facebook Graph API (http://graph.facebook.com/btaylor) and using the Graph API to make an FQL query of the same user (https://api.facebook.com/method/fql.query?query=QUERY)? Also, does anyone know which of them the Facebook Developer Toolkit (for ASP.NET) uses? The reason I ask is because I'm trying to access the logged-in user's birthday after they begin a Facebook Connect session on my site, but when I use the toolkit it doesn't return it. However, if I make a manual call to the Graph API for that user object, it does return it. It's possible I might have something wrong with my call from the toolkit. I think I may need to include the session key, but I'm not sure how to get it. Here's the code I'm using: _connectSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY); try { if (!_connectSession.IsConnected()) { // Not authenticated, proceed as usual. statusResponse = "Please sign-in with Facebook."; } else { // Authenticated, create API instance _facebookAPI = new Api(_connectSession); // Load user user user = _facebookAPI.Users.GetInfo(); statusResponse = user.ToString(); ViewData["fb_user"] = user; } } catch (Exception ex) { //An error happened, so disconnect session _connectSession.Logout(); statusResponse = "Please sign-in with Facebook."; }
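    Both routes ultimately read the same data: graph.facebook.com/USER returns the user object directly, while fql.query lets you shape the result with SQL-like syntax over the FQL user table. On the birthday specifically, the field is only returned when the token carries the user_birthday extended permission, which may be why the toolkit call comes back without it. For comparison, a sketch of the same lookup as FQL; the URL form is the legacy REST endpoint, and accessToken is assumed to hold the Connect session's token:

        // FQL equivalent of reading the Graph API user object
        string fql = "SELECT uid, name, birthday FROM user WHERE uid = me()";
        string url = "https://api.facebook.com/method/fql.query?format=json"
                   + "&query=" + Uri.EscapeDataString(fql)
                   + "&access_token=" + accessToken;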

    Read the article

  • C# WCF Server retrieves 'List<T>' with 1 entry, but client doesn't receive it?! Please help urgently

    - by Neville
    Hi Everyone, I've been battling and trying to research this issue for over 2 days now with absolutely no luck. I am trying to retrieve a list of clients from the server (the server uses FluentNHibernate). The client object is as follows: [DataContract] //[KnownType(typeof(System.Collections.Generic.List<ContactPerson>))] //[KnownType(typeof(System.Collections.Generic.List<Address>))] //[KnownType(typeof(System.Collections.Generic.List<BatchRequest>))] //[KnownType(typeof(System.Collections.Generic.List<Discount>))] [KnownType(typeof(EClientType))] [KnownType(typeof(EComType))] public class Client { #region Properties [DataMember] public virtual int ClientID { get; set; } [DataMember] public virtual EClientType ClientType { get; set; } [DataMember] public virtual string RegisterID { get; set; } [DataMember] public virtual string HerdCode { get; set; } [DataMember] public virtual string CompanyName { get; set; } [DataMember] public virtual bool InvoicePerBatch { get; set; } [DataMember] public virtual EComType ResultsComType { get; set; } [DataMember] public virtual EComType InvoiceComType { get; set; } //[DataMember] //public virtual IList<ContactPerson> Contacts { get; set; } //[DataMember] //public virtual IList<Address> Addresses { get; set; } //[DataMember] //public virtual IList<BatchRequest> Batches { get; set; } //[DataMember] //public virtual IList<Discount> Discounts { get; set; } #endregion #region Overrides public override bool Equals(object obj) { var other = obj as Client; if (other == null) return false; return other.GetHashCode() == this.GetHashCode(); } public override int GetHashCode() { return ClientID.GetHashCode() | ClientType.GetHashCode() | RegisterID.GetHashCode() | HerdCode.GetHashCode() | CompanyName.GetHashCode() | InvoicePerBatch.GetHashCode() | ResultsComType.GetHashCode() | InvoiceComType.GetHashCode();// | Contacts.GetHashCode() | //Addresses.GetHashCode() | Batches.GetHashCode() | Discounts.GetHashCode(); } #endregion } As you can see, I have already tried to remove the sub-lists, though even with this simplified version of the client I still run into the problem. My Fluent mapping is: public class ClientMap : ClassMap<Client> { public ClientMap() { Table("Clients"); Id(p => p.ClientID); Map(p => p.ClientType).CustomType<EClientType>(); Map(p => p.RegisterID); Map(p => p.HerdCode); Map(p => p.CompanyName); Map(p => p.InvoicePerBatch); Map(p => p.ResultsComType).CustomType<EComType>(); Map(p => p.InvoiceComType).CustomType<EComType>(); //HasMany<ContactPerson>(p => p.Contacts) // .KeyColumns.Add("ContactPersonID") // .Inverse() // .Cascade.All(); //HasMany<Address>(p => p.Addresses) // .KeyColumns.Add("AddressID") // .Inverse() // .Cascade.All(); //HasMany<BatchRequest>(p => p.Batches) // .KeyColumns.Add("BatchID") // .Inverse() // .Cascade.All(); //HasMany<Discount>(p => p.Discounts) // .KeyColumns.Add("DiscountID") // .Inverse() // .Cascade.All(); } } The client method, seen below, connects to the server. The server retrieves the list, and everything looks right in the object; still, when it returns, the client doesn't receive anything (it receives a List object, but with nothing in it).
Herewith the calling method: public List<s.Client> GetClientList() { try { s.DataServiceClient svcClient = new s.DataServiceClient(); svcClient.Open(); List<s.Client> clients = new List<s.Client>(); clients = svcClient.GetClientList().ToList<s.Client>(); svcClient.Close(); // when received back from the server, the clients object has a count of 0 return clients; } catch (Exception e) { MessageBox.Show(e.Message); } return null; } And the server method: public IList<Client> GetClientList() { var clients = new List<Client>(); try { using (var session = SessionHelper.OpenSession()) { clients = session.Linq<Client>().Where(p => p.ClientID > 0).ToList<Client>(); } } catch (Exception e) { EventLog.WriteEntry("eCOWS.Data", e.Message); } return clients; // returns a list with 1 client in it } The server method interface is: [UseNetDataContractSerializer] [OperationContract] IList<Client> GetClientList(); For final reference, here are my client app.config entries: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IDataService" listenBacklog="10" maxConnections="10" transferMode="Buffered" transactionProtocol="OleTransactions" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" receiveTimeout="00:10:00" sendTimeout="00:10:00"> <readerQuotas maxDepth="51200000" maxStringContentLength="51200000" maxArrayLength="51200000" maxBytesPerRead="51200000" maxNameTableCharCount="51200000" /> <security mode="Transport"/> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:9000/eCOWS/DataService" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IDataService" contract="eCowsDataService.IDataService" name="NetTcpBinding_IDataService" behaviorConfiguration="eCowsEndpointBehavior"> </endpoint> <endpoint address="MEX" binding="mexHttpBinding" contract="IMetadataExchange" /> </client> <behaviors> <endpointBehaviors> <behavior name="eCowsEndpointBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> And my server app.config: <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBinding" maxConnections="10" listenBacklog="10" transferMode="Buffered" transactionProtocol="OleTransactions" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" sendTimeout="00:10:00" receiveTimeout="00:10:00"> <readerQuotas maxDepth="51200000" maxStringContentLength="51200000" maxArrayLength="51200000" maxBytesPerRead="51200000" maxNameTableCharCount="51200000" /> <security mode="Transport"/> </binding> </netTcpBinding> </bindings> <services> <service name="eCows.Data.Services.DataService" behaviorConfiguration="eCowsServiceBehavior"> <host> <baseAddresses> <add baseAddress="http://localhost:9001/eCOWS/" /> <add baseAddress="net.tcp://localhost:9000/eCOWS/" /> </baseAddresses> </host> <endpoint address="DataService" binding="netTcpBinding" contract="eCows.Data.Services.IDataService" behaviorConfiguration="eCowsEndpointBehaviour"> </endpoint> <endpoint address="MEX" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <endpointBehaviors> <behavior name="eCowsEndpointBehaviour"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="eCowsServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceThrottling maxConcurrentCalls="10" maxConcurrentSessions="10"/> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> <behavior
name="MexBehaviour"> <serviceMetadata /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> I use to run into "socket closed / network or timeout" errors, and the trace showed clearly that on the callback it was looking for a listening endpoint, but couldn't find one. Anyway, after adding the UseNetSerializer that error went away, yet now I'm just not getting anything. Oh PS. if I add all the commented out List items, I still retrieve an entry from the DB, but also still not receive anything on the client. if I remove the [UseNetDataContractSerializer] I get the following error(s) in the svclog : WARNING: Description Faulted System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel WARNING: Description Faulted System.ServiceModel.Channels.ServiceChannel ERROR: Initializing[eCows.Data.Models.Client#3]-failed to lazily initialize a collection of role: eCows.Data.Models.Client.Addresses, no session or session was closed ... ERROR: Could not find default endpoint element that references contract 'ILogbookManager' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. If I add a .Not.LazyLoad to the List mapping items, I'm back at not receiving errors, but also not receiving any client information.. Sigh! Please, if anyone can help with this I'd be extremely grateful. I'm probably just missing something small.. but... what is it :) hehe. Thanks in advance! Neville

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >