Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.


  • Load only some columns with Hibernate native SQL queries

    - by Alessandro Dionisi
    I have a table in the database and I want to load only some columns from the result set. I defined a native SQL query in the hbm file:

    ```xml
    <sql-query name="query">
        <return alias="r" class="RawData"/>
        <![CDATA[
        SELECT DESCRIPTION as {r.description}
        FROM RAWD_RAWDATAS r
        WHERE r.RAWDATA_ID=?
        ]]>
    </sql-query>
    ```

    This query fails, however, with the error:

    ```
    could not read column value from result set: RAWDATA1_14_0_; Invalid column name
    SQL Error: 17006, SQLState: null
    ```

    because Hibernate tries to load all of the entity's fields from the result set. I also found a matching bug in the Hibernate JIRA (http://opensource.atlassian.com/projects/hibernate/browse/HHH-3035). Does anyone know a workaround to accomplish this task?
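
    A commonly suggested workaround (a sketch only; the DAO wrapper and variable names are illustrative, and it assumes only the description column is needed) is to query for scalars instead of a mapped entity, so Hibernate reads just the columns you name:

    ```java
    import org.hibernate.Hibernate;
    import org.hibernate.SQLQuery;
    import org.hibernate.Session;

    // Sketch: load a single column via scalars so Hibernate does not try to
    // hydrate the whole RawData entity from the result set.
    public class RawDataDao {
        public String loadDescription(Session session, long rawDataId) {
            SQLQuery query = session.createSQLQuery(
                "SELECT DESCRIPTION FROM RAWD_RAWDATAS WHERE RAWDATA_ID = :id");
            query.addScalar("DESCRIPTION", Hibernate.STRING); // only this column is read
            query.setParameter("id", rawDataId);
            return (String) query.uniqueResult();
        }
    }
    ```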

    Read the article

  • Appengine backreferences - need composite index?

    - by davezor
    I have a query that very recently started throwing: "The built-in indices are not efficient enough for this query and your data. Please add a composite index for this query." I checked the line on which this exception is being thrown, and the problem query is this one:

    ```python
    count = self.vote_set.filter("direction =", 1).count()
    ```

    This is literally a one-filter operation using App Engine's built-in back-references. I have no idea how to optimize this query... does anyone have any suggestions? I tried to add this index:

    ```yaml
    - kind: Vote
      properties:
      - name: direction
        direction: desc

    - kind: Vote
      properties:
      - name: direction
    ```

    And I got a message (obviously) saying this was an unnecessary index. Thanks for your help in advance.

    Read the article

  • How to load an entity by a key other than primary key?

    - by stacker
    In a customized servlet (Seam 2.1.2) this works fine:

    ```java
    TableNameHome tableNameHome = (TableNameHome) Component.getInstance("tableNameHome");
    TableName entity = tableNameHome.getInstance();
    entity.setXXX();
    tableNameHome.persist();
    ```

    However, this one fails:

    ```java
    entityManager = tableNameHome.getEntityManager();
    Query query = entityManager.createQuery(
        "SELECT b FROM tablename b WHERE b.box_id = :key2nd");
    query.setParameter("key2nd", value);
    List results = query.getResultList();
    ```

    and leads to this error message:

    ```
    org.hibernate.hql.ast.QuerySyntaxException: tablename is not mapped
    [SELECT b FROM tablename b WHERE b.key2nd = :key2nd]
    ```

    In EJB 2.1 I could implement other finder methods, but EntityHome.find() searches only by primary key. What do I need to do in order to query by criteria other than the primary key?
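
    The exception suggests the query names the table rather than the mapped entity: JPQL/HQL queries reference the entity class name and its mapped properties, not table and column names. A sketch, assuming the entity class is called TableName with a mapped property key2nd:

    ```java
    // JPQL names the mapped entity class and its properties, not the table:
    Query query = entityManager.createQuery(
        "SELECT b FROM TableName b WHERE b.key2nd = :key2nd");
    query.setParameter("key2nd", value);
    List results = query.getResultList();
    ```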

    Read the article

  • SQLAlchemy - loading user by username

    - by keithjgrant
    Just diving into Pylons here, and trying to get my head around the basics of SQLAlchemy. I have figured out how to load a record by id:

    ```python
    user_q = session.query(model.User)
    user = user_q.get(user_id)
    ```

    But how do I query by a specific field (i.e. username)? I assume there is a quick way to do it with the model rather than hand-building the query. I think it has something to do with the add_column() function on the query object, but I can't quite figure out how to use it. I've been trying stuff like this, but obviously it doesn't work:

    ```python
    user_q = meta.Session.query(model.User).add_column('username'=user_name)
    user = user_q.get()
    ```
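
    For the record, add_column() adds an extra result column; filtering on a mapped field is done with filter_by() or filter(), while get() is reserved for primary-key lookups. A minimal sketch:

    ```python
    # filter_by() matches keyword arguments against mapped columns;
    # first() returns one object or None.
    user = session.query(model.User).filter_by(username=user_name).first()

    # Equivalent, with an explicit column expression:
    user = session.query(model.User).filter(model.User.username == user_name).first()
    ```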

    Read the article

  • How do I guarantee row uniqueness in MySQL without the use of a UNIQUE constraint?

    - by MalcomTucker
    Hi. I have some fairly simple requirements, but I'm not sure how to implement them:

    - I have multiple concurrent threads running the same query.
    - The query supplies a 'string' value: if it exists in the table, the query should return the id of the matching row; if not, the query should insert the 'string' value and return the last inserted id.
    - The 'string' column is (and must be) a TEXT column (it's bigger than VARCHAR(255)), so I cannot set it as unique; uniqueness must be enforced through the access mechanism.
    - The query needs to be in stored-procedure form (which doesn't support table locks in MySQL).

    How can I guarantee that 'string' is unique? How can I prevent other threads from writing to the table after another thread has read it and found no matching 'string' item? Thanks for any advice.
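
    A minimal sketch of one commonly used approach (all table, column and parameter names are illustrative): store a fixed-width hash of the text alongside it, put the UNIQUE index on the hash, and let the index arbitrate between concurrent inserts:

    ```sql
    -- Enforce uniqueness on a SHA-1 of the TEXT column, since the TEXT
    -- column itself cannot carry a full UNIQUE constraint.
    ALTER TABLE items ADD COLUMN string_hash CHAR(40) NOT NULL;
    ALTER TABLE items ADD UNIQUE INDEX ux_items_string_hash (string_hash);

    -- Inside the stored procedure: INSERT IGNORE resolves the race between
    -- concurrent threads, and the SELECT returns the id whether this call
    -- inserted the row or another thread got there first.
    INSERT IGNORE INTO items (string_value, string_hash)
    VALUES (p_string, SHA1(p_string));

    SELECT id FROM items WHERE string_hash = SHA1(p_string);
    ```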

    Read the article

  • Placing the where condition

    - by user182944
    I came up with the query below:

    ```sql
    SELECT ROOMNO, BUILDINGNO FROM MRM_ROOM_DETAILS
    WHERE ROOMID IN (
        SELECT DISTINCT roomid FROM MRM_BOOKING_DETAILS
        WHERE (CHECKIN NOT BETWEEN '2012-04-13 09:50:00' AND '2012-04-13 10:20:00')
          AND (CHECKOUT NOT BETWEEN '2012-04-13 09:50:00' AND '2012-04-13 10:20:00'))
    AND CAPACITY > 15 AND PROJECTIONSTATUS = 'NO';
    ```

    I need to run this query through the SQLiteDatabase.query() method and fetch the rows accordingly. I am not able to understand how to fit this big WHERE condition (which contains a sub-query as well) into the "String selection" parameter, i.e. the 3rd parameter of the method. Should I simply write the entire WHERE part (including the sub-query) as a string in the 3rd parameter, or is there some other, better way of doing this? Please suggest the best approach. Regards,
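
    Passing the whole WHERE clause, sub-query included, as the selection string does work, but for a statement this involved, rawQuery() is often simpler because the full SQL stays in one piece. A sketch, assuming db is an open SQLiteDatabase:

    ```java
    import android.database.Cursor;
    import android.database.sqlite.SQLiteDatabase;

    // Sketch: bind the timestamps as arguments instead of inlining them.
    String sql = "SELECT ROOMNO, BUILDINGNO FROM MRM_ROOM_DETAILS"
            + " WHERE ROOMID IN (SELECT DISTINCT roomid FROM MRM_BOOKING_DETAILS"
            + "   WHERE (CHECKIN NOT BETWEEN ? AND ?)"
            + "     AND (CHECKOUT NOT BETWEEN ? AND ?))"
            + " AND CAPACITY > 15 AND PROJECTIONSTATUS = 'NO'";
    Cursor cursor = db.rawQuery(sql, new String[] {
            "2012-04-13 09:50:00", "2012-04-13 10:20:00",
            "2012-04-13 09:50:00", "2012-04-13 10:20:00"});
    ```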

    Read the article

  • The case of the mysterious MySQL caching across restarts

    - by shanusmagnus
    I found a very slow MySQL query in my web app. The weird thing is that the query is only slow the first time it's executed, despite the fact that the query cache is disabled (query_cache_size is 0):

    ```
    mysql> show variables like 'query%';
    +------------------------------+---------+
    | Variable_name                | Value   |
    +------------------------------+---------+
    | query_alloc_block_size       | 8192    |
    | query_cache_limit            | 1048576 |
    | query_cache_min_res_unit     | 4096    |
    | query_cache_size             | 0       |
    | query_cache_type             | ON      |
    | query_cache_wlock_invalidate | OFF     |
    | query_prealloc_size          | 8192    |
    +------------------------------+---------+
    ```

    The even weirder thing is that the speedup persists even after the MySQL server has been stopped and restarted (I'm using OS X and perform this restart using the System Preferences pane). The only way I can re-create the poor performance of the initial query is by rebooting the system. So my question is: how is this happening? Obviously some sort of caching is at work, but where? And how does it persist across database restarts? This query is mediated through our web app, which goes via PHP/Apache, but there are no extra bells and whistles, and the curious caching also persists across Apache restarts. Help?

    Read the article

  • Error in PHP with Mysql

    - by maltad
    Hello, I'm starting to learn PHP. When I run the script, the output says: "Assigned Employee:resource(6) of type (mysql result)". Please help me, and sorry for my bad English. Here is the code:

    ```php
    include_once 'rnheader.php';
    include_once 'rnfunctions.php';

    echo '';
    echo ' Assigned Employee:';
    $query = "SELECT UserName FROM employee where Classification_ClassificationID = '2'";
    $result = queryMysql($query);
    if (!queryMysql($query)) {
        echo "Query fail: $query" . mysql_error() . "";
    } else {
        var_dump($result);
        exit;
        echo ''; // or name="toinsert[]"
        while ($row = mysqli_fetch_array($result)) {
            echo '' . htmlspecialchars($row['UserName']) . '';
        }
    }
    echo '';
    ?>
    ```

    Read the article

  • Finding Most Recent Order for a Particular Item

    - by visitor
    I'm trying to write a SQL query for DB2 version 8 which retrieves the most recent order of a specific part for a list of users. The query receives a parameter which contains a list of customer id numbers and the part id number. For example, the Order table has the columns: OrderID, PartID, CustomerID, OrderTime. I initially wanted to try:

    ```sql
    SELECT * FROM Order
    WHERE OrderId = (
        SELECT orderId FROM Order
        WHERE partId = #requestedPartId#
          AND customerId = #customerId#
        ORDER BY orderTime DESC
        FETCH FIRST 1 ROWS ONLY
    );
    ```

    The problem with the above query is that it only works for a single user, and my query needs to cover multiple users. Does anyone have a suggestion for how I could expand it to work for multiple users? If I remove the "fetch first 1 rows only", then it returns all rows instead of the most recent. I also tried using MAX(OrderTime), but I couldn't find a way to return the OrderId from the sub-select. Thanks! Note: DB2 version 8 does not support the SQL "TOP" function.
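
    A sketch of one approach using the OLAP function ROW_NUMBER(), which DB2 V8 does support: rank each customer's orders for the part by recency, then keep only row 1 per customer. Names follow the question; #customerIdList# stands in for however the IN-list is expanded, and quoting may be needed if the table is really named Order:

    ```sql
    -- Keep the newest order per customer for the requested part.
    SELECT OrderID, PartID, CustomerID, OrderTime
    FROM (
        SELECT OrderID, PartID, CustomerID, OrderTime,
               ROW_NUMBER() OVER (PARTITION BY CustomerID
                                  ORDER BY OrderTime DESC) AS rn
        FROM Order
        WHERE PartID = #requestedPartId#
          AND CustomerID IN (#customerIdList#)
    ) ranked
    WHERE rn = 1
    ```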

    Read the article

  • How to predict result set row count?

    - by Saurabh Kumar
    I have an application where I dynamically build a big SQL query for SQL Server 2008. This query is based on various search criteria which the user might give, such as search by last name, first name, SSN, etc. The requirement is that if the user gives a condition under which the formed query would return a lot of rows (configurable to a maximum of N rows), then the application must instead send back a message telling the user to refine the search query, as the existing query would return too many rows. I would not want to bring back, say, 5000 rows to the client and then discard that data just to show the user an error. What is an efficient way to tackle this issue?
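
    One common way to handle this (a sketch; the table, column and parameter names are illustrative) is to probe before fetching: either count the matches with the same WHERE clause, or fetch at most N+1 keys and treat a full N+1 as "too many":

    ```sql
    -- Option 1: a cheap COUNT(*) with the same dynamically built WHERE clause.
    SELECT COUNT(*) AS MatchCount
    FROM Person
    WHERE LastName = @lastName;  -- ...plus the other generated predicates

    -- Option 2: pull at most N+1 keys; if @maxRows + 1 rows come back, tell
    -- the user to refine the search instead of materializing the full rows.
    SELECT TOP (@maxRows + 1) PersonId
    FROM Person
    WHERE LastName = @lastName;
    ```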

    Read the article

  • (database) I'm trying to create a form in Access 2007 with 2 drop-down boxes to view a report by state or name

    - by jeff orris
    I'm an intern at a database management company and the boss is training me in Access. I took the Access tutorials, but they definitely didn't cover enough to do what seems like a simple task. My problem is this: I have a simple table with contact info, with 16 columns (Local_Utility, Requested_User_Type, First_Name, Last_Name, Address 1, Address 2, Country, State, City, Zip, Phone_Number, Username\Email, Password, Confirm Password, and Parcel_Number) and 6 rows of names (keep in mind this is just a test from the boss to help me learn). I created a form with 2 drop-down boxes (Last Name and State), and I'm trying to create a View button to show an individual report for a query I made with 6 simple contact-info columns (Last_Name, First_Name, Address1, City, State, and Phone_Number). Problem 1 is that I can view the query with the view-by-name-or-state button, but I can't view a simple individual report from the query using the button. Problem 2 is that when I put Forms!frmMyparamForm!txtMyStateParamField as criteria on the query for the State drop-down it works, but when I use Forms!frmMyparamForm!txtMyNameParamField it doesn't, and that annoying parameter box pops up. Problem 3 is that after I close the query, all the states and names in my drop-down boxes on the form disappear. I'm a beginner at this, please help me.
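
    For Problem 1, one commonly used pattern (sketched below; the report, button and combo-box names are all illustrative) is to open the report with a WHERE condition built from the form's combo boxes rather than relying on query criteria:

    ```vba
    ' A minimal sketch, assuming a report named rptContactInfo and combo
    ' boxes named cboLastName and cboState on the form. The WHERE condition
    ' filters the report to the selected name and state.
    Private Sub cmdView_Click()
        Dim strWhere As String
        strWhere = "[Last_Name]='" & Me.cboLastName & "'" & _
                   " AND [State]='" & Me.cboState & "'"
        DoCmd.OpenReport "rptContactInfo", acViewPreview, , strWhere
    End Sub
    ```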

    Read the article

  • SQL Server - how to execute the second half of an OR only when the first one fails

    - by fn79
    Suppose I have a table with the following records:

    ```
    value              text
    company/about      about Us
    company            company
    company/contactus  company contact
    ```

    I have a very simple query in SQL Server, shown below, but I am having a problem with the OR condition. In the query, I am trying to find the text for the value 'company/about'. Only if it is not found do I want the other side of the OR to run. The query currently returns two records:

    ```
    value              text
    company/about      about Us
    company            company
    ```

    Query:

    ```sql
    SELECT * FROM tbl
    WHERE value = 'company/about'
       OR value = SUBSTRING('company/about', 0, CHARINDEX('/', 'company/about'));
    ```

    How can I modify the query so the result set looks like:

    ```
    value              text
    company/about      about Us
    ```
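
    SQL has no short-circuiting OR across rows, but the fallback branch can be made conditional on the exact match finding nothing. A sketch using NOT EXISTS:

    ```sql
    -- The second branch only matches when no row has the exact value.
    SELECT * FROM tbl
    WHERE value = 'company/about'
       OR (NOT EXISTS (SELECT 1 FROM tbl WHERE value = 'company/about')
           AND value = SUBSTRING('company/about', 0,
                                 CHARINDEX('/', 'company/about')));
    ```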

    Read the article

  • Rails 2.3.8 Compound condition

    - by Michael Guantonio
    I have a Rails query that I would like to run. The only problem I am having is the query structure. Essentially the query looks like this:

    ```ruby
    queryList = model.find(:all, :conditions => [id = "id"])  # returns a query list

    # here is the issue
    compound = otherModel.find(:first,
      :select     => "an_id",
      :conditions => ["some_other_id = ? and an_id = ?", some_other_id, an_id])
    ```

    Here an_id is actually a list of ids from the query list. How can I write this in Rails to associate a single id with a list that may contain many ids?
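
    For binding a list, the Rails 2.3 conditions array expands an Array bound to IN (?) automatically. A sketch (model and column names follow the question; collecting the ids from the first query is assumed):

    ```ruby
    # Collect the ids from the first query, then bind the whole array to IN (?).
    an_ids = queryList.map { |record| record.id }

    compound = otherModel.find(:all,
      :select     => "an_id",
      :conditions => ["some_other_id = ? AND an_id IN (?)", some_other_id, an_ids])
    ```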

    Read the article

  • how do I refactor this to make single function calls?

    - by stack.user.1
    I've been using this for a while, updating MySQL as needed. However, I'm not too sure about the syntax, and I need to migrate the SQL to an array. Particularly the line:

    ```php
    database::query("CREATE TABLE $name($query)");
    ```

    Does this translate to:

    ```sql
    CREATE TABLE bookmark(name VARCHAR(64), url VARCHAR(256), tag VARCHAR(256), id INT)
    ```

    This is my... guess. Is this correct?

    ```php
    class table extends database
    {
        private function create($name, $query)
        {
            database::query("CREATE TABLE $name($query)");
        }

        public function make($type)
        {
            switch ($type) {
                case "credentials":
                    self::create('credentials', 'id INT NOT NULL AUTO_INCREMENT, flname VARCHAR(60), email VARCHAR(32), pass VARCHAR(40), PRIMARY KEY(id)');
                    break;
                case "booomark":
                    self::create('boomark', 'name VARCHAR(64), url VARCHAR(256), tag VARCHAR(256), id INT');
                    break;
                case "tweet":
                    self::create('tweet', 'time INT, fname VARCHAR(32), message VARCHAR(128), email VARCHAR(64)');
                    break;
                default:
                    throw new Exception('Invalid Table Type');
            }
        }
    }
    ```

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from Phil's list, which he has been putting together.

    Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev)

    This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

    ```sql
    CREATE TABLE [dbo].[Page](
        [PageId] [int] IDENTITY(1,1) NOT NULL,
        [Title] [varchar](75) NOT NULL,
        [Sequence] [int] NOT NULL,
        [ThemeId] [int] NOT NULL,
        [CustomCss] [text] NOT NULL,
        [CustomScript] [text] NOT NULL,
        [PageGroupId] [int] NOT NULL);

    CREATE PROCEDURE PageSelectBySequence
    (
        @sequenceMin smallint,
        @sequenceMax smallint
    )
    AS
    BEGIN
        SELECT [PageId], [Title], [Sequence], [ThemeId],
               [CustomCss], [CustomScript], [PageGroupId]
        FROM [CMS].[dbo].[Page]
        WHERE Sequence BETWEEN @sequenceMin AND @SequenceMax
    END
    ```

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev)

    There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

    ```sql
    SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss,
           Page.CustomScript, PageEffectParams.Name, PageEffectParams.Value,
           (SELECT EffectName
            FROM dbo.Effect
            WHERE EffectId = dbo.PageEffects.EffectId) AS EffectName
    FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId
        INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
    ```

    This can and should be written as:

    ```sql
    SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss,
           Page.CustomScript, PageEffectParams.Name, PageEffectParams.Value,
           EffectName
    FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId
        INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
        INNER JOIN dbo.Effect ON dbo.Effects.EffectId = dbo.PageEffects.EffectId
    ```

    The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause.

    Few or no comments.

    This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, while comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model.

    Use of functions in a WHERE clause. (Anil Das)

    Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function will not guarantee a full table scan, but there is a good chance of one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
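
    As a concrete illustration of that last smell (the table and column names are made up for the example), compare a predicate that wraps the column in a function with a sargable rewrite that can use an index on OrderDate:

    ```sql
    -- Smell: YEAR() must run for every row, so an index on OrderDate is useless.
    SELECT OrderId, OrderDate
    FROM Orders
    WHERE YEAR(OrderDate) = 2010;

    -- Rewrite: same rows, but the range predicate can seek on the index.
    SELECT OrderId, OrderDate
    FROM Orders
    WHERE OrderDate >= '2010-01-01'
      AND OrderDate <  '2011-01-01';
    ```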

    Read the article

  • XML DB Content Connector unable to accept binary content due to Invalid argument(s) in call oracle.sql.BLOB.setBinaryStream(0L)

    - by sthieme
    Dear Readers,

    I am working on implementing a custom Document Management System using the Oracle XML DB Content Connector. See the following documentation link for details:

    Oracle XML DB Developer's Guide 11g Release 2 (11.2), Chapter 31, Using Oracle XML DB Content Connector
    http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_jcr.htm

    In particular, the following example gave me some trouble to run successfully:

    Sample Code to Upload File
    http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_jcr.htm#ADXDB5627

    I had already succeeded in setting some of the properties, i.e. jcr:encoding, jcr:mimeType, ojcr:displayName and ojcr:language. However, setting the jcr:data property as described in the example failed consistently, both with the documented input FileStream and with a fixed string:

    ```java
    contentNode.setProperty("jcr:data", "mystringvalue");
    ```

    After some research I found the following Support Note, which describes the cause of the issue in JDBC driver version 11.2.0.1:

    Error "ORA-17068: Invalid argument(s) in call" Using Method setBinaryStream(0L) in JDBC 11.2.0.1 (Doc ID 1234235.1)
    https://support.oracle.com/epmos/faces/DocContentDisplay?id=1234235.1

    It can easily be solved by upgrading to JDBC 11.2.0.2, or worked around using the following property setting:

    ```
    java -Doracle.jdbc.LobStreamPosStandardCompliant=false ...
    ```

    With the workaround in place the upload succeeds:

    ```
    C:\Oracle\Database\product\11.2.0\dbhome_1>java -Doracle.jdbc.LobStreamPosStandardCompliant=false UploadFile jdbc:oracle:oci:@localhost:1522:orcl XDB welcome1 /public MyFile.txt text/plain
    19.08.2014 11:50:26 oracle.jcr.impl.OracleRepositoryImpl login
    INFO: JCR repository descriptors:
    query.xpath.pos.index = true
    option.versioning.supported = false
    jcr.repository.version = 11.1.0.0.0
    option.observation.supported = false
    option.locking.supported = false
    oracle.jcr.framework.version = 11.1.0.0.0
    query.xpath.doc.order = false
    jcr.specification.version = 1.0
    jcr.repository.vendor = Oracle
    option.query.sql.supported = false
    jcr.specification.name = Content Repository for Java Technology API
    level.2.supported = true
    level.1.supported = true
    jcr.repository.name = XML DB Content Connector
    jcr.repository.vendor.url = http://www.oracle.com
    oracle.jcr.persistenceManagerFactory = oracle.jcr.impl.xdb.XDBPersistenceManagerFactory
    option.transactions.supported = false
    19.08.2014 11:50:26 oracle.jcr.impl.OracleRepositoryImpl login
    INFO: Session Session-1 connected for user id XDB
    19.08.2014 11:50:27 oracle.jcr.impl.OracleSessionImpl logout
    INFO: Session-1: logout
    ```

    instead of failing as it did without the property:

    ```
    C:\Oracle\Database\product\11.2.0\dbhome_1>java UploadFile jdbc:oracle:oci:@localhost:1522:orcl XDB welcome1 /public MyFile.txt text/plain
    19.08.2014 10:56:39 oracle.jcr.impl.OracleRepositoryImpl login
    INFO: JCR repository descriptors:
    [... same JCR repository descriptors as above ...]
    19.08.2014 10:56:39 oracle.jcr.impl.OracleRepositoryImpl login
    INFO: Session Session-1 connected for user id XDB
    Exception in thread "main" javax.jcr.RepositoryException: Unable to accept binary content
        at oracle.jcr.impl.ExceptionFactory.repository(ExceptionFactory.java:142)
        at oracle.jcr.impl.ExceptionFactory.otherwiseFailed(ExceptionFactory.java:98)
        at oracle.jcr.impl.xdb.XDBPersistenceManager.acceptBinaryStream(XDBPersistenceManager.java:1421)
        at oracle.jcr.impl.xdb.XDBResource.setContent(XDBResource.java:898)
        at oracle.jcr.impl.ContentNode.setProperty(ContentNode.java:472)
        at oracle.jcr.impl.OracleNode.setProperty(OracleNode.java:1439)
        at oracle.jcr.impl.OracleNode.setProperty(OracleNode.java:460)
        at UploadFile.main(UploadFile.java:54)
    Caused by: java.sql.SQLException: Invalid argument(s) in call
        at oracle.jdbc.driver.T2CConnection.newOutputStream(T2CConnection.java:2392)
        at oracle.sql.BLOB.setBinaryStream(BLOB.java:893)
        at oracle.jcr.impl.xdb.XDBPersistenceManager.acceptBinaryStream(XDBPersistenceManager.java:1393)
        ... 5 more
    ```

    Kind regards,
    Stefan

    Read the article

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues trying to figure out how to implement rel="next" and rel="prev" -- coupled with rel="canonical" -- with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)? Our current implementation does a full postback when you click Page 2, and adds something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2).

    If we only had one set of paginated content, then the implementation would certainly be more straightforward. Adding a second set of paginated content (i.e. Q&A) complicates things. Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page, you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. Such a page would likely solve a number of our problems via rel="canonical", but it is not an option for reasons that are out of scope for this discussion. Now if you click on page 2 of Product Reviews, the page reloads with /ca/en/my-product?reviewpage=2 as the URL. Given this scenario, here are my questions:

    1. On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)?
    2. Should the rel="prev" go to /ca/en/my-product?reviewpage=1, or should it go to /ca/en/my-product? The query-string version is really only reachable via the pager and shows the exact same content as the base page. The following two questions are closely related to this one.
    3. Should /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (the United States page with nothing in the query string), since the content is identical?
    4. Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2? So far as I can tell, it doesn't make sense to have multiple rel="next" implementations on the same page. I suspect that the pages with query-string values should have rel="next" and rel="prev" that only point to other pages with query strings, and not to the base page. The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product.

    Thoughts? I know this is a tough one -- that's why I brought it to this community. Thanks so much for your help in advance!

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories those products belong to (a product can be in one or more categories, referred to as "Product Categories"), and which stores the products are available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same, or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough row counts:

    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500

    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000

    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, selects all of the rows that match the store and category criteria, partitions them by manufacturer with a row number within each partition, selects from that where the row number within the manufacturer partition is less than n, and uses ROWCOUNT to clamp the total rows returned to n. The query looks something like this:

    ```sql
    SET ROWCOUNT 6

    SELECT p.Id, GeneralCategory_Id, Product_Id,
           ISNULL(m.DisplayName, m.Name) AS Vendor,
           MSRP, MemberPrice, FamilyImageName
    FROM (
        SELECT p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
               ctp.Store_id, Manufacturer_id,
               ROW_NUMBER() OVER (PARTITION BY Manufacturer_id
                                  ORDER BY NEWID()) AS 'VendorOrder',
               MSRP, MemberPrice, FamilyImageName
        FROM GeneralCategory gc
            INNER JOIN GeneralCategoriesToProductCategories gctpc
                ON gc.Id = gctpc.GeneralCategory_Id
            INNER JOIN ProductCategoryToProduct pctp
                ON gctpc.ProductCategory_Id = pctp.ProductCategory_Id
            INNER JOIN Product p ON p.Id = pctp.Product_Id
            INNER JOIN StoreToProduct ctp ON p.Id = ctp.Product_id
        WHERE gc.Id = @GeneralCategory
          AND ctp.Store_id = @StoreId
          AND p.Active = 1
          AND p.MemberPrice > 0
    ) p
        INNER JOIN Manufacturer m ON m.Id = p.Manufacturer_id
    WHERE VendorOrder <= 6
    ORDER BY NEWID()

    SET ROWCOUNT 0
    ```

    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. Two operations take up roughly 90% of the time:

    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about that at this point, compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50ms), but larger categories can take a few hundred ms, with the largest category (about 170k products) taking 3s. It seems I have two ways to go from this point:

    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index scan, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and still maintain the requirements (random products, with a good mix of manufacturers).
    2. Denormalize this data for the purpose of this query during the once-a-week import. However, I am unsure how to do this and maintain the requirements.

    Does anyone have any input on either of these items?

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car, take them to the local home store, and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many of us resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over Decimal(8,0) will likely offer performance benefits.
    It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!
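
    To make that closing example concrete (a sketch only; the table and constraint names are invented), the int column with its guard rail might look like this:

    ```sql
    -- int comfortably holds 0 through 99999999; the CHECK constraint pins
    -- the column to exactly the intended range.
    CREATE TABLE Measurement
    (
        MeasurementValue int NOT NULL
            CONSTRAINT chkMeasurementValue_Range
                CHECK (MeasurementValue BETWEEN 0 AND 99999999)
    );
    ```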

    Read the article

  • MVC2 DataAnnotations on ViewModel - ModelState.isValid Always Returns true

    - by ScottSEA
    I have an MVC2 application that uses the MVVM pattern. I am trying to use Data Annotations to validate form input. In my ThingsController I have two methods:

    ```csharp
    [HttpGet]
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult Details(ThingsViewModel tvm)
    {
        if (!ModelState.IsValid)
            return View(tvm);
        try
        {
            Query q = new Query(tvm.Query);
            ThingRepository repository = new ThingRepository(q);
            tvm.Airplanes = repository.All();
            return View(tvm);
        }
        catch (Exception)
        {
            return View();
        }
    }
    ```

    My Details.aspx view is strongly typed to the ThingsViewModel:

    ```aspx
    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
        Inherits="System.Web.Mvc.ViewPage<Config.Web.Models.ThingsViewModel>" %>
    ```

    The ViewModel is a class consisting of an IList of returned Thing objects and the Query string (which is submitted on the form), and has the Required data annotation:

    ```csharp
    public class ThingsViewModel
    {
        public IList<Thing> Things { get; set; }

        [Required(ErrorMessage="You must enter a query")]
        public string Query { get; set; }
    }
    ```

    When I run this and click the submit button on the form without entering a value, I get a YSOD with the following error:

    ```
    The model item passed into the dictionary is of type 'Config.Web.Models.ThingsViewModel',
    but this dictionary requires a model item of type
    'System.Collections.Generic.IEnumerable`1[Config.Domain.Entities.Thing]'.
    ```

    How can I get Data Annotations to work with a ViewModel? I cannot see what I'm missing or where I'm going wrong - the VM was working just fine before I started mucking around with validation.

    Read the article

  • Sharepoint Lists.GetListItems Method rowLimit problem

    - by Linda
    In SharePoint I am using the default view of a list. When I use the GetListItems method I can pass in the following:

    ```csharp
    public XmlNode GetListItems(
        string listName,
        string viewName,
        XmlNode query,
        XmlNode viewFields,
        string rowLimit,
        XmlNode queryOptions,
        string webID
    )
    ```

    I am passing in "" for the viewName and a rowLimit of 1000. The default view only returns 100 items, and 100 items are still being returned, not 1000. Can you use the rowLimit when not specifying a view? Is it possible to bring back 1000 items using the query instead? I do not really want to use a GUID for the viewName, as I would have to look it up for each list and perform a big refactor.

    Update: I am now using the GUID of the view, and my list still returns the incorrect number of items. I know the GUID is being used, because when I used an incorrect one it errored out. Any ideas what could be wrong? The code that is being sent to the service is as follows:

    ```xml
    <GetListItems xmlns='http://schemas.microsoft.com/sharepoint/soap/'>
      <listName>Media Outlet</listName>
      <viewName>{2822F0D9-A905-44B5-8913-34E6497F1AAF}</viewName>
      <query>
        <Query>
          <Where>
            <Eq>
              <FieldRef Name='Outlet_x0020_Type' />
              <Value Type='Lookup'></Value>
            </Eq>
          </Where>
          <OrderBy>
            <FieldRef Name='Title' />
          </OrderBy>
        </Query>
      </query>
      <ViewFields></ViewFields>
      <RowLimit>10000</RowLimit>
      <QueryOptions></QueryOptions>
      <webID></webID>
    </GetListItems>
    ```

    Read the article

  • Sphinx PHP search

    - by James
    I'm doing a Sphinx search but turning up some really weird results. Any help is appreciated. For example, if I type "50", I get:

    - 50 Cent
    - 50 Lions
    - 50 Foot Wave, etc.

    This is great, but when I search "50 Ce", I get:

    - Ryczace Dwudziestki
    - Spisek
    - Bernhard Gal
    - Cowabunga
    - Go-Go

    and other crazy results. Also, when I search for "50 Cent", the correct result is at the top, but with random results below it. Any ideas why? PHP code:

    ```php
    $query = $_GET['query'];

    if (!empty($query)) {
        $sphinx->SetMatchMode(SPH_MATCH_ALL);
        $sphinx->AddQuery($query, 'artists');
        $sphinx->AddQuery($query, 'variations');
        $sphinx->SetFilter('name', array(3));
        $sphinx->SetLimits(0, 10);

        $result = $sphinx->RunQueries();

        echo '<pre>';
        switch ($result) {
            case false:
                echo 'Query failed: ' . $sphinx->GetLastError() . "\n";
                break;
            default:
                if ($sphinx->GetLastWarning()) {
                    echo 'WARNING: ' . $sphinx->GetLastWarning() . "\n";
                }
                if (is_array($result[0]['matches']) && count($result[0]['matches'])) {
                    foreach ($result[0]['matches'] as $value => $info) {
                        $artist = artistDetails($value);
                        echo $artist['name'] . "\n";
                    }
                }
        }
    }
    ```

    Sphinx index and source:

    ```
    source artists
    {
        type      = mysql
        sql_host  = localhost
        sql_user  = user
        sql_pass  = pass
        sql_db    = db
        sql_port  = 3300

        sql_query = \
            SELECT \
                id, name \
            FROM artists;

        #UNIX_TIMESTAMP(time)
        #sql_attr_uint      = group_id
        #sql_attr_timestamp = time

        sql_query_info = SELECT id,name FROM artists WHERE id=$id
    }

    index artists
    {
        source       = artists
        path         = /var/sphinx/artists
        docinfo      = extern
        charset_type = utf-8
    }
    ```

    Read the article

  • PyDev and Django: PyDev breaking Django shell?

    - by Rosarch
    I've set up a new project and populated it with simple models. (Essentially I'm following the tutorial.) When I run python manage.py shell on the command line, it works fine:

    ```
    >python manage.py shell
    Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    (InteractiveConsole)
    >>> from mysite.myapp.models import School
    >>> School.objects.all()
    []
    ```

    Works great. Then I try to do the same thing in Eclipse (using a Django project composed of the same files): right click on the mysite project, then Django > Shell with Django environment. This is the output from the PyDev console:

    ```
    >>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
    C:\Python26\python.exe 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
    >>> from django.core import management;import mysite.settings as settings;management.setup_environ(settings)
    'path\\to\\mysite'
    >>> from mysite.myapp.models import School
    >>> School.objects.all()
    Traceback (most recent call last):
      File "<console>", line 1, in <module>
      File "C:\Python26\lib\site-packages\django\db\models\query.py", line 68, in __repr__
        data = list(self[:REPR_OUTPUT_SIZE + 1])
      File "C:\Python26\lib\site-packages\django\db\models\query.py", line 83, in __len__
        self._result_cache.extend(list(self._iter))
      File "C:\Python26\lib\site-packages\django\db\models\query.py", line 238, in iterator
        for row in self.query.results_iter():
      File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 287, in results_iter
        for rows in self.execute_sql(MULTI):
      File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 2368, in execute_sql
        cursor = self.connection.cursor()
      File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 81, in cursor
        cursor = self._cursor()
      File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 170, in _cursor
        self.connection = Database.connect(**kwargs)
    OperationalError: unable to open database file
    ```

    What am I doing wrong here?
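
    A frequent cause of this symptom is a relative DATABASE_NAME: it resolves against the current working directory, which differs between the command line and the Eclipse launch. A sketch of an absolute-path fix, in the pre-1.2 settings style that matches the traceback ('dev.db' is illustrative):

    ```python
    # settings.py -- anchor the SQLite file to the settings module's directory
    # so it resolves the same way no matter where the process is launched from.
    import os

    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')
    ```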

    Read the article
