Search Results

Search found 17240 results on 690 pages for 'query'.


  • For a SQL SELECT returning more than one row, how are the rows sorted when the Id is a GUID?

    - by Chris F
    I'm wondering how MSSQL orders the data returned from a query when the Id columns of the respective tables are all of type uniqueidentifier. I'm using the NHibernate GuidComb generator when creating all of the GUIDs and do things like:

        Sheet sheet = sheetRepository.Get(_SheetGuid_); // has many line items
        IList<SheetLineItem> lineItems = sheet.LineItems;

    I'm just trying to figure out how they'll be ordered when I do something like:

        foreach (SheetLineItem lineItem in lineItems)

    I can't seem to find a good article on the way GUIDs are compared by SQL Server when being ordered, if that's what's happening.
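
    A small illustration of the practical takeaway (SQL Server syntax; the table and the CreatedOn column below are hypothetical stand-ins for the mapped line-item table): without an explicit ORDER BY the row order of a SELECT is not guaranteed, and ordering by a uniqueidentifier column uses SQL Server's own GUID comparison rather than a simple left-to-right string compare, so relying on either is fragile.

        -- Hypothetical names; an explicit sort column avoids depending on GUID ordering.
        SELECT LineItemId, SheetId, Description
        FROM SheetLineItem
        WHERE SheetId = @sheetId          -- parameter supplied by the caller
        ORDER BY CreatedOn;               -- or a per-sheet LineNumber column

    In the NHibernate mapping, the same effect can come from declaring an order-by on the collection rather than depending on whatever order the rows happen to come back in.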

    Read the article

  • SharePoint FullTextQuery doesn't work

    - by user330309
    I have three scopes:

        scope A with rule include http://mywebapp/lists/myList
        scope B with rule include FileExtension aspx
        scope C with rule include http://mywebapp/mysitecollection2/

    and some managed properties mapped to columns from myList, ows_contenttype and ows_created (Property1, Property2, Property3).

        SELECT title, url, description, Property1, Property2, Property3 FROM Scope() WHERE scope = 'A'   -- this returns 15 results
        SELECT title, url, description, Property1, Property2, Property3 FROM Scope() WHERE scope = 'B'   -- this returns 50 results
        SELECT title, url, description, Property1, Property2, Property3 FROM Scope() WHERE scope = 'C'   -- this returns 10 results

    So why doesn't this query return 75 results? It returns 15 or so.

        SELECT title, url, description, Property1, Property2, Property3 FROM Scope() WHERE scope = 'A' OR scope = 'B' OR scope = 'C'

        FullTextSqlQuery sqlQuery = new FullTextSqlQuery(site);
        sqlQuery.ResultTypes = ResultType.RelevantResults;
        sqlQuery.QueryText = sql;
        sqlQuery.TrimDuplicates = true;
        sqlQuery.EnableStemming = true;
        ResultTableCollection results = sqlQuery.Execute();

    Read the article

  • ASP.NET handling button click event before OnPreInit

    - by Phillykins
    Hello, I have a data access layer, a business logic layer and a presentation layer (i.e. the pages themselves). I handle the OnPreInit event and populate the collections required for the page. All the data comes from a SQL Server database and I do not use caching. I handle a button click event to grab values from a form and insert a new object into the database. The problem is that by the time I handle the click event, the collections have already been populated, so the new item which has been inserted into the database has not been retrieved. What is the accepted solution to this? I could insert the new object directly into the collection and re-bind the GridView, but the SQL query selects only a set of objects and the new object could fall outside of this set. Thanks!

    Read the article

  • Streaming content from a (SharePoint) web part

    - by Mikko Rantanen
    How does one stream files, HTML or custom AJAX responses from web parts? Our current quick-and-very-dirty solution is to make the web part call the current page with certain query parameters, which the web part checks; instead of performing the normal load it writes the required things to the output and ends the response. This sounds bad, since SharePoint might load other web parts and execute their code before reaching our web part. The web part is configured with data source settings, which means the streaming context must be specific to the web part so it can acquire the correct data source settings.

    Read the article

  • jQuery get Function Not Returning Data

    - by senfo
    I'm trying to use the jQuery get function to render the results of an HTML page in a div on my page. The result of the get function appears to be successful, because the value of textStatus (in the following code block) is "success". The value of data, however, is always empty.

        $(document).ready(function() {
            $.get('http://www.google.com', function(data, textStatus) {
                $('#RSSContent').html(textStatus + ' ' + data);
            });
        });

    Any idea what I might be doing wrong? Note: the final code will query an RSS feed, which I plan on transforming and rendering as HTML in the RSSContent div. I simply used http://www.google.com for testing purposes.

    Read the article

  • Order database results by Bayesian rating

    - by One Trick Pony
    I'm not sure this is even possible, but I need a confirmation before doing it the "ugly" way :) So, the "results" are posts inside a database which are stored like this:

        the posts table, which contains all the important stuff, like the ID, the title and the content
        the post meta table, which contains additional post data, like the rating (this_rating) and the number of votes (this_num_votes); this data is stored in pairs, and the table has 3 columns: post ID / key / value

    It's basically the WordPress table structure. What I want is to pull out the highest rated posts, sorted based on this formula:

        br = ( (avg_num_votes * avg_rating) + (this_num_votes * this_rating) ) / (avg_num_votes + this_num_votes)

    which I stole from here. avg_num_votes and avg_rating are known variables (they get updated on each vote), so they don't need to be calculated. Can this be done with a MySQL query? Or do I need to get all the posts and do the sorting with PHP?
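
    A minimal sketch of how the ordering could stay in one MySQL query, assuming WordPress-style tables named posts and postmeta (post_id / meta_key / meta_value) and that the two averages are supplied by the application as user variables; all names and the sample values are placeholders:

        -- Placeholder averages; the application would substitute its maintained values.
        SET @avg_num_votes := 40, @avg_rating := 3.7;

        SELECT p.ID, p.post_title,
               ( (@avg_num_votes * @avg_rating)
                 + (CAST(v.meta_value AS DECIMAL(10,2)) * CAST(r.meta_value AS DECIMAL(10,2))) )
               / (@avg_num_votes + CAST(v.meta_value AS DECIMAL(10,2))) AS bayesian_rating
        FROM posts p
        JOIN postmeta r ON r.post_id = p.ID AND r.meta_key = 'this_rating'
        JOIN postmeta v ON v.post_id = p.ID AND v.meta_key = 'this_num_votes'
        ORDER BY bayesian_rating DESC
        LIMIT 10;

    The casts are there because meta values are typically stored as strings; sorting by the computed alias keeps the ranking entirely in the database instead of PHP.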

    Read the article

  • Single large vs. multiple small MySQL tables for storing options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure. I plan to capture these in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id   <will be referred to instead of the name>
        name
        value
        group

    Query:

        SELECT .. FROM options WHERE group = '15'

    Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of finding by the group would be easier to write and faster to execute. Or perhaps partitioning by the group or tenant? Please suggest. Thanks
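
    For scale, a sketch of how the single-table version is usually kept fast (MySQL; tenant_id and option_group are hypothetical column names, and the group column is renamed because GROUP is a reserved word): a composite index that matches the lookup means the total row count matters far less than it first appears.

        CREATE TABLE `options` (
            option_id    INT NOT NULL AUTO_INCREMENT,
            tenant_id    INT NOT NULL,
            option_group VARCHAR(50) NOT NULL,   -- e.g. 'gender', 'marital_status'
            name         VARCHAR(100) NOT NULL,
            value        VARCHAR(100) NOT NULL,
            PRIMARY KEY (option_id),
            KEY idx_tenant_group (tenant_id, option_group)
        );

        -- The typical lookup only touches the rows for one tenant and one group.
        SELECT option_id, name, value
        FROM `options`
        WHERE tenant_id = 42 AND option_group = 'gender';

    Per-group tables or partitioning tend to pay off only once a single tenant/group slice is itself very large.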

    Read the article

  • Overlay text on someone else's window - HUD

    - by taglius
    I am interested in writing an app that overlays a small heads-up display (HUD) over another application, in VB.NET. Does anyone have an example of this? I will need to enumerate all open windows to find the window that I want, and then overlay some text in a specific position on the window. If the user moves that window, my text will need to follow. (I will probably be painting the text in a loop over and over.) Edit: nobody answered my original query, so I added C# to the keywords to see if any of the gurus in that language might have an answer. Thanks.

    Read the article

  • MySQL GROUP_CONCAT + IN() = missing data :-(

    - by Andrew Heath
    Example:

    Table: box

        boxID  color
        01     red
        02     blue
        03     green

    Table: boxHas

        boxID  has
        01     apple
        01     pear
        01     grapes
        01     banana
        02     lime
        02     apple
        02     pear
        03     chihuahua
        03     nachos
        03     baby crocodile

    I want to query on the contents of each box, and return a table with each ID, color, and a column that concatenates the contents of each box, so I use:

        SELECT box.boxID, box.color,
               GROUP_CONCAT(DISTINCT boxHas.has SEPARATOR ", ") AS contents
        FROM box
        LEFT JOIN boxHas ON box.boxID = boxHas.boxID
        WHERE boxHas.has IN ('apple','pear')
        GROUP BY box.boxID
        ORDER BY box.boxID

    and I get the following table of results:

        boxID  color  contents
        01     red    apple, pear
        02     blue   apple, pear

    My question to you is: why isn't it listing ALL the has values in the contents column? Why is my WHERE statement also cropping my GROUP_CONCAT? The table I thought I was going to get is:

        boxID  color  contents
        01     red    apple, banana, grapes, pear
        02     blue   apple, lime, pear

    Although I want to limit my boxID results based upon the WHERE statement, I do not want to limit the contents field for valid boxes. :-/ Help?
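
    One way the query can be restructured (MySQL), keeping the filter's role of choosing which boxes appear while moving it out of GROUP_CONCAT's path: because WHERE runs before grouping, rows such as grapes and banana are gone before the concatenation happens, so the condition has to become a group-level test instead.

        SELECT box.boxID, box.color,
               GROUP_CONCAT(DISTINCT boxHas.has ORDER BY boxHas.has SEPARATOR ', ') AS contents
        FROM box
        LEFT JOIN boxHas ON box.boxID = boxHas.boxID
        GROUP BY box.boxID, box.color
        HAVING SUM(boxHas.has IN ('apple', 'pear')) > 0    -- box contains apple or pear
        ORDER BY box.boxID;

    Requiring both fruits instead of either would be HAVING SUM(boxHas.has = 'apple') > 0 AND SUM(boxHas.has = 'pear') > 0.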

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to have a duplicate of the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure: Perl for flat file and PHP for db? Additional info: say I want to find all the cities that have the pattern "cis" in their names. Which is better/faster, using regex or string functions? Please recommend a strategy. TIA
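
    For the city example, a rough sketch of the two database-side options (MySQL; the cities table is a hypothetical placeholder): for a plain substring like "cis", LIKE is enough, and REGEXP is only needed when real pattern syntax is involved. Neither form can use an ordinary index because the pattern is unanchored, so every query scans all 50,000 rows; at that size the scan is usually still fast once the data is cached in memory.

        -- Hypothetical table; both forms scan all rows because the pattern is unanchored.
        SELECT name FROM cities WHERE name LIKE '%cis%';
        SELECT name FROM cities WHERE name REGEXP 'cis';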

    Read the article

  • algorithm advice for finding maximum items within a time period

    - by darren
    Hi everyone. I have a database schema that is similar to the following:

        | User | Event       | Date       |
        |------|-------------|------------|
        | 111  | Walked dog  | 2009-10-1  |
        | 222  | Walked dog  | 2009-10-2  |
        | 333  | Fed Fish    | 2009-10-5  |
        | 222  | Did Laundry | 2009-10-6  |
        | 111  | Fed Fish    | 2009-10-7  |
        | 111  | Walked dog  | 2009-10-18 |
        | 222  | Walked dog  | 2009-10-19 |
        | 111  | Fed Fish    | 2009-10-21 |

    I would like to produce a query that returns the maximum number of times a user performs some action within a time period. For example, given a time period of 5 days, what is the maximum number of times user 111 walked the dog? The most obvious solution would be to start at some zero point and move forward each day, summing up 5-day periods along the way, then taking the maximum total out of all the 5-day windows. That approach seems incredibly costly, however. I would appreciate any suggestions you may have.
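
    A sketch of a set-based alternative (SQL; the table is assumed to be named events with the columns shown): treat each row as the start of a candidate 5-day window, count the matching events that fall inside it, and keep the largest count. This works because a maximal window can always be shifted so that it starts on one of the event dates.

        SELECT e1.user, e1.event, e1.date AS window_start, COUNT(*) AS times_in_window
        FROM events e1
        JOIN events e2
          ON  e2.user  = e1.user
          AND e2.event = e1.event
          AND e2.date >= e1.date
          AND e2.date <  e1.date + INTERVAL 5 DAY    -- MySQL date arithmetic; adjust per platform
        WHERE e1.user = 111 AND e1.event = 'Walked dog'
        GROUP BY e1.user, e1.event, e1.date
        ORDER BY times_in_window DESC
        LIMIT 1;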

    Read the article

  • database vs flat file, which is a faster structure for regex matching with many simultaneous request

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching using regex against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to have a duplicate of the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure: Perl for flat file and PHP for db? TIA

    Read the article

  • browser back acts on nested iframe before the page itself - is there a way to avoid it??

    - by kfiroo
    Hi, I have a page with dynamic data loaded by some AJAX and lots of JavaScript. The page contains a list from which the user can choose, and each selected value loads new data into the page. One of these data items is a URL provided to an iframe. I use the jQuery BBQ: Back Button & Query Library to simulate the browser-back behavior. All works well besides the fact that when I click the back button for the first time, the iframe goes back to its previous location, and then I need to click back again to make the page itself go back. Is there a way to disable the iframe's back behavior?

    Read the article

  • How can I change or remove HttpRequest input arguments in an HttpModule

    - by Eric Gunn
    Is it possible to change or remove HTTP request form inputs in an HttpModule? My goal is to create a security IHttpModule that will check the request for reasonable values, such as limits on acceptable input and query parameter length, or use the AntiXSS Sanitizer to remove threats, log potential hack attempts, etc. before a request is passed on to a processor. Because this is a cross-cutting concern, I'd prefer to find a solution that applies to all requests and affects all the ways request values could be accessed: Request.Form, Action(model), Action(FormCollection), HttpContext.Current.Request.Form, etc. I'm using MVC and have considered creating custom model binders to clean the data before creating the model instance. But that would be application specific, require remembering to register every model binder, and only apply to Action(model).

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
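
    A minimal reconciliation sketch for a fact table (generic SQL; every table and column name here is a hypothetical placeholder): put the same aggregate over the cleansed source and over the fact table side by side, so the subject matter expert only has to check whether the two rows agree. The same pattern can be grouped by a date or dimension key to narrow down where any difference comes from.

        SELECT 'cleansed' AS side, COUNT(*) AS row_count, SUM(amount) AS total_amount
        FROM cleansed.orders
        UNION ALL
        SELECT 'mart' AS side, COUNT(*) AS row_count, SUM(amount) AS total_amount
        FROM mart.fact_orders;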

    Read the article

  • MySQL GROUP_CONCAT

    - by user301766
    I want to list all users with their corresponding user classes. Here are simplified versions of my tables:

        CREATE TABLE users (
            user_id INT NOT NULL AUTO_INCREMENT,
            user_class VARCHAR(100),
            PRIMARY KEY (user_id)
        );
        INSERT INTO users VALUES (1, '1'), (2, '2'), (3, '1,2');

        CREATE TABLE classes (
            class_id INT NOT NULL AUTO_INCREMENT,
            class_name VARCHAR(100),
            PRIMARY KEY (class_id)
        );
        INSERT INTO classes VALUES (1, 'Class 1'), (2, 'Class 2');

    And this is the query statement I am trying to use, but it is only returning the first matching user class and not a concatenated list as hoped:

        SELECT user_id,
               GROUP_CONCAT(DISTINCT class_name SEPARATOR ",") AS class_name
        FROM users, classes
        WHERE user_class IN (class_id)
        GROUP BY user_id;

    Actual output:

        +---------+------------+
        | user_id | class_name |
        +---------+------------+
        | 1       | Class 1    |
        | 2       | Class 2    |
        | 3       | Class 1    |
        +---------+------------+

    Wanted output:

        +---------+------------------+
        | user_id | class_name       |
        +---------+------------------+
        | 1       | Class 1          |
        | 2       | Class 2          |
        | 3       | Class 1, Class 2 |
        +---------+------------------+

    Thanks in advance
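
    One possible fix, sketched in MySQL: IN (class_id) is just an equality test against the single class_id value, and when the string '1,2' is compared to a number it converts to 1, so only the first element ever matches. FIND_IN_SET treats user_class as a comma-separated list and joins on every element:

        SELECT u.user_id,
               GROUP_CONCAT(c.class_name ORDER BY c.class_id SEPARATOR ', ') AS class_name
        FROM users u
        JOIN classes c ON FIND_IN_SET(c.class_id, u.user_class) > 0
        GROUP BY u.user_id;

    Longer term, a user_classes junction table (user_id, class_id) avoids string-packed keys entirely and lets the join use indexes.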

    Read the article

  • performance of parameterized queries for different db's

    - by tuinstoel
    A lot of people know that it is important to use parameterized queries to prevent SQL injection attacks. Parameterized queries are also much faster in SQLite and Oracle when doing online transaction processing, because the query optimizer doesn't have to reparse every parameterized SQL statement before executing it. I've seen SQLite become 3 times faster when you use parameterized queries, and Oracle can become 10 times faster with parameterized queries in some extreme cases with a lot of concurrency. How about other DBs like MySQL, MS SQL, DB2 and PostgreSQL? Is there an equal difference in performance between parameterized queries and literal queries?

    Read the article

  • Easy way to parse a URL in C++ cross-platform?

    - by Andrew Bucknell
    I need to parse a URL to get the protocol, host, path and query in an application I am writing in C++. The application is intended to be cross-platform. I'm surprised I can't find anything that does this in the Boost or POCO libraries. Is it somewhere obvious I'm not looking? Any suggestions on appropriate open source libs? Or is this something I just have to do myself? It's not super complicated, but it seems such a common task that I am surprised there isn't a common solution.

    Read the article

  • MySQL/SQL: Update with correlated subquery from the updated table itself

    - by Roee Adler
    I have a generic question that I will try to explain using an example. Say I have a table with the fields "id", "name", "category", "appearances" and "ratio". The idea is that I have several items, each related to a single category, and each "appears" several times. The ratio field should contain the percentage of each item's appearances out of the total number of appearances of items in its category. In pseudo-code, what I need is the following:

        For each category, find the total sum of appearances for the items related to it.
        For example it can be done with: SELECT SUM(appearances) FROM table GROUP BY category
        For each item, set the ratio value as the item's appearances divided by the sum found for its category above.

    Now I'm trying to achieve this with a single update query, but can't seem to do it. What I thought I should do is:

        UPDATE Table T
        SET T.ratio = T.appearances / (
            SELECT SUM(S.appearances)
            FROM Table S
            WHERE S.id = T.id
        )

    But MySQL does not accept the alias T in the update column, and I did not find other ways of achieving this. Any ideas?
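
    A sketch of one form MySQL will accept (the table is assumed to be named items here, since the real name isn't given): MySQL rejects a correlated subquery that reads the table being updated, but a multi-table UPDATE joined to a derived table of per-category totals works, because the derived table is materialized first.

        UPDATE items t
        JOIN (
            SELECT category, SUM(appearances) AS total_appearances
            FROM items
            GROUP BY category
        ) s ON s.category = t.category
        SET t.ratio = t.appearances / s.total_appearances;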

    Read the article

  • jQuery: Preventing list scroll?

    - by Legend
    I have this scenario: using an AJAX query I fetch some data items and push them into a ul element as li items, with $("ulele").append(new_li_item). I wrote my own custom scroll for this ul element using the following whenever an event is detected: $("ulele").animate({scrollTop: '+=' + 200}, 'slow'); The problem is that when I fire that event and the list scrolls due to the animate function above, I want to keep it stable for at least a few seconds. When it scrolls down, elements are still being pushed, so the list keeps scrolling no matter what. Is there a way I can pause this from happening without really stopping the activity of pushing elements into the ul list?

    Read the article

  • SQL syntax problem (multiple selects)

    - by user279521
    I am having problems retrieving accurate data values with my stored proc query below:

        CREATE PROCEDURE usp_InvoiceErrorLog
            @RecID int
        AS
            DECLARE @ErrorString as varchar(1000), @ErrorCode as int;

            SELECT @ErrorCode = ErrorCode
            FROM tbl_AcctRecv_WebRpt
            WHERE RecID = @RecID;

            IF NOT (@ErrorCode = NULL)
            BEGIN
                SELECT @ErrorString = ErrorDesc
                FROM tbl_ErrDesc
                WHERE ErrorCode = @ErrorCode
            END

            SELECT RecID, VendorNum, VendorName, InvNum, InvTotal,
                   (SELECT CONVERT(VARCHAR(11), InvDate, 106) AS [DD MON YYYY]) AS InvDate,
                   TicketRequestor, ErrorCode,
                   @ErrorString AS ErrorDesc
            FROM tbl_AcctRecv_WebRpt
            WHERE RecID = @RecID

    The ErrorDesc column (in the final select statement at the bottom) returns a NULL value, when it should return valid string data. Any ideas?
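
    A likely explanation, sketched in T-SQL: with the default ANSI_NULLS setting, @ErrorCode = NULL never evaluates to true, so IF NOT (@ErrorCode = NULL) never enters the block and @ErrorString is never assigned, even when an error code exists. Testing the variable with IS NOT NULL behaves as intended:

        IF @ErrorCode IS NOT NULL
        BEGIN
            SELECT @ErrorString = ErrorDesc
            FROM tbl_ErrDesc
            WHERE ErrorCode = @ErrorCode;
        END

    The lookup could also be folded into the final SELECT with a LEFT JOIN to tbl_ErrDesc, which removes the variable entirely.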

    Read the article

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names and types and all the data fine, but I lose the primary key, identity specifications and all the default values that were set in the source table. So now I have to set all the default values for each column again manually. Is there any way to get the default values with the import, or even afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...
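
    On the "afterwards with a query" side, a sketch for SQL Server: the default definitions on the source table can be read from the catalog views and then scripted onto the copy (dbo.SourceTable is a placeholder for the real table name).

        SELECT c.name  AS column_name,
               dc.name AS constraint_name,
               dc.definition
        FROM sys.default_constraints AS dc
        JOIN sys.columns AS c
          ON c.object_id = dc.parent_object_id
         AND c.column_id = dc.parent_column_id
        WHERE dc.parent_object_id = OBJECT_ID('dbo.SourceTable');

    The other common route is to script the source table in Management Studio (Script Table as > CREATE To), run that script on the target so the keys, identity and defaults exist up front, and only then import the data.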

    Read the article

  • PHP while() stops looping

    - by Axel
    Hi, I have a PHP loop which displays only one record even if there are hundreds. Here is the code:

        <?php
        $result1 = mysql_query("SELECT * FROM posts") or die(mysql_error());
        $numexem = mysql_num_rows($result1);
        $s = "0";
        while ($s < $numexem) {
            $postid = mysql_result($result1, $s, "id");
            echo "Post id:" . $postid;
            $result2 = mysql_query("SELECT * FROM pics WHERE postid='$postid'") or die(mysql_error());
            $rows = mysql_fetch_array($result2) or die(mysql_error());
            $pnum = mysql_num_rows($result2);
            echo " There is " . $pnum . " Attached Pictures";
            $s++;
        }
        ?>

    I'm wondering if the loop stops because there is another SQL query inside it, but I don't think so. Thanks
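
    For what it's worth, the "or die(mysql_error())" after mysql_fetch_array() ends the script as soon as a post has no attached pictures, because fetch returns false on an empty result; that would explain the loop stopping early. Independently of that, the per-post query can be replaced by a single aggregate (MySQL, using the table names from the code above):

        -- One pass instead of one query per post.
        SELECT p.id, COUNT(pic.postid) AS pic_count
        FROM posts p
        LEFT JOIN pics pic ON pic.postid = p.id
        GROUP BY p.id;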

    Read the article

  • SQL Server multithreaded locking with TABLOCKX

    - by WilfriedVS
    I have a table "tbluser" with 2 fields: userid = integer (autoincrement) user = nvarchar(100) I have a multithreaded/multi server application that uses this table. I want to accomplish the following: Guarantee that field user is unique in my table Guarantee that combination userid/user is unique in each server's memory I have the following stored procedure: CREATE PROCEDURE uniqueuser @user nvarchar(100) AS BEGIN BEGIN TRAN DECLARE @userID int SET nocount ON SET @userID = (SELECT @userID FROM tbluser WITH (TABLOCKX) WHERE [user] = @user) IF @userID <> '' BEGIN SELECT userID = @userID END ELSE BEGIN INSERT INTO tbluser([user]) VALUES (@user) SELECT userID = SCOPE_IDENTITY() END COMMIT TRAN END Basically the application calls the stored procedure and provides a username as parameter. The stored procedure either gets the userid or insert the user if it is a new user. Am I correct to assume that the table is locked (only one server can insert/query)?

    Read the article

  • Entity Framework: how to specify parameter type in generated SQL (SQL Server 2005), nvarchar vs varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'. It is defined in the storage model as:

        <Property Name="Account" Type="char" Nullable="false" MaxLength="6" />

    And in the conceptual model as:

        <Property Name="Account" Type="String" Nullable="false" />

    When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table is of data type char(6). When this is executed there is a large performance hit because of the data type difference. Account is an index on the table, and instead of using the index I believe an index scan is done. Anyone know how to force EF to not use Unicode for the parameter and use Varchar(6) instead?

    Read the article
