Search Results

Search found 28985 results on 1160 pages for 'sql training'.


  • Why are joins bad when considering scalability?

    - by acidzombie24
    Why are joins bad or 'slow'? I know I've heard this more than once. I found this quote: "The problem is joins are relatively slow, especially over very large data sets, and if they are slow your website is slow. It takes a long time to get all those separate bits of information off disk and put them all together again." (source) I always thought they were fast, especially when looking up a PK. Why are they 'slow'?


  • Parse both symbols . and , as decimal digits delimiter in ASP.NET

    - by abatishchev
    I'm writing a banking system and my customer wants to support both the Russian and American numeric standards for the decimal delimiter, i.e. . and , respectively. Right now only , works properly, perhaps because of the web server's OS format (Russian is set). A string like 2000.00 throws a FormatException: "Input string was not in a correct format." How can I fix that? Are there any other ideas besides String.Replace('.', ',') in the FormView.ItemInserting event?


  • User has many computers, computers have many attributes in different tables, best way to JOIN?

    - by krismeld
    I have a table for users:

        USERS:
        ID | NAME  |
        ----------------
        1  | JOHN  |
        2  | STEVE |

    a table for computers:

        COMPUTERS:
        ID | USER_ID |
        ------------------
        13 | 1       |
        14 | 1       |

    a table for processors:

        PROCESSORS:
        ID | NAME             |
        ---------------------------
        27 | PROCESSOR TYPE 1 |
        28 | PROCESSOR TYPE 2 |

    and a table for harddrives:

        HARDDRIVES:
        ID | NAME              |
        ---------------------------|
        35 | HARDDRIVE TYPE 25 |
        36 | HARDDRIVE TYPE 90 |

    Each computer can have many attributes from the different attribute tables (processors, harddrives etc), so I have intersection tables like this, to link the attributes to the computers:

        COMPUTER_PROCESSORS:
        C_ID | P_ID |
        --------------|
        13   | 27   |
        13   | 28   |
        14   | 27   |

        COMPUTER_HARDDRIVES:
        C_ID | H_ID |
        --------------|
        13   | 35   |

    So user JOHN, with id 1, owns computers 13 and 14. Computer 13 has processors 27 and 28, and computer 13 has harddrive 35. Computer 14 has processor 27 and no harddrive. Given a user's id, I would like to retrieve a list of that user's computers with each computer's attributes. I have figured out a query that gives me somewhat of a result:

        SELECT computers.id, processors.id AS p_id, processors.name AS p_name,
               harddrives.id AS h_id, harddrives.name AS h_name
        FROM computers
        JOIN computer_processors ON (computer_processors.c_id = computers.id)
        JOIN processors ON (processors.id = computer_processors.p_id)
        JOIN computer_harddrives ON (computer_harddrives.c_id = computers.id)
        JOIN harddrives ON (harddrives.id = computer_harddrives.h_id)
        WHERE computers.user_id = 1

    Result:

        ID | P_ID | P_NAME           | H_ID | H_NAME            |
        -----------------------------------------------------------
        13 | 27   | PROCESSOR TYPE 1 | 35   | HARDDRIVE TYPE 25 |
        13 | 28   | PROCESSOR TYPE 2 | 35   | HARDDRIVE TYPE 25 |

    But this has several problems... Computer 14 doesn't show up, because it has no harddrive. Can I somehow make an OUTER JOIN to make sure that all computers show up, even if there are some attributes they don't have? Computer 13 shows up twice, with the same harddrive listed for both rows. When more attributes are added to a computer (like 3 blocks of RAM), the number of rows returned for that computer gets pretty big, and it makes it hard to sort the result out in application code. Can I somehow make a query that groups the two returned rows together? Or a query that returns NULL in the h_name column in the second row, so that all values returned are unique?

    EDIT: What I would like returned is something like this:

        ID | P_ID | P_NAME           | H_ID | H_NAME            |
        -----------------------------------------------------------
        13 | 27   | PROCESSOR TYPE 1 | 35   | HARDDRIVE TYPE 25 |
        13 | 28   | PROCESSOR TYPE 2 | 35   | NULL              |
        14 | 27   | PROCESSOR TYPE 1 | NULL | NULL              |

    Or whatever result makes it easy to turn it into an array like this:

        [13] =>
            [P_NAME] =>
                [0] => PROCESSOR TYPE 1
                [1] => PROCESSOR TYPE 2
            [H_NAME] =>
                [0] => HARDDRIVE TYPE 25
        [14] =>
            [P_NAME] =>
                [0] => PROCESSOR TYPE 1
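
    A minimal sketch of the OUTER JOIN idea raised above, using only the table and column names from the question: LEFT JOINs keep computer 14 even though it has no harddrive row. It does not by itself collapse the duplicate harddrive value for computer 13; that is usually done in application code, or by fetching processors and harddrives in separate queries.

        -- Sketch: LEFT JOINs so computers without a given attribute still appear.
        SELECT c.id,
               p.id   AS p_id,
               p.name AS p_name,
               h.id   AS h_id,
               h.name AS h_name
        FROM computers AS c
        LEFT JOIN computer_processors AS cp ON cp.c_id = c.id
        LEFT JOIN processors          AS p  ON p.id   = cp.p_id
        LEFT JOIN computer_harddrives AS ch ON ch.c_id = c.id
        LEFT JOIN harddrives          AS h  ON h.id   = ch.h_id
        WHERE c.user_id = 1
        ORDER BY c.id, p.id, h.id;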


  • Parse filename, insert to SQL

    - by jakesankey
    Thanks to Code Poet, I am now working off of this code to parse all .txt files in a directory and store them in a database. I need a bit more help though... The file names look like R303717COMP_148A2075_20100520.txt (the middle section is unique per file). I would like to add something to the code so that it parses out the R303717COMP part and puts it in the left column of the database, such as:

        R303717COMP data data data
        R303717COMP data data data
        R303717COMP data data data
        etc

    (this is not the only R number we have). Lastly, I would like it to store each full file name in another table that gets checked, so that no file is processed twice. Any help is appreciated.

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;

        namespace CSVImport
        {
            internal class Program
            {
                private static void Main(string[] args)
                {
                    using (SQLiteConnection con = new SQLiteConnection("data source=data.db3"))
                    {
                        if (!File.Exists("data.db3"))
                        {
                            con.Open();
                            using (SQLiteCommand cmd = con.CreateCommand())
                            {
                                cmd.CommandText = @"
                                    CREATE TABLE [Import] (
                                        [RowId] integer PRIMARY KEY AUTOINCREMENT NOT NULL,
                                        [FeatType] varchar, [FeatName] varchar, [Value] varchar,
                                        [Actual] decimal, [Nominal] decimal, [Dev] decimal,
                                        [TolMin] decimal, [TolPlus] decimal, [OutOfTol] decimal,
                                        [Comment] nvarchar);";
                                cmd.ExecuteNonQuery();
                            }
                            con.Close();
                        }
                        con.Open();
                        using (SQLiteCommand insertCommand = con.CreateCommand())
                        {
                            insertCommand.CommandText = @"
                                INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal,
                                                    Dev, TolMin, TolPlus, OutOfTol, Comment)
                                VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal,
                                        @Dev, @TolMin, @TolPlus, @OutOfTol, @Comment);";
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String));

                            string[] files = Directory.GetFiles(Environment.CurrentDirectory, "TextFile*.*");
                            foreach (string file in files)
                            {
                                string[] lines = File.ReadAllLines(file);
                                bool parse = false;
                                foreach (string tmpLine in lines)
                                {
                                    string line = tmpLine.Trim();
                                    if (!parse && line.StartsWith("Feat. Type,"))
                                    {
                                        parse = true;
                                        continue;
                                    }
                                    if (!parse || string.IsNullOrEmpty(line))
                                    {
                                        continue;
                                    }
                                    foreach (SQLiteParameter parameter in insertCommand.Parameters)
                                    {
                                        parameter.Value = null;
                                    }
                                    string[] values = line.Split(new[] { ',' });
                                    for (int i = 0; i < values.Length - 1; i++)
                                    {
                                        SQLiteParameter param = insertCommand.Parameters[i];
                                        if (param.DbType == DbType.Decimal)
                                        {
                                            decimal value;
                                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                        }
                                        else
                                        {
                                            param.Value = values[i];
                                        }
                                    }
                                    insertCommand.ExecuteNonQuery();
                                }
                            }
                        }
                        con.Close();
                    }
                }
            }
        }
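
    One way to handle the schema side of the two requests, sketched in SQLite DDL. The RNumber column, the ProcessedFiles table and the @FileName parameter are hypothetical names invented for illustration; the parsing of the R number out of the file name itself would stay in the C# code.

        -- Hypothetical schema additions: a column for the parsed R number
        -- and a table that records which files have already been imported.
        ALTER TABLE Import ADD COLUMN RNumber varchar;

        CREATE TABLE IF NOT EXISTS ProcessedFiles (
            FileName varchar PRIMARY KEY
        );

        -- Before importing a file, check whether it has been seen:
        SELECT COUNT(*) FROM ProcessedFiles WHERE FileName = @FileName;

        -- After a successful import, remember it:
        INSERT INTO ProcessedFiles (FileName) VALUES (@FileName);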


  • Indexes and multi column primary keys

    - by David Jenings
    Went searching and didn't find the answer to this specific noob question. My apologies if I missed it. In a MySQL database I have a table with the following primary key:

        PRIMARY KEY id (invoice, item)

    In my application I will also frequently be selecting on "item" by itself and less frequently on only "invoice". I'm assuming I would benefit from indexes on these columns. MySQL does not complain when I define the following:

        INDEX (invoice),
        INDEX (item),
        PRIMARY KEY id (invoice, item)

    But I don't see any evidence (using DESCRIBE -- the only way I know how to look) that separate indexes have been established for these two columns. So the question is, are the columns that make up a primary key automatically indexed individually? Also, is there a better way than DESCRIBE to explore the structure of my table?
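
    For inspecting what indexes actually exist, SHOW INDEX (or SHOW CREATE TABLE) is more informative than DESCRIBE. A short sketch, assuming a hypothetical table name of invoice_items; the general MySQL rule is that a composite key can serve queries on its leftmost prefix:

        -- List every index on the table, including the composite primary key.
        SHOW INDEX FROM invoice_items;

        -- Full table definition, indexes included.
        SHOW CREATE TABLE invoice_items;

        -- PRIMARY KEY (invoice, item) already covers lookups on invoice alone
        -- (leftmost prefix), so typically only item needs its own index:
        ALTER TABLE invoice_items ADD INDEX idx_item (item);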


  • What are the Pros and Cons of Cascading delete and updates?

    - by Misnomer
    Hi, Maybe this is sort of a naive question... but I think that we should always have cascading deletes and updates. But I wanted to know: are there problems with it, and when should we not do it? I really can't think of a case right now where you would not want a cascade delete, but I am sure there is one... and what about updates, should they always be done? So can anyone please list out the pros and cons of cascading deletes and updates? Thanks.
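
    For reference, a minimal sketch of how the two behaviours are declared; the orders/order_items tables here are hypothetical examples, not from the question:

        -- Hypothetical example: deleting an order removes its items, and
        -- renumbering an order id propagates to order_items automatically.
        CREATE TABLE orders (
            id INT PRIMARY KEY
        );

        CREATE TABLE order_items (
            id       INT PRIMARY KEY,
            order_id INT NOT NULL,
            FOREIGN KEY (order_id) REFERENCES orders (id)
                ON DELETE CASCADE
                ON UPDATE CASCADE
        );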


  • A case-insensitive related implementation problem

    - by Robert
    Hi All, I am going through a final refinement posted by the client, which needs me to do a case-insensitive query. I will basically walk through how this simple program works. First of all, in my Java class, I did a fairly simple webpage parsing:

        title = (String) results.get("title");
        doc = docBuilder.parse("http://" + server + ":" + port
                + "/exist/rest/db/wb/xql/media_lookup.xql?" + "&title=" + title);

    This Java statement references an XQuery file "media_lookup.xql" which is stored on localhost, and the only parameter we are passing is the string "title". Secondly, let's take a look at that XQuery file:

        $title := request:get-parameter('title', ""),
        $mediaNodes := doc('/db/wb/portfolio/media_data.xml'),
        $query := $mediaNodes//media[contains(title, $title)],

    Then it will evaluate that query. This XQuery will get the "title" parameter that is passed from our Java class, and query the "media_data" xml file stored in the database, which contains a bunch of media nodes with a 'title' element node. As you may expect, this simple query will just match those media nodes whose 'title' element contains a substring of whatever the value of string 'title' is. So if our 'title' is "Chi", it will return media nodes whose title may be "Chicago" or "Chicken". The refinement request posted by the client is that there should be NO case-sensitivity. The very intuitive way is to modify the XQuery statement by using a lower-case function in it, like:

        $query := $mediaNodes//media[contains(lower-case(title/text()), lower-case($title))],

    However, here the problem comes: this modified query runs my machine into memory overflow. Since my "media_data.xml" is quite huge and contains thousands of millions of media nodes, I assume the lower-case() function will run on each of the entries, thus causing the machine to crash. I've talked with some experienced XQuery programmers, and they think I should use an index to solve this problem, and I will definitely research into that. But before that, I am just posting this problem here to get other ideas or any suggestions. Do you think any other way may help? For example, could I tweak the Java parse statement to achieve the case-insensitivity? I think I've seen some people do some string concatenation by using "contains." in Java before passing it to the server. Any idea or help is welcomed, thanks in advance.


  • UUID collision risk using different algorithms

    - by Diego Jancic
    Hi Guys, I have a database where 2 (or maybe 3 or 4) different applications are inserting information. The new information has IDs of the GUID/UUID type, but each application is using a different algorithm to generate the IDs. For example, one is using NHibernate's "guid.comb", another is using SQL Server's NEWID(), another might want to use .NET's Guid.NewGuid() implementation. Is there an above-normal risk of ID collisions or duplicates? Thanks!


  • Help! How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    Ok, I have a table in my MSSQL database that stores comments. My desire is to be able to page through the records using [Back], [Next], page number and [Last] buttons in my datalist. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with:

        @PageIndex INT,
        @PageSize INT,
        @postid int
        AS
        SET NOCOUNT ON
        begin
        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE (comments.postid = @postid))
        SELECT tmp.*
        FROM tmp
        WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex * @PageSize
        end
        RETURN

    Now everything works fine and I have been able to implement [Next] and [Back] buttons in my datalist pager. Now I need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words, I want to return the total number of rows matched by my first select statement, i.e.

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE (comments.postid = @postid))
        set @TotalRows = @@rowcount

    @@rowcount doesn't work here and raises an error. I also can't get COUNT(*) to work either. Is there another way to get the total number of rows, or is my approach doomed?
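
    One hedged sketch of a workaround, reusing the comments table from the question: a windowed COUNT(*) OVER () inside the CTE repeats the total matching row count on every returned row (supported in SQL Server 2005 and later), so the application can read it from the first record of any page.

        -- Sketch: TotalRows carries the full count alongside each paged row.
        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*)     OVER ()                        AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;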


  • MySQL query using multiple criteria from checkboxes

    - by jungle_programmer
    I would like to do a multiple-criteria search query using multiple checkboxes which represent particular textboxes. How do I create a MySQL query which will filter on the checked and unchecked checkboxes (probably using if statements)? The query should be able to filter on the checked and unchecked boxes and combine them using the AND condition. Thanks
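
    A hedged sketch of one common pattern: pass each checkbox state in as a flag and make every criterion optional, so an unchecked box simply drops out of the filter while all checked criteria are combined with AND. The products table, columns and parameter names below are invented for illustration only.

        -- Each (@flag = 0 OR ...) pair is ignored when its checkbox is unchecked.
        SELECT *
        FROM products
        WHERE (@name_checked  = 0 OR name  LIKE CONCAT('%', @name_text, '%'))
          AND (@city_checked  = 0 OR city  = @city_text)
          AND (@price_checked = 0 OR price <= @max_price);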


  • association of more than one model to a listview

    - by Veer
    I have 3 tables in my database. Each table has 3 fields, excluding the ID field, of which 2 fields are of type nvarchar. None of the tables are related. The ListView in my application helps the user search my database, the search being incremental. The search includes the nvarchar fields of the 3 tables, i.e. 6 fields in total. E.g.:

        PhoneBook: Name, PhoneNo
        Notes: Title, Content
        Bookmarks: Name, url

    I have the models generated for the 3 tables. Now the ListBox should display the Ph.Name, Title and the Bo.Name fields, i.e. it should be bound to them. But they are from different models. I also should be able to perform CRUD operations on the item searched. How would I do that? P.S: Separate ViewModels are created for each Model, which are used by their respective views for handling those tables individually. But this is an integrated view where the user should be able to search everything. Also, please could somebody suggest a better title for this question :)


  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using postgresql and psycopg2. Everything works great, except when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );
        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );
        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:

        - List of all hosts with the number of items per host, ordered from highest to lowest count
        - List of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    Problem is: the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and it will likely grow significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I was trying different indexes, but seeing as the count is computed, it doesn't seem possible to utilize an index for this. I have read that count()'ing in postgresql is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use Postgresql 9.2.
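
    A hedged sketch of the precomputation route mentioned at the end: a materialized view with an index on the count column. Materialized views need PostgreSQL 9.3+, so on 9.2 the same idea works with an ordinary table rebuilt by the hourly job; the view and index names below are invented.

        -- Sketch: precompute per-item host counts so the top-10 query becomes
        -- an ordered index scan instead of an aggregate plus sort.
        CREATE MATERIALIZED VIEW item_host_counts AS
            SELECT i.id, i.description, COUNT(hi.id) AS host_count
            FROM items AS i
            LEFT JOIN host_item AS hi ON hi.item = i.id
            GROUP BY i.id, i.description;

        CREATE INDEX item_host_counts_count_idx ON item_host_counts (host_count DESC);

        -- Refresh periodically (e.g. from the hourly job):
        REFRESH MATERIALIZED VIEW item_host_counts;

        -- The report query is now a cheap ordered scan:
        SELECT id, description, host_count
        FROM item_host_counts
        ORDER BY host_count DESC
        LIMIT 10;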


  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (related in that each contains data for a different social network). Each has a datetime field called dated. I'm already grouping by hour, as you can see below (this one is for linked_in):

        SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_linked_in_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_facebook_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_twitter_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        GROUP BY hour
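
    A hedged sketch of one way to combine them, using the table names above: pull the dated values from all three tables with UNION ALL in a derived table, then group the combined set by hour once in the outer query. The literal date stands in for the '".$start_date."' PHP interpolation.

        -- Sketch: group once, over the union of all three networks.
        SELECT date_format(t.dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS total
        FROM (
            SELECT dated FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '2013-01-01'
            UNION ALL
            SELECT dated FROM upd8r_facebook_accts  WHERE CAST(dated AS DATE) = '2013-01-01'
            UNION ALL
            SELECT dated FROM upd8r_twitter_accts   WHERE CAST(dated AS DATE) = '2013-01-01'
        ) AS t
        GROUP BY hour;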


  • How to run an .exe application on another computer?

    - by ADAM
    I am working on a C# application in Visual Studio 2013. When I run the .exe file from my computer, the application runs very well and all the features work. When I tried to run the .exe on another computer, the database side didn't work and the connection to the database couldn't be opened. The SqlConnection is constructed as follows:

        SqlConnection cn = new SqlConnection("Data Source=ADAM-PC;Initial Catalog=integrationdatabase;Integrated Security=True");

    I don't know how to change the data source so that the connection to the database can be established on another computer. How can I solve this problem?


  • How to retrieve identity column value after insert using LINQ

    - by Hobey
    Could any of you please show me how to complete the following task?

        // Prepare object to be saved
        // Note that MasterTable has MasterTableId as a Primary Key and it is an identity column
        MasterTable masterTable = new MasterTable();
        masterTable.Column1 = "Column 1 Value";
        masterTable.Column2 = 111;

        // Instantiate DataContext
        DataContext myDataContext = new DataContext("<<ConnectionString>>");

        // Save the record
        myDataContext.MasterTables.InsertOnSubmit(masterTable);
        myDataContext.SubmitChanges();

        // ?QUESTION?
        // Now I need to retrieve the value of MasterTableId for the record just inserted above.

    Kind Regards


  • How to get comma-separated values in Linq?

    - by Mujtaba Hassan
    I have the query below:

        var users = (from a in dc.UserRoles
                     join u in dc.Users on a.intUserId equals u.ID
                     join r in dc.Roles on a.intRoleId equals r.ID
                     where r.intClientId == clientID
                     select new UserRoleDetail
                     {
                         ID = a.ID,
                         intUserId = a.intUserId,
                         intRoleId = a.intRoleId,
                         Name = u.FullName, // Here I need comma-separated values.
                         intAssignedById = a.intAssignedById,
                         RoleName = r.vchName,
                         Function = u.vchFunction
                     });

    I require all the values of Name = u.FullName to be comma-separated in a single record, grouped by intRoleId. I mean, for every role I need all the usernames in a single record, comma-separated. Any suggestion?


  • mysql replace matching but not changing

    - by alex
    I've used MySQL's update/replace function before, but even though I think I'm following the same syntax, I can't get this to work -- it matches the rows, but doesn't replace. Here's what I'm trying to do:

        mysql> update contained_widgets set preference_values = replace(preference_values,
            '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>',
            '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li> <li> <a_href="/projects"><span class="not-tc">Projects</span></a></li>');
        Query OK, 0 rows affected (0.00 sec)
        Rows matched: 77  Changed: 0  Warnings: 0

    I don't see what I'm missing. Any help is appreciated. I edited "a " to "a_" because the site thinks I'm posting spam links otherwise.
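
    "Rows matched" with "Changed: 0" generally means the UPDATE touched the rows but REPLACE found nothing to substitute, so each row ended up with an identical value. A hedged sketch of a quick check, using the same table and column and assuming the needle string is exactly as typed (whitespace and attribute order included):

        -- Sketch: count rows where the exact search string actually occurs.
        -- If this returns 0, REPLACE has nothing to substitute.
        SELECT COUNT(*)
        FROM contained_widgets
        WHERE LOCATE('<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>',
                     preference_values) > 0;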


  • How do I select and group by a portion of a string?

    - by Russ Bradberry
    Given I have data like the following, how can I select and group by portions of a string?

        Version   Users
        1.1.1     1
        1.1.23    3
        1.1.45    1
        2.1.24    3
        2.1.12    1
        2.1.45    3
        3.1.10    1
        3.1.23    3

    What I want is to sum up the users on version 1.1.x, 2.1.x, 3.1.x and so on, but I'm not sure how I can group on a partial string in a select statement.

    edit: What the data should return is this:

        Version   Users
        1.1.XX    5
        2.1.XX    7
        3.1.XX    4

    There is an infinite, variable number of versions; some are in this format (major, minor, build), some are just major, minor, and some are just major. The only time I want to "roll up" the versions is when there is a build.
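
    A hedged sketch, assuming MySQL and a hypothetical table named versions: SUBSTRING_INDEX keeps the first two dot-separated parts, which can then be grouped on. Rows without a build component would still need the extra handling described above.

        -- Sketch: roll 1.1.23, 1.1.45, ... up into a single 1.1.XX bucket.
        SELECT CONCAT(SUBSTRING_INDEX(Version, '.', 2), '.XX') AS VersionGroup,
               SUM(Users) AS Users
        FROM versions
        GROUP BY VersionGroup
        ORDER BY VersionGroup;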


  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way but get the feeling it's a bad idea from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with:

        CreditCard
        Customer
        Order

    I want it so that:

        Customers can have credit cards (0-n)
        Customers have orders (1-n)
        Orders have one customer (1-1)
        Orders have one credit card (1-1)
        Credit cards can have one customer (1-1)

    (IDs are unique, so we can ignore uniqueness of cc numbers; husband/wife may share cc instances etc.) Basically the last part is where the issue shows up: sometimes credit cards are declined and the customer wishes to use a different one. This needs to update which card is 'current', but it can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables.

    Possible solutions. Either create the circular design, giving references:

        cc ref to order
        customer ref to cc
        customer ref to order

    or:

        customer ref to cc
        customer ref to order
        create a new table that references all three table ids, with a unique constraint on the order so that only one cc may be current for that order at any time

    Essentially both model the same design but translate differently. I am liking the latter option best at this point in time, because it seems less circular and more central. (If that even makes sense.) My questions are:

        What, if any, are the pros and cons of each?
        What are the pitfalls of circular relationships/dependencies?
        Is this a valid exception to the rule?
        Is there any reason I should pick the former over the latter?

    Thanks, and let me know if there is anything you need clarified/explained.

    --Update/Edit--

    I have noticed an error in the requirements I stated. Basically I dropped the ball when trying to simplify things for SO. There is another table there for Payments, which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (if you really want to know, even other forms of payment). Stating this here because I think the underlying issue is still the same and this only really adds another layer of complexity.
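
    A hedged DDL sketch of the second option, with invented table and column names: the 'current card for an order' lives in its own table, and the unique constraint on order_id guarantees at most one current card per order.

        -- Sketch of the "new table referencing all three ids" option.
        CREATE TABLE order_current_card (
            id          INT PRIMARY KEY,
            order_id    INT NOT NULL,
            customer_id INT NOT NULL,
            card_id     INT NOT NULL,
            UNIQUE (order_id),                 -- only one current card per order
            FOREIGN KEY (order_id)    REFERENCES orders (id),
            FOREIGN KEY (customer_id) REFERENCES customers (id),
            FOREIGN KEY (card_id)     REFERENCES credit_cards (id)
        );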


  • How can I "merge", "flatten" or "pivot" results from a query which returns multiple rows into a sing

    - by dsm
    I have a simple query over a table, which returns results like the following:

        id      id_type   id_ref
        2702    5         31
        2702    16        14
        2702    17        3
        2702    40        1
        2702    26        4

    And I would like to merge the results into a single row, for instance:

        id      concatenation
        2702    5,16,17,40,26:31,14,3,1,4

    Is there any way to do this within a trigger? NB: I know I can use a cursor, but I would really prefer not to unless there is no better way.
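
    A hedged sketch of the usual trick, assuming SQL Server (the mention of triggers and cursors suggests it) and a hypothetical source table named ref_data: two correlated FOR XML PATH('') subqueries build the comma-separated lists and STUFF strips each leading comma.

        -- Sketch: one row per id, id_type values before the colon, id_ref after it.
        SELECT d.id,
               STUFF((SELECT ',' + CAST(d2.id_type AS varchar(10))
                      FROM ref_data AS d2
                      WHERE d2.id = d.id
                      FOR XML PATH('')), 1, 1, '')
               + ':' +
               STUFF((SELECT ',' + CAST(d2.id_ref AS varchar(10))
                      FROM ref_data AS d2
                      WHERE d2.id = d.id
                      FOR XML PATH('')), 1, 1, '') AS concatenation
        FROM ref_data AS d
        GROUP BY d.id;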


  • is there a better way to write this frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one-to-many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users based on the number of desired skills they possess. So if a user only has 1 of 3 desired skills, he will be further down the list than the user who has 3 of 3 desired skills. I start with my comma-separated list of skill IDs that are being searched for:

        List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList();

    I then filter out only the types of users that are searchable:

        List<User> users = (from u in db.Users
                            where u.Verified == true
                               && u.Level > 0
                               && u.Type == 1
                               && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null)
                            select u).ToList();

    and then comes the crazy part:

        var fUsers = from u in users
                     select new
                     {
                         u.Id,
                         u.FirstName,
                         u.LastName,
                         u.UserName,
                         UserPhone = u.UserDetail.Phone,
                         UserSkills = (from uskills in u.UserSkills
                                       join skillsJoin in configSkills on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills
                                       from skillsJoin in tempSkills.DefaultIfEmpty()
                                       where uskills.UserId == u.Id
                                       select new
                                       {
                                           SkillId = uskills.SkillId,
                                           SkillName = skillsJoin.Name,
                                           SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId)
                                       }),
                         UserSkillsFound = (from uskills in u.UserSkills
                                            where uskills.UserId == u.Id
                                               && searchedSkillsRaw.Contains(uskills.SkillId)
                                            select uskills.UserId).Count()
                     } into userResults
                     where userResults.UserSkillsFound > 0
                     orderby userResults.UserSkillsFound descending
                     select userResults;

    and this works! But it seems super bloated and inefficient to me. Especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r


  • Alternative to NOT EXISTS

    - by Dave Colwell
    Hi all, I have two tables linked by an ID column; let's call them Table A and Table B. My goal is to find all the records in Table A that have no record in Table B. For instance:

        Table A:
        ID----Value
        1-----value1
        2-----value2
        3-----value3
        4-----value4

        Table B:
        ID----Value
        1-----x
        2-----y
        4-----z
        4-----l

    As you can see, the record with ID = 3 does not exist in Table B, so I want a query that will give me record 3 from Table A. The way I am currently doing this is by saying:

        AND NOT EXISTS (SELECT ID FROM TableB)

    but since the tables are huge, the performance on this is terrible. Also, when I tried using a LEFT JOIN where TableB.ID is null, it didn't work. Can anyone suggest an alternative?
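
    A hedged sketch of the two usual forms, reusing the TableA/TableB names above. Note that a NOT EXISTS subquery generally needs to be correlated on ID, which the fragment quoted in the question is not; an index on TableB.ID helps both variants.

        -- Correlated NOT EXISTS: rows in TableA with no matching ID in TableB.
        SELECT a.ID, a.Value
        FROM TableA AS a
        WHERE NOT EXISTS (SELECT 1 FROM TableB AS b WHERE b.ID = a.ID);

        -- Equivalent LEFT JOIN / IS NULL form.
        SELECT a.ID, a.Value
        FROM TableA AS a
        LEFT JOIN TableB AS b ON b.ID = a.ID
        WHERE b.ID IS NULL;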

