Search Results

Search found 8340 results on 334 pages for 'merge join'.


  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue, popping/pushing the first element of every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists in O(T1 + T2) time, where T1 and T2 stand for the sizes of LIST1 and LIST2. What we do now is pair up the lists and merge them pair-by-pair (if the number is odd, the last list, for example, could be ignored in the first steps). This means we have to make the following "tree" of merge operations, where N1, N2, N3... stand for the sizes of LIST1, LIST2, LIST3:

        O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
        O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
        ...
        O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them doing O(N) work, so the time for the MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K*N) time. So, the question is: did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?


  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, however it seems I have a final dataset which only has quarter as a time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database. Quick summary: Quarter 0 = JAN1996-MAR1996, Month 0 = JAN1996.

    Example: TVOL

        TVOL     Ticker    Quarter
        1500     AA        -1
        52546    BB        15

    Example: WORK

        BETA     Ticker    Month
        1.52     AA        2
        1.54     BB        3

    Example: Merged

        BETA     TVOL    Ticker    Month
        1.52     500     AA        2

    I now want to merge these 2 tables using the following relationship: if the month is in quarter 1, the data of quarter 0 has to be used. So if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: if the month is in quarter i, use the data of quarter i-1. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
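
    A minimal sketch of that relationship expressed in SQL terms rather than SAS (the question itself is about SAS, and the table names below are assumptions): with Month 0 = JAN1996 and Quarter 0 = JAN-MAR1996, a month m falls in quarter FLOOR(m/3), and the poster wants the previous quarter's TVOL, averaged per month.

        -- sketch only, assumed table/column names; not SAS syntax
        SELECT w.Ticker,
               w.Month,
               w.Beta,
               t.TVOL / 3 AS TVOL_Monthly              -- quarterly figure spread over 3 months
        FROM   work_beta w
        LEFT JOIN tvol t
               ON  t.Ticker  = w.Ticker
               AND t.Quarter = FLOOR(w.Month / 3) - 1; -- "use quarter i-1"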


  • Merging sql queries to get different results by date

    - by pedalpete
    I am trying to build a 'recent events' feed and can't seem to get my query correct, or figure out how to merge the results from two queries to sort them by date. One table holds games, and another table holds the actions of these games. I am trying to get the recent events to show users 1) the actions taken on games that are publicly visible (published) and 2) when a new game is created and published. So, my actions table has actionId, gameid, userid, actiontype, lastupdate. My games table has gameid, startDate, createdby, published, lastupdate. I currently have a query like this (simplified for easy understanding, I hope):

        SELECT actionId, actions.gameid, userid, actiontype, actions.lastupdate
        FROM actions
        JOIN (
            SELECT games.gameid, startDate, createdby, published, games.lastupdate
            FROM games
            WHERE published = 1 AND lastupdate > today-2
        ) publishedGames ON actions.gameid = games.gameid
        WHERE actions.type IN (0,4,5,6,7)
          AND actions.lastupdate > games.lastupdate AND published = 1
           OR games.lastupdate > today-2 AND published = 1

    This query is looking for actions from published games where the action took place after the game was published. That pretty much takes care of the first thing that needs to be shown. However, I also need to get the results of

        SELECT games.gameid, startDate, createdby, published, games.lastupdate
        FROM games
        WHERE published = 1 AND startDate > today-2

    so I can include in the actions list when a new game has been published. When I run the query as I've got it written, I get all the actionids and their gameids, but I don't get a row which shows the gameid when it was published. I understand that I may need to run two separate queries and then somehow merge the results afterward with PHP, but I'm completely lost on where to start with that as well.
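
    One common way to interleave two result sets by date is to give both queries the same column layout, combine them with UNION ALL, and sort the combined rows once. The sketch below is an assumption-laden outline (the column roles and the date arithmetic are guesses based on the question), not the poster's actual query:

        -- sketch: one branch per event type, same columns in both, sorted together
        SELECT a.gameid, a.userid, a.actiontype AS eventtype, a.lastupdate AS eventdate
        FROM actions a
        JOIN games g ON g.gameid = a.gameid
        WHERE g.published = 1
          AND a.lastupdate > g.lastupdate

        UNION ALL

        SELECT g.gameid, g.createdby AS userid, -1 AS eventtype, g.lastupdate AS eventdate
        FROM games g
        WHERE g.published = 1
          AND g.lastupdate > NOW() - INTERVAL 2 DAY   -- "today-2" in the question

        ORDER BY eventdate DESC;

    Here -1 is just an assumed marker meaning "game published", so the feed can tell the two kinds of rows apart.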


  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an external and ignore property -- which I resolve by "using local" (dev). When I check modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base," WinMerge comes up and says the "files are identical." I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified, when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?


  • How can I synchronise two datatables and update the target in the database?

    - by Craig
    I am trying to synchronise two tables between two databases. I thought that the best way to do this would be to use the DataTable.Merge method. This seems to pick up the changes, but nothing ever gets committed to the database. So far I have:

        string tableName = "[Table_1]";
        string sql = "select * from " + tableName;
        using (SqlConnection sourceConn = new SqlConnection(ConfigurationManager.ConnectionStrings["source"].ConnectionString))
        {
            SqlDataAdapter sourceAdapter = new SqlDataAdapter();
            sourceAdapter.SelectCommand = new SqlCommand(sql, sourceConn);
            sourceConn.Open();
            DataSet sourceDs = new DataSet();
            sourceAdapter.Fill(sourceDs);

            using (SqlConnection targetConn = new SqlConnection(ConfigurationManager.ConnectionStrings["target"].ConnectionString))
            {
                SqlDataAdapter targetAdapter = new SqlDataAdapter();
                targetAdapter.SelectCommand = new SqlCommand(sql, targetConn);
                SqlCommandBuilder builder = new SqlCommandBuilder(targetAdapter);
                targetAdapter.InsertCommand = builder.GetInsertCommand();
                targetAdapter.UpdateCommand = builder.GetUpdateCommand();
                targetAdapter.DeleteCommand = builder.GetDeleteCommand();
                targetConn.Open();
                DataSet targetDs = new DataSet();
                targetAdapter.Fill(targetDs);
                targetDs.Tables[0].TableName = tableName;
                sourceDs.Tables[0].TableName = tableName;
                targetDs.Tables[0].Merge(sourceDs.Tables[0]);
                targetAdapter.Update(targetDs.Tables[0]);
            }
        }

    At the present time, there is one row in the source that is not in the target. This row is never transferred. I have also tried it with an empty target, and nothing is transferred.


  • Quality of TFS 2008 merged code

    - by paologios
    Does the quality of code merged by TFS 2008 depend on the programming language used? I know merging in Java / Subversion, and merging a branch to its trunk usually does not create many conflicts. Now in my company, we use VB.NET. When I merge two files, TFS does not always get code blocks right, e.g. it does not find the right If..Then / End If lines. To give you an example: File 2 is created as a branch of File 1. Both files were changed later; now I'm going to merge those files and am receiving conflicts. The marked End If lines (1) are detected as corresponding, meaning the added event handler Button1_Click is being deleted. Now I wonder if this behavior is by language (C# vs. VB.NET), or are other source control solutions just better than TFS? (And I really liked TFS up to now :) )

    File 1:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                Label1.Text = "Hello"
                Label2.Text = "World"
            End If
        End Sub

        Protected Sub Button2_Click(ByVal sender, ByVal e as System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
                Label3.Text = "Hello Button 2"
            End If
            ' ....
        End Sub

    File 2 (branch of File 1):

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                fillTableFromDatabase()
            End If   ' (1)
        End Sub

        Protected Sub Button1_Click(ByVal sender, ByVal e as System.EventArgs) Handles Button1.Click
            ' do something here
        End Sub

        Protected Sub Button2_Click(ByVal sender, ByVal e as System.EventArgs) Handles Button2.Click
            ' ....
            If Page.IsValid Then
            End If   ' (1)
            ' ....
        End Sub


  • Why can't Git merge file changes with a modified parent/master?

    - by Andy
    I have a file with one line in it. I create a branch and add a second line to the same file. Save and commit to the branch. I switch back to the master and add a different second line to the file. Save and commit to the master. So there are now 3 unique lines in total. If I now try and merge the branch back to the master, it suffers a merge conflict. Why can't Git simply merge each line, one after the other? My attempt at a merge behaves something like this:

        PS D:\dev\testing\test1> git merge newbranch
        Auto-merging hello.txt
        CONFLICT (content): Merge conflict in hello.txt
        Automatic merge failed; fix conflicts and then commit the result.

        PS D:\dev\testing\test1> git diff
        diff --cc hello.txt
        index 726eeaf,e48d31a..0000000
        --- a/hello.txt
        +++ b/hello.txt
        @@@ -1,2 -1,2 +1,6 @@@
        This is the first line.
        - New line added by master.
        -Added a line in newbranch.
        ++<<<<<<< HEAD
        ++New line added by master.
        ++=======
        ++Added a line in newbranch.
        ++>>>>>>> newbranch

    Is there a way to make it slot lines in automatically, one after the other?


  • Shared Git repo syncing to svn causing git svn rebase to pollute repo with a lot of no-op merge problems

    - by John K
    This wasn't so bad at the beginning, but now I have hundreds of no-op merge problems (solved by git rebase --skip). I have set up a shared git repo for my group because it is easier to deal with, but the company uses SVN, so I have to keep SVN in sync with git. It worked like a dream at first, but after weeks of doing this, git is giving me a lot of the following errors:

        Applying: * making all config actions work
        Using index info to reconstruct a base tree...
        Falling back to patching base and 3-way merge...
        Auto-merging app/controllers/vulnerabilities_controller.rb
        CONFLICT (content): Merge conflict in app/controllers/vulnerabilities_controller.rb
        Auto-merging public/javascripts/network_analysis_vulnerability_config.js
        CONFLICT (content): Merge conflict in public/javascripts/network_analysis_vulnerability_config.js
        Failed to merge in the changes.
        Patch failed at 0046 * making all config actions work

    My workflow:

        git co master
        git pull origin
        git svn rebase
        ... deal with no-op merge problems ...
        git svn dcommit
        git pull origin
        git push origin

    The problem is that what is in SVN is correct, so I use git rebase --skip, but I have to do that hundreds of times before I can dcommit. How do I clear these merge problems permanently?


  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background: I'm creating a large number of objects in a Core Data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that on save is merged back into the main managed object context.

    The Problem: When the number of threads running at once is more than 1, I get the following exception logged on save of my Core Data store: NSExceptionHandler has recorded the following exception: NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried: My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question: Do I need to worry about these exceptions? If I do need to get rid of the exceptions, any ideas on how I do it?


  • Why won't JPA delete owned entities when the owner entity loses the reference to them?

    - by Nick
    Hi! I've got a JPA entity "Request" that owns a List of Answers (also JPA entities). Here's how it's defined in Request.java:

        @OneToMany(cascade = CascadeType.ALL, mappedBy = "request")
        private List<Answer> answerList;

    And in Answer.java:

        @JoinColumn(name = "request", referencedColumnName = "id")
        @ManyToOne(optional = false)
        private Request request;

    In the course of program execution, the Request's List of Answers may have Answers added or removed from it, or the actual List object may be replaced. My problem is thus: when I merge a Request to the database, the Answer objects that used to be in the List are kept in the database -- that is, Answer objects that the Request no longer holds a reference to (indirectly, via a List) are not deleted. This is not the behaviour I desire, as if I merge a Request to the database and then fetch it again, its Answers List may not be the same. Am I making some programming mistake? Is there an annotation or setting that will ensure that the Answers in the database are exactly the Answers in the Request's List? A solution is to keep references to the original Answers List and then use the EntityManager to remove each old Answer before merging the Request, but it seems like there should be a cleaner way. Thank you!


  • Merge 2 XML files with xslt

    - by MADAL
    Hello, I am trying to merge 2 XML files. I use the XSLT from [http://www2.informatik.hu-berlin.de/...erge.xslt.html], and I need to change this XSLT file in order to get a different result. My first file, which needs to be merged:

        <A1>
          <A2>
            <A3>
              <b>a</b>
              <c>b</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>1</b>
              <c>2</c>
            </A3>
          </A2>
        </A1>
        ...

    My second file, which needs to be merged with the first one:

        <A1>
          <A2>
            <A3>
              <b>x</b>
              <c>y</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>8</b>
              <c>9</c>
            </A3>
          </A2>
        </A1>

    The result should be:

        <A1>
          <A2>
            <A3>
              <b>a</b><bx><b>a</b></bx><bx><b>x</b></bx>
              <c>by</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>1</b><bx><b>1</b></bx><bx><b>18</b></bx>
              <c>29</c>
            </A3>
          </A2>
        </A1>

    Could anybody help me with that? Regards


  • ROW_NUMBER() VS. DISTINCT

    - by ramadan2050
    Dear all, I have a problem with ROW_NUMBER() when I use it with DISTINCT in the following query. I have 2 scenarios: 1) run the query as-is: it gives me, for example, 400 records as a result; 2) uncomment the line which starts with [--Uncomment1--]: it gives me 700 records as a result - it duplicates some records, but not all of them. What I want is to solve this problem, or to find any other way to show a row counter beside each row, so that I can apply [WHERE RowNum BETWEEN 1 AND 30] (marked --Uncomment2--). If I put the whole query in a table and then filter it, it works, but it is still very slow. Waiting for any feedback, and I will appreciate it. Thanks in advance.

        SELECT *
        FROM (
            SELECT Distinct
                CRSTask.ID AS TaskID, CRSTask.WFLTaskID,
                --Uncomment1-- ROW_NUMBER() OVER (ORDER By CRSTask.CreateDate asc) AS RowNum,
                CRSTask.WFLStatus AS Task_WFLStatus, CRSTask.Name AS StepName,
                CRSTask.ModifiedDate AS Task_ModifyDate, CRSTask.SendingDate AS Task_SendingDate,
                CRSTask.ReceiveDate AS Task_ReceiveDate, CRSTask.CreateDate AS Task_CreateDate,
                CRS_Task_Recipient_Vw.Task_CurrentSenderName, CRS_Task_Recipient_Vw.Task_SenderName,
                CRS_INFO.ID AS CRS_ID, CRS_INFO.ReferenceNumber, CRS_INFO.CRSBeneficiaries, CRS_INFO.BarCodeNumber,
                ISNULL(dbo.CRS_FNC_GetTaskReceiver(CRSTask.ID), '') + ' ' + ISNULL(CRS_Organization.ArName, '') AS OrgName,
                CRS_Info.IncidentID,
                COALESCE(CRS_Subject.ArSubject, '??? ????') AS ArSubject,
                COALESCE(CRS_INFO.Subject, 'Blank Subject') AS CRS_Subject,
                CRS_INFO.Mode, CRS_Task_Recipient_Vw.ReceiverID, CRS_Task_Recipient_Vw.ReceiverType, CRS_Task_Recipient_Vw.CC,
                Temp_Portal_Users_View.ID AS CRS_LockedByID, Temp_Portal_Users_View.ArabicName AS CRS_LockedByName,
                CRSDraft.ID AS DraftID, CRSDraft.Type AS DraftType,
                CASE WHEN CRS_Folder = 1 THEN Task_SenderName
                     WHEN CRS_Folder = 2 THEN Task_SenderName
                     WHEN CRS_Folder = 3 THEN Task_CurrentSenderName
                END AS SenderName,
                CRS_Task_Folder_Vw.CRS_Folder, CRS_INFO.Status, CRS_INFO.CRS_Type, CRS_Type.arName AS CRS_Type_Name
            FROM CRS_Task_Folder_Vw
            LEFT OUTER JOIN CRSTask ON CRSTask.ID = CRS_Task_Folder_Vw.TaskID
            LEFT OUTER JOIN CRS_INFO ON CRS_INFO.ID = CRSTask.CRSID
            LEFT OUTER JOIN CRS_Subject ON COALESCE(SUBSTRING(CRS_INFO.Subject, CHARINDEX('_', CRS_INFO.Subject) + 1, LEN(CRS_INFO.Subject)), 'Blank Subject') = CRS_Subject.ID
            LEFT OUTER JOIN CRSInfoAttribute ON CRS_INFO.ID = CRSInfoAttribute.ID
            LEFT OUTER JOIN CRS_Organization ON CRS_Organization.ID = CRSInfoAttribute.SourceID
            LEFT OUTER JOIN CRS_Type ON CRS_INFO.CRS_Type = CRS_Type.ID
            LEFT OUTER JOIN CRS_Way ON CRS_INFO.CRS_Send_Way = CRS_Way.ID
            LEFT OUTER JOIN CRS_Priority ON CRS_INFO.CRS_Priority_ID = CRS_Priority.ID
            LEFT OUTER JOIN CRS_SecurityLevel ON CRS_INFO.SecurityLevelID = CRS_SecurityLevel.ID
            LEFT OUTER JOIN Portal_Users_View ON Portal_Users_View.ID = CRS_INFO.CRS_Initiator
            LEFT OUTER JOIN AD_DOC_TBL ON CRS_INFO.DocumentID = AD_DOC_TBL.ID
            LEFT OUTER JOIN CRSTask AS Temp_CRSTask ON CRSTask.ParentTask = Temp_CRSTask.ID
            LEFT OUTER JOIN Portal_Users_View AS Temp_Portal_Users_View ON Temp_Portal_Users_View.ID = AD_DOC_TBL.Lock_User_ID
            LEFT OUTER JOIN Portal_Users_View AS Temp1_Portal_Users_View ON Temp1_Portal_Users_View.ID = CRS_INFO.ClosedBy
            LEFT OUTER JOIN CRSDraft ON CRSTask.ID = CRSDraft.TaskID
            LEFT OUTER JOIN CRS_Task_Recipient_Vw ON CRSTask.ID = CRS_Task_Recipient_Vw.TaskID
            --LEFT OUTER JOIN CRSTaskReceiverUsers ON CRSTask.ID = CRSTaskReceiverUsers.CRSTaskID AND CRS_Task_Recipient_Vw.ReceiverID = CRSTaskReceiverUsers.ReceiverID
            LEFT OUTER JOIN CRSTaskReceiverUserProfile ON CRSTask.ID = CRSTaskReceiverUserProfile.TaskID
            WHERE Crs_Info.SUBJECT <> 'Blank Subject'
              AND (CRS_INFO.Subject NOT LIKE '%null%')
              AND CRS_Info.IsDeleted <> 1
              /* AND CRSTask.WFLStatus <> 6 AND CRSTask.WFLStatus <> 8 */
              AND ((CRS_Task_Recipient_Vw.ReceiverID IN (1, 29) AND CRS_Task_Recipient_Vw.ReceiverType IN (1, 3, 4)))
              AND 1 = 1
        ) Codes
        --Uncomment2-- WHERE Codes.RowNum BETWEEN 1 AND 30
        ORDER BY Codes.Task_CreateDate ASC
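
    A minimal sketch of the usual workaround, using simplified, assumed column names rather than the full query above: do the DISTINCT in an inner derived table first, then number the already de-duplicated rows in an outer query, so ROW_NUMBER() cannot re-expand the duplicates.

        -- sketch only: de-duplicate first, number second, filter last
        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY d.Task_CreateDate ASC) AS RowNum,
                   d.*
            FROM (
                SELECT DISTINCT t.ID AS TaskID,          -- ...the other columns
                       t.CreateDate AS Task_CreateDate
                FROM   CRSTask t                         -- ...the joins and WHERE clause
            ) d
        ) Codes
        WHERE Codes.RowNum BETWEEN 1 AND 30
        ORDER BY Codes.Task_CreateDate ASC;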


  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions, some of these screen shots may differ. This is assuming you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally. There are three steps to successful branching. First you branch, then when you are ready you need to reintegrate any changes that other developers may have made to the trunk into your branch. Then finally, when your branch and the trunk are in sync, you merge it back into the trunk.

    Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Branch/Tag
    - Enter the branch label in the 'To URL' box. For example /branches/1.1
    - Choose Head revision
    - Check Switch working copy
    - Click OK
    - Make any changes to the branch
    - Make any changes to the trunk
    - Commit any changes
    For this example I copied the project to another location prior to branching and made changes to that using Notepad++, then committed it to SVN; as this directory is mapped to the trunk, that is what gets updated.

    Merge Trunk with Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose 'Merge a range of revisions'
    - In 'URL to merge from' choose your trunk
    - Click Next, then the 'test merge' button. This will highlight any conflicts. Here we have one conflict we will need to resolve because we made a change and checked in to the trunk earlier
    - Click Merge. Now we have the opportunity to edit that conflict. This will open up TortoiseMerge, which will allow us to resolve the issue. In this case I want both changes
    - Perform an Update, then Commit
    Reloading in Visual Studio shows we have all changes that have been made to both trunk and branch.

    Merge Branch with Trunk
    - Switch the working copy by right clicking the project root in Windows Explorer > TortoiseSVN > Switch. Switch to the trunk, then OK
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose 'Reintegrate a branch'
    - In 'From URL' choose your branch, then Next
    - Click 'Test merge'; this shouldn't show any conflicts
    - Click Merge
    - Perform an Update, then Commit
    - Open the project in Visual Studio; we now have all changes
    So there we have it: we are connected back to the trunk and have all the updates merged.


  • Querying a self referencing join with NHibernate Linq

    - by Ben
    In my application I have a Category domain object. Category has a property Parent (of type Category). So in my NHibernate mapping I have:

        <many-to-one name="Parent" column="ParentID"/>

    Before I switched to NHibernate I had the ParentId property on my domain model (mapped to the corresponding database column). This made it easy to query for, say, all top level categories (ParentID = 0):

        where(c => c.ParentId == 0)

    However, I have since removed the ParentId property from my domain model (because of NHibernate), so I now have to do the same query (using NHibernate.Linq) like so:

        public IList<Category> GetCategories(int parentId)
        {
            if (parentId == 0)
                return _catalogRepository.Categories.Where(x => x.Parent == null).ToList();
            else
                return _catalogRepository.Categories.Where(x => x.Parent.Id == parentId).ToList();
        }

    The real impact that I can see is the SQL generated. Instead of a simple 'select x,y,z from categories where parentid = 0', NHibernate generates a left outer join:

        SELECT this_.CategoryId as CategoryId4_1_, this_.ParentID as ParentID4_1_, this_.Name as Name4_1_, this_.Slug as Slug4_1_,
               parent1_.CategoryId as CategoryId4_0_, parent1_.ParentID as ParentID4_0_, parent1_.Name as Name4_0_, parent1_.Slug as Slug4_0_
        FROM Categories this_
        left outer join Categories parent1_ on this_.ParentID = parent1_.CategoryId
        WHERE this_.ParentID is null

    which doesn't seem much less efficient than what I had before. Is there a better way of querying these self-referencing joins? It's very tempting to drop the ParentID back onto my domain model for this reason. Thanks, Ben


  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events" where each event may be associated with zero or more "speakers" and zero or more "terms", those records associated with the events through join tables, I need to produce a table of all events with a column in each row which represents the list of "speaker_names" and "term_names" associated with each event. However, when I run my query, I have duplication in the speaker_names and term_names values, since the join tables produce a row per association for each of the speakers and terms of the events:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets
        3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets

    The group_concat aggregate function has the ability to use 'distinct', which removes the duplication, though sadly it does not support that alongside the custom separator, which I really need. I am left with these results:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby|Ball,Bat,Helmets
        3|Football|Bobby,Jane|Ball,Helmets

    My question is this: Is there a way I can form the query or change the data structures in order to get my desired results? Keep in mind this is a sqlite3 query I need, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy for you to test a possible solution: https://gist.github.com/4072840
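
    One common workaround is to de-duplicate the joined rows in a subquery first, so the outer GROUP_CONCAT can keep its two-argument custom separator without needing DISTINCT. The sketch below only covers the speakers side and uses assumed table/column names (the same pattern would be repeated for terms):

        -- assumed schema: events(id, name), speakers(id, name), events_speakers(event_id, speaker_id)
        SELECT e.id,
               e.name,
               group_concat(s.speaker_name, ' - ') AS speaker_names
        FROM events e
        LEFT JOIN (SELECT DISTINCT es.event_id, sp.name AS speaker_name
                   FROM events_speakers es
                   JOIN speakers sp ON sp.id = es.speaker_id) s
               ON s.event_id = e.id
        GROUP BY e.id, e.name;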


  • Retrieve data from two tables in ASP.NET MVC using the ADO.NET Entity Framework

    - by user192972
    Please read my question carefully and reply to me. I have two tables, table1 and table2. In table1 I have columns AddressID (primary key), Address1, Address2, City. In table2 I have columns ContactID (primary key), AddressID (foreign key), Last Name, First Name. By using a join operation I can retrieve data from both tables. I created a model in my MVC application; I can see both tables in the entity editor. In the ViewData folder of my solution explorer I created two classes, ContactViewData.cs and SLXRepository.cs. In ContactViewData.cs I have the following code:

        public IEnumerable<CONTACT> contacts { get; set; }

    In SLXRepository.cs I have the following code:

        public IEnumerable<CONTACT> GetContacts()
        {
            var contact = (
                from c in context.CONTACT
                join a in context.ADDRESS on c.ADDRESSID equals a.ADDRESSID
                select new
                {
                    a.ADDRESS1,
                    a.ADDRESS2,
                    a.CITY,
                    c.FIRSTNAME,
                    c.LASTNAME
                }
            );
            return contact;
        }

    I am getting an error on the return type: Cannot implicitly convert type 'System.Linq.IQueryable' to 'System.Collections.Generic.IEnumerable'. An explicit conversion exists (are you missing a cast?)


  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingsDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if the user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings, and if it has records for the given user, grabs them; if not, it retrieves the default settings. But the trick is that no matter whether the user has settings or not, I need to have the fields from the other two tables retrieved, to know the setting's Type, Name etc... (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL) OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
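
    A minimal sketch of one common pattern for this kind of default-plus-override lookup, using the names from the question (treat it as an untested outline): put the user filter in the LEFT JOIN's ON clause rather than the WHERE clause, so definitions with no row for that user still come back, and fall back to the default with COALESCE.

        -- sketch: the user filter lives in the ON clause, so unmatched definitions keep their defaults
        SELECT SD.ID,
               SD.Name,
               ST.Name AS TypeName,
               COALESCE(US.Value, SD.DefaultValue) AS EffectiveValue
        FROM tblSettingDefinition SD
        JOIN tblSettingTypes ST
          ON ST.TypeID = SD.TypeID
        LEFT JOIN tblUserSettings US
          ON US.SettingDefinitionID = SD.ID
         AND US.UserID = @UserID
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID);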


  • MYSQL fetch 10 posts, each w/ vote count, sorted by vote count, limited by where clause on posts

    - by nibblebot
    I want to fetch a set of Posts w/ vote count listed, sorted by vote count, e.g.:

        Post 1 - Post Body blah blah - Votes: 500
        Post 2 - Post Body blah blah - Votes: 400
        Post 3 - Post Body blah blah - Votes: 300
        Post 4 - Post Body blah blah - Votes: 200

    I have 2 tables:

        Posts - columns - id, body, is_hidden
        Votes - columns - id, post_id, vote_type_id

    Here is the query I've tried:

        SELECT p.*, v.yes_count
        FROM posts p
        LEFT JOIN (SELECT post_id, vote_type_id, COUNT(1) AS yes_count
                   FROM votes
                   WHERE (vote_type_id = 1)
                   GROUP BY post_id
                   ORDER BY yes_count DESC
                   LIMIT 0, 10) v ON v.post_id = p.id
        WHERE (p.is_hidden = 0)
        ORDER BY yes_count DESC
        LIMIT 0, 10

    Correctness: The above query almost works. The subselect is including votes for posts that have is_hidden = 1, so when I left join it to posts, if a hidden post is in the top 10 (ranked by votes), I can end up with records with NULL in the yes_count field.

    Performance: I have ~50k posts and ~500k votes. On my dev machine, the above query is running in .4 sec. I'd like to stay at or below this execution time.

    Indexes: I have an index on the Votes table that covers the fields vote_type_id and post_id.

    EXPLAIN:

        id  select_type  table       type  possible_keys  key         key_len  ref   rows    Extra
        1   PRIMARY      p           ALL   NULL           NULL        NULL     NULL  45985   Using where; Using temporary; Using filesort
        1   PRIMARY      <derived2>  ALL   NULL           NULL        NULL     NULL  10
        2   DERIVED      votes       ref   VotingPost     VotingPost  4              319881  Using where; Using index; Using temporary; Using filesort
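
    A minimal sketch of one way to address the correctness issue (untested, using the column names above): apply the is_hidden filter inside the derived table by joining posts there, so the top-10 ranking is computed only over visible posts. Note this variant only returns posts that have at least one matching vote, which is usually what a top-10-by-votes list wants anyway.

        -- sketch: rank only visible posts, then join back for the post rows
        SELECT p.*, v.yes_count
        FROM (
            SELECT vt.post_id, COUNT(*) AS yes_count
            FROM votes vt
            JOIN posts ps ON ps.id = vt.post_id AND ps.is_hidden = 0
            WHERE vt.vote_type_id = 1
            GROUP BY vt.post_id
            ORDER BY yes_count DESC
            LIMIT 0, 10
        ) v
        JOIN posts p ON p.id = v.post_id
        ORDER BY v.yes_count DESC;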


  • GET variable after doing a Join Query

    - by John
    Hello, for the code below, on the second link (http://www...com/sandbox/comments/index.php?submission='.$row["title"].'), I would like to pass $row["submissionid"] on as a GET variable. I tried this and it caused all of the code below to produce a blank result. Is there a way that I can do what I want? Thanks in advance, John

        $sqlStr = "SELECT s.loginid
                         ,s.title
                         ,s.url
                         ,s.displayurl
                         ,l.username
                         ,COUNT(c.commentid) countComments
                   FROM submission s
                   INNER JOIN login l ON s.loginid = l.loginid
                   LEFT OUTER JOIN comment c ON s.submissionid = c.submissionid
                   GROUP BY s.submissionid
                   ORDER BY s.datesubmitted DESC
                   LIMIT 10";

        $result = mysql_query($sqlStr);
        $arr = array();
        echo "<table class=\"samplesrec\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www.'.$row["url"].'">'.$row["title"].'</a></td>';
            echo '</tr>';
            echo '<tr>';
            echo '<td class="sitename2"><a href="http://www...com/sandbox/members/index.php?profile='.$row["username"].'">'.$row["username"].'</a><a href="http://www...com/sandbox/comments/index.php?submission='.$row["title"].'">'.$row["countComments"].'</a></td>';
            echo '</tr>';
        }
        echo "</table>";
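
    One likely cause (an assumption, since the failing version of the code isn't shown) is that $row["submissionid"] is never selected: the query's column list has no s.submissionid, so the fetched row has no such key. A minimal sketch of the adjusted query:

        -- sketch: select the id column so the PHP loop can read $row["submissionid"] and build the link from it
        SELECT s.submissionid, s.loginid, s.title, s.url, s.displayurl, l.username,
               COUNT(c.commentid) AS countComments
        FROM submission s
        INNER JOIN login l ON s.loginid = l.loginid
        LEFT OUTER JOIN comment c ON s.submissionid = c.submissionid
        GROUP BY s.submissionid
        ORDER BY s.datesubmitted DESC
        LIMIT 10;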


  • How to join tables in SQL Server

    - by Rick
    I'm having some trouble with joining two tables. This is what my two tables look like:

    Table 1

        Customer_ID    CustomerName    Add.
        1000           John Smith
        1001           Mike Coles
        1002           Sam Carter

    Table 2

        Sensor_ID    Location    Temp    CustIDFK
        1000         NY          70
        1002         NY          70
        1000         ...         ...
        1001
        1001
        1002

    Desired:

        Sensor_ID    Location    Temp    CustIDFK
        1000         NY          70      John Smith
        1002         NY          70      Sam Carter
        1000         ...         ...     John Smith
        1001                             Mike Coles
        1001
        1002

    I have made Customer_ID from Table 1 my primary key, created CustIDFK in Table 2 and set that as my foreign key. I am really new to SQL Server, so I am still having trouble with the whole relationship piece of it. My goal is to match one Customer_ID with one Sensor_ID. The problem is that Table 2 does not have "unique IDs", since they repeat, so I can't set that as my foreign key. I know I will have to do either an inner join or an outer join, I just don't know how to link the sensor ID with the customer one. I was thinking of giving my Sensor_ID a unique ID, but the data that is being inserted into Table 2 is coming from another program. Any suggestions?
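
    A minimal sketch of the join being described, assuming CustIDFK in Table 2 holds the matching Customer_ID from Table 1 (the table names Table1/Table2 are placeholders taken from the question):

        -- sketch: look up each sensor row's customer name through the foreign key
        SELECT t2.Sensor_ID,
               t2.Location,
               t2.Temp,
               t1.CustomerName
        FROM Table2 AS t2
        INNER JOIN Table1 AS t1
                ON t1.Customer_ID = t2.CustIDFK;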


  • Linq join two dictionaries using a common key

    - by rboarman
    Hello, I am trying to join two Dictionary collections together based on a common lookup value.

        var idList = new Dictionary<int, int>();
        idList.Add(1, 1);
        idList.Add(3, 3);
        idList.Add(5, 5);

        var lookupList = new Dictionary<int, int>();
        lookupList.Add(1, 1000);
        lookupList.Add(2, 1001);
        lookupList.Add(3, 1002);
        lookupList.Add(4, 1003);
        lookupList.Add(5, 1004);
        lookupList.Add(6, 1005);
        lookupList.Add(7, 1006);

        // Something like this:
        var q = from id in idList.Keys
                join entry in lookupList on entry.Key equals id
                select entry.Value;

    The Linq statement above is only an example and does not compile. For each entry in the idList, pull the value from the lookupList based on matching Keys. The result should be a list of Values from lookupList (1000, 1002, 1004). What's the easiest way to do this using Linq? Thank you, Rick


  • Join Query Not Working

    - by John
    Hello, I am using three MySQL tables:

        comment: commentid, loginid, submissionid, comment, datecommented
        login: loginid, username, password, email, actcode, disabled, activated, created, points
        submission: submissionid, loginid, title, url, displayurl, datesubmitted

    In these three tables, the "loginid" columns correspond. I would like to pull the top 10 loginids based on the number of "submissionid"s. I would like to display them in a 3-column HTML table that shows the "username" in the first column, the number of "submissionid"s in the second column, and the number of "commentid"s in the third column. I tried using the query below but it did not work. Any idea why not? Thanks in advance, John

        $sqlStr = "SELECT l.username
                         ,l.loginid
                         ,c.commentid
                         ,count(s.commentid) countComments
                         ,c.comment
                         ,c.datecommented
                         ,s.submissionid
                         ,count(s.submissionid) countSubmissions
                         ,s.title
                         ,s.url
                         ,s.displayurl
                         ,s.datesubmitted
                   FROM comment AS c
                   INNER JOIN login AS l ON c.loginid = l.loginid
                   INNER JOIN submission AS s ON c.loginid = s.loginid
                   GROUP BY c.loginid
                   ORDER BY countSubmissions DESC
                   LIMIT 10";

        $result = mysql_query($sqlStr);
        $arr = array();
        echo "<table class=\"samplesrec1\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www...com/.../members/index.php?profile='.$row["username"].'">'.stripslashes($row["username"]).'</a></td>';
            echo '</tr>';
            echo '<td class="sitename1">'.stripslashes($row["countSubmissions"]).'</td>';
            echo '</tr>';
            echo '</tr>';
            echo '<td class="sitename1">'.stripslashes($row["countComments"]).'</td>';
            echo '</tr>';
        }
        echo "</table>";
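
    One common reason a query like this misbehaves is that joining comment and submission both on loginid multiplies the rows per user, so both counts get inflated. A minimal sketch of an alternative (an untested outline using the question's table and column names): aggregate submissions and comments in separate derived tables, then join each to login once.

        -- sketch: each derived table yields one row per loginid, so the counts don't multiply
        SELECT l.username,
               COALESCE(s.countSubmissions, 0) AS countSubmissions,
               COALESCE(c.countComments, 0)    AS countComments
        FROM login l
        LEFT JOIN (SELECT loginid, COUNT(*) AS countSubmissions
                   FROM submission
                   GROUP BY loginid) s ON s.loginid = l.loginid
        LEFT JOIN (SELECT loginid, COUNT(*) AS countComments
                   FROM comment
                   GROUP BY loginid) c ON c.loginid = l.loginid
        ORDER BY countSubmissions DESC
        LIMIT 10;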


  • Please help optimizing a long running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is in the second subquery - it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, however an EXPLAIN on the second shows 'Using temporary; Using filesort'. Can anyone help me optimize this? By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the max function there which is not being used in the result set), is making sure that only one record from the right part of the join is included in the result set. My table create statements are:

        CREATE TABLE `tdata` (
          `BN` varchar(15) DEFAULT NULL,
          `4000` varchar(3) DEFAULT NULL,
          `5800` varchar(3) DEFAULT NULL,
          ....
          KEY `BN` (`BN`),
          KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
          `BN` varchar(15) DEFAULT NULL,
          `FPE` datetime DEFAULT NULL,
          `Activity Description` text,
          ....
          KEY `BN` (`BN`),
          KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!
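
    For reference, a common MySQL pattern for "one row per bn with its latest FPE" that avoids grouping the whole pdata table is the exclusion self-join below. This is only a sketch against the pdata definition above (not a tested rewrite of the full query), and whether it actually beats the GROUP BY here would need to be measured:

        -- sketch: keep only the pdata rows whose FPE no other row with the same bn exceeds
        SELECT p1.bn, p1.FPE, p1.`Activity Description`
        FROM `pdata` p1
        LEFT OUTER JOIN `pdata` p2
               ON p2.bn = p1.bn
              AND p2.FPE > p1.FPE
        WHERE p2.bn IS NULL;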


  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query:

        SELECT i2.user_id, count(i2.interest_id) AS count
        FROM interests_users as i1, interests_users as i2
        WHERE i1.interest_id = i2.interest_id
          AND i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY count DESC
        LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query:

        SELECT user_id, count(interest_id) count
        FROM interests_users
        WHERE interest_id IN (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,
                              40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,
                              68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,
                              95,96,97,98,508)
        GROUP BY user_id
        ORDER BY count DESC
        LIMIT 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time it takes to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.
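
    Two small, hedged ideas in SQL form (untested sketches, not benchmarked against this data set): exclude the user from their own result, and make sure a composite index on (interest_id, user_id) exists so the self-join can be resolved entirely from the index. The index name below is made up.

        -- sketch: a covering index for the interest_id self-join
        ALTER TABLE interests_users ADD INDEX idx_interest_user (interest_id, user_id);

        -- sketch: same query shape as above, but the user is not counted as their own best match
        SELECT i2.user_id, COUNT(*) AS shared
        FROM interests_users i1
        JOIN interests_users i2
          ON i2.interest_id = i1.interest_id
         AND i2.user_id <> i1.user_id
        WHERE i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY shared DESC
        LIMIT 20;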

