Search Results

Search found 31328 results on 1254 pages for 'sql join'.

Page 477/1254 | < Previous Page | 473 474 475 476 477 478 479 480 481 482 483 484  | Next Page >

  • How do I get a right outer join in L2E?

    - by Dan
    I have two tables that I set up through the VS Entity Data Model Diagram tool. I'm trying to do a right outer join and it doesn't return results from the 2nd table. I have set up a 0..1 to MANY relationship from the diagram tool. When I run a Linq-To-Entities query, it still defaults to an INNER JOIN. From my understanding of entities, if I set up the relationship using VS, when I join the tables, it should automagically figure out the join syntax based on the relationship I supply. It doesn't seem to be doing that. I am using EF v1 (not Linq-to-Sql). Query I'm running: from s in SomeTable join t in SomeOtherTable on s.ID equals t.ID select new { s.MyFieldName, t.MyOtherFieldName }
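    [A sketch of the SQL shape being asked for, reusing the hypothetical SomeTable/SomeOtherTable names from the question. In L2E, an outer join is typically spelled with join … into plus DefaultIfEmpty(), so the right-join effect usually comes from writing the query from the other table's side:]

        SELECT s.MyFieldName, t.MyOtherFieldName
        FROM SomeTable s
        RIGHT OUTER JOIN SomeOtherTable t ON s.ID = t.ID

        -- equivalent left-join form, with the table order flipped:
        SELECT s.MyFieldName, t.MyOtherFieldName
        FROM SomeOtherTable t
        LEFT OUTER JOIN SomeTable s ON s.ID = t.ID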

    Read the article

  • JavaScript: String Concatenation slow performance? Array.join('')?

    - by NickNick
    I've read that if I have a for loop, I should not use string concatenation because it's slow. Such as:

        for (i=0;i<10000000;i++) { str += 'a'; }

    And instead, I should use Array.join(), since it's much faster:

        var tmp = [];
        for (i=0;i<10000000;i++) { tmp.push('a'); }
        var str = tmp.join('');

    However, I have also read that string concatenation is ONLY a problem for Internet Explorer, and that browsers such as Safari/Chrome, which use WebKit, actually perform FASTER using string concatenation than Array.join(). I've attempted to find a performance comparison of string concatenation vs Array.join() across all major browsers and haven't been able to find one. As such, which is the faster and more efficient JavaScript approach: string concatenation or Array.join()?

    Read the article

  • Opening an SQL CE file at runtime with Entity Framework 4

    - by David Veeneman
    I am getting started with Entity Framework 4, and I am creating a demo app as a learning exercise. The app is a simple documentation builder, and it uses a SQL CE store. Each documentation project has its own SQL CE data file, and the user opens one of these files to work on a project. The EDM is very simple. A documentation project is comprised of a list of subjects, each of which has a title, a description, and zero or more notes. So, my entities are Subject, which contains Title and Text properties, and Note, which has Title and Text properties. There is a one-to-many association from Subject to Note.

    I am trying to figure out how to open an SQL CE data file. A data file must match the schema of the SQL CE database created by EF4's Create Database Wizard, and I will implement a New File use case elsewhere in the app to meet that requirement. Right now, I am just trying to get an existing data file open in the app. I have reproduced my existing 'Open File' code below. I have set it up as a static service class called FileServices. The code isn't working quite yet, but there is enough to show what I am trying to do. I am trying to hold the ObjectContext open for entity object updates, disposing it when the file is closed. So, here is my question: Am I on the right track? What do I need to change to make this code work with EF4? Is there an example of how to do this properly? Thanks for your help. My existing code:

        public static class FileServices
        {
            #region Private Fields

            // Member variables
            private static EntityConnection m_EntityConnection;
            private static ObjectContext m_ObjectContext;

            #endregion

            #region Service Methods

            /// <summary>
            /// Opens an SQL CE database file.
            /// </summary>
            /// <param name="filePath">The path to the SQL CE file to open.</param>
            /// <param name="viewModel">The main window view model.</param>
            public static void OpenSqlCeFile(string filePath, MainWindowViewModel viewModel)
            {
                // Configure an SQL CE connection string
                var sqlCeConnectionString = string.Format("Data Source={0}", filePath);

                // Configure an EDM connection string
                var builder = new EntityConnectionStringBuilder();
                builder.Metadata = "res://*/EF4Model.csdl|res://*/EF4Model.ssdl|res://*/EF4Model.msl";
                builder.Provider = "System.Data.SqlServerCe";
                builder.ProviderConnectionString = sqlCeConnectionString;
                var entityConnectionString = builder.ToString();

                // Connect to the model
                m_EntityConnection = new EntityConnection(entityConnectionString);
                m_EntityConnection.Open();

                // Create an object context
                m_ObjectContext = new Model1Container();

                // Get all Subject data
                IQueryable<Subject> subjects = from s in Subjects orderby s.Title select s;

                // Set view model data property
                viewModel.Subjects = new ObservableCollection<Subject>(subjects);
            }

            /// <summary>
            /// Closes an SQL CE database file.
            /// </summary>
            public static void CloseSqlCeFile()
            {
                m_EntityConnection.Close();
                m_ObjectContext.Dispose();
            }

            #endregion
        }

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices, STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:

      - You and Your Team
      - Environments
      - The Deployment Process
      - Rollback and Recovery
      - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

      - 2008 to present: overall development costs reduced by 40%
      - Number of programs under development increased by 140%
      - Development costs per program down 78%
      - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
      - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
      - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

      - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
      - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
      - What we want: Separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
      - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
      - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

      - Dedicated development databases,
      - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
      - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
      - Pre-production – the environment you use to test the production release process,
      - Production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
      - Get your environments set up and ready,
      - Set access permissions appropriately,
      - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

      1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
      2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
      3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
      4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
      5. If all working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
      - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
      - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

    There are various options, which we’ll explore in subsequent articles, things like:
      - Immediately restore from backup,
      - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
      - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
      - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily); a minimal sketch of this step-wise pattern appears at the end of this post. There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:
      - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes),
      - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
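    [A minimal sketch of the step-wise, backward-compatible pattern described above, using a hypothetical Customer table and column names that are not from the article:]

        -- Step 1: add the new structure alongside the old (backward compatible;
        -- existing code keeps working, and the step can be reversed without data loss)
        ALTER TABLE Customer ADD Email VARCHAR(255) NULL;

        -- Step 2: backfill, and keep old and new in sync from the application
        -- (or a trigger) while callers migrate across
        UPDATE Customer SET Email = ContactEmail WHERE Email IS NULL;

        -- Step 3: only after every caller reads and writes the new column,
        -- remove the old one (the first irreversible step, taken last)
        ALTER TABLE Customer DROP COLUMN ContactEmail;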

    Read the article

  • NHibernate Query across multiple tables

    - by Dai Bok
    I am using NHibernate, and am trying to figure out how to write a query that searches all the names of my entities and lists the results. As a simple example, I have the following objects:

        public class Cat { public string name { get; set; } }
        public class Dog { public string name { get; set; } }
        public class Owner
        {
            public string firstname { get; set; }
            public string lastname { get; set; }
        }

    Eventually I want to create a query, say for example, which returns all the pet owners with a name containing "ted", OR pets with a name containing "ted". Here is an example of the SQL I want to execute:

        SELECT TOP 10 d.*, c.*, o.*
        FROM owners AS o
        INNER JOIN dogs AS d ON o.id = d.ownerId
        INNER JOIN cats AS c ON o.id = c.ownerId
        WHERE o.lastname like '%ted%'
           OR o.firstname like '%ted%'
           OR c.name like '%ted%'
           OR d.name like '%ted%'

    When I do it using Criteria like this:

        var criteria = session.CreateCriteria<Owner>()
            .Add(Restrictions.Disjunction()
                .Add(Restrictions.Like("FirstName", keyword, MatchMode.Anywhere))
                .Add(Restrictions.Like("LastName", keyword, MatchMode.Anywhere)))
            .CreateCriteria("Dog").Add(Restrictions.Like("Name", keyword, MatchMode.Anywhere))
            .CreateCriteria("Cat").Add(Restrictions.Like("Name", keyword, MatchMode.Anywhere));
        return criteria.List<Owner>();

    The following query is generated:

        SELECT TOP 10 d.*, c.*, o.*
        FROM owners AS o
        INNER JOIN dogs AS d ON o.id = d.ownerId
        INNER JOIN cats AS c ON o.id = c.ownerId
        WHERE o.lastname like '%ted%'
           OR o.firstname like '%ted%'
          AND d.name like '%ted%'
          AND c.name like '%ted%'

    How can I adjust my query so that the .CreateCriteria("Dog") and .CreateCriteria("Cat") generate an OR instead of the AND? Thanks for your help.

    Read the article

  • Common Table Expressions slow when using a table variable

    - by Phil Haselden
    I have been experimenting with the following (simplified) CTE. When using a table variable (@rootTableVar) the query runs for minutes before I cancel it. Any of the other commented-out methods return in less than a second. If I replace the whole WHERE clause with an INNER JOIN it is fast as well. Any ideas why using a table variable would run so slowly? FWIW: the database contains 2.5 million records and the inner query returns 2 records.

        CREATE TABLE #rootTempTable (RootID int PRIMARY KEY)
        INSERT INTO #rootTempTable VALUES (1360);

        DECLARE @rootTableVar TABLE (RootID int PRIMARY KEY);
        INSERT INTO @rootTableVar VALUES (1360);

        WITH My_CTE AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY d.DocumentID) rownum, d.DocumentID, d.Title
            FROM [Document] d
            WHERE d.LocationID IN
            (
                SELECT LocationID FROM Location
                JOIN @rootTableVar rtv ON Location.RootID = rtv.RootID -- VERY SLOW!
                --JOIN #rootTempTable tt ON Location.RootID = tt.RootID -- Fast
                --JOIN (SELECT 1360 as RootID) AS rt ON Location.RootID = rt.RootID -- Fast
                --WHERE RootID = 1360 -- Fast
            )
        )
        SELECT * FROM My_CTE
        WHERE (rownum > 0) AND (rownum <= 100)
        ORDER BY rownum
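    [A likely explanation, offered as a hedge rather than a diagnosis: table variables carry no column statistics, so the optimizer typically plans for a single row and can choose a very poor join strategy when that estimate is wrong. A common workaround, sketched below on the query from the question, is OPTION (RECOMPILE), which compiles the plan after the table variable is populated; the other is simply to stay with the #temp table, which does have statistics.]

        WITH My_CTE AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY d.DocumentID) rownum, d.DocumentID, d.Title
            FROM [Document] d
            WHERE d.LocationID IN (SELECT LocationID
                                   FROM Location
                                   JOIN @rootTableVar rtv ON Location.RootID = rtv.RootID)
        )
        SELECT * FROM My_CTE
        WHERE (rownum > 0) AND (rownum <= 100)
        ORDER BY rownum
        OPTION (RECOMPILE); -- plan is built with the table variable's actual row count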

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But, I can't seem to get a particular query working right now (each row contains NULL when it should contain a created JSON object with the values of multiple JOINs). Here's the general idea:

        SELECT CONCAT(
            "{",
                "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st ) ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
               ON st.id = at.some_id
              AND at.different_id = ao.different_id
              AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
               ON st.id = at2.some_id
              AND at2.different_id = ao2.different_id
              AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing 1 LEFT JOIN & GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs from the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
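    [A probable cause: in MySQL, CONCAT() returns NULL if any argument is NULL, and GROUP_CONCAT() returns NULL for a group where the LEFT JOIN contributed no rows, which would blank out the whole cool_json column exactly as described. A sketch of the usual guard (the FROM/JOIN/GROUP BY clauses are unchanged from the query above); note too that the stray comma before the closing brace would leave the JSON invalid even once the NULLs are fixed:]

        SELECT CONCAT(
            "{",
                "\"some_list\":[",  COALESCE(GROUP_CONCAT( DISTINCT t1.id ), ""), "],",
                "\"other_list\":[", COALESCE(GROUP_CONCAT( DISTINCT t2.id ), ""), "]",
            "}"
        ) cool_json
        -- FROM / JOIN / GROUP BY / ORDER BY as in the original query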

    Read the article

  • Complicated SQL query

    - by Yandawl
    Please bear with me, this is difficult to explain xD http://img90.imageshack.us/i/croppercapture1.png/ This is based on an undergraduate degree course where a student takes many units (say 4 core units and 1 optional unit per year). tblAwardCoreUnits and tblAwardOptUnits store which units are optional and core for which award, hence the relationship to tblAward, and tblStudentCoreUnits and tblStudentOptUnits store the particular instances of those units which a particular student is taking. Secondly, a unit can have multiple events (say a lecture and a unit) and each of those events has sessions in which a student can attend, hence tblEvents, tblSessions and tblAttendances.

    The query I am trying to produce is to get a list of all level one students, grouped by their award, that lists the percentage of attendances in all the units in the current level. I've tried and tried with this and the following is the best I've managed to come up with so far... I'd REALLY appreciate any help you can give with this!

        SELECT tblStudents.enrolmentNo, tblStudents.forename, tblStudents.surname, tblAwards.title,
            (SELECT COUNT((tblAttendances.attended + tblAttendances.authorisedAbsence)) AS SumOfAttendances
             FROM tblAttendances
             INNER JOIN (tblStudents ON tblStudents.enrolmentNo = tblAttendances.enrolmentNo)) /
        FROM tblUnits, tblAwards
        INNER JOIN ((tblStudents
            INNER JOIN tblStudentOptUnit ON tblStudents.studentID = tblStudentOptUnit.studentID)
            INNER JOIN tblStudentCoreUnit ON tblStudents.studentID = tblStudentCoreUnit.studentID)
            ON tblAwards.awardID = tblStudents.awardID
        WHERE (((tblStudents.level)="1") AND ((tblStudents.status)="enrolled"))
        GROUP BY tblAwards.title
        ORDER BY tblStudents.forname;
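    [A sketch of one possible shape, hedging heavily on the schema: it assumes tblAttendances holds one row per student per session, with attended and authorisedAbsence as 0/1 flags of which at most one is set, so their average over all rows is the attendance fraction. The correlated subquery and its division then disappear into a single conditional aggregate:]

        SELECT tblAwards.title, tblStudents.enrolmentNo, tblStudents.forename, tblStudents.surname,
               Avg(tblAttendances.attended + tblAttendances.authorisedAbsence) * 100 AS attendancePct
        FROM (tblAwards
        INNER JOIN tblStudents ON tblStudents.awardID = tblAwards.awardID)
        INNER JOIN tblAttendances ON tblAttendances.enrolmentNo = tblStudents.enrolmentNo
        WHERE tblStudents.level = "1" AND tblStudents.status = "enrolled"
        GROUP BY tblAwards.title, tblStudents.enrolmentNo, tblStudents.forename, tblStudents.surname
        ORDER BY tblAwards.title, tblStudents.surname;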

    Read the article

  • How to iterate over each record in LINQ and show it in the view

    - by Sadegh
    Hi, I have four tables in my database and I want to join them, return the records, and show them in my SearchController! My query is this:

        public IQueryable PerformSearch(string query)
        {
            if (!string.IsNullOrEmpty(query))
            {
                var results = from tbl1 in context.Table1
                              join tbl2 in context.Table2 on tbl1.Id equals tbl2.Id
                              join tbl3 in context.Table3 on tbl2.Id equals tbl3.Id
                              join tbl4 in context.Table4 on tbl3.Id equals tbl4.Id
                              where tbl1.col2.Contains(query)
                              orderby tbl1.Count descending
                              select new
                              {
                                  col1 = tbl1.Col1,
                                  col1 = tbl1.Col1,
                                  col1 = tbl1.Col1,
                                  . . .
                              };
                return results.AsQueryable();
            }
            else
                return null;
        }

    And this method is called in SearchController as below:

        public class SearchController : System.Web.Mvc.Controller
        {
            public System.Web.Mvc.ActionResult Search(System.String query)
            {
                var search = new Search();
                ViewData["result"] = search.PerformSearch(query);
                return View("Search");
            }
        }

    I don't know how I can loop over each record (with VS IntelliSense support) returned by the PerformSearch method and show it in the view! Also, is this a good way? Thanks in advance.

    Read the article

  • T-SQL Query, combine columns from multiple rows into single column

    - by Shayne
    I have seen some examples of what I am trying to do using COALESCE and FOR XML (seems like the better solution). I just can't quite get the syntax right. Here is what I have (I will shorten the fields to only the key ones):

        Table              Fields
        ------             -------------------------------
        Requisition        ID, Number
        IssuedPO           ID, Number
        Job                ID, Number
        Job_Activity       ID, JobID (fkey)
        RequisitionItems   ID, RequisitionID (fkey), IssuedPOID (fkey), Job_ActivityID (fkey)

    I need a query that will list ONE Requisition per line with its associated Jobs and IssuedPOs. (The Requisition numbers start with "R-" and the Job numbers start with "J-".) Example:

        R-123 | "PO1; PO2; PO3" | "J-12345; J-6780"

    Sure thing Adam! Here is a query that returns multiple rows. I have to use outer joins, since not all Requisitions have RequisitionItems that are assigned to Jobs and/or IssuedPOs (in that case their fkey IDs would just be null of course).

        SELECT DISTINCT Requisition.Number, IssuedPO.Number, Job.Number
        FROM Requisition
        INNER JOIN RequisitionItem ON RequisitionItem.RequisitionID = Requisition.ID
        LEFT OUTER JOIN Job_Activity ON RequisitionItem.JobActivityID = Job_Activity.ID
        LEFT OUTER JOIN Job ON Job_Activity.JobID = Job.ID
        LEFT OUTER JOIN IssuedPO ON RequisitionItem.IssuedPOID = IssuedPO.ID
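    [A sketch of the FOR XML PATH idiom applied to the tables above, using the column names from the question (the table is spelled RequisitionItems as in the field list). STUFF strips the leading "; " separator; where a requisition has no matches, the subquery yields NULL, preserving the outer-join behaviour. Note that XML-special characters in Number would be escaped; the TYPE/.value() variant avoids that if it matters:]

        SELECT r.Number AS RequisitionNumber,
               STUFF((SELECT '; ' + p.Number
                      FROM RequisitionItems ri
                      INNER JOIN IssuedPO p ON p.ID = ri.IssuedPOID
                      WHERE ri.RequisitionID = r.ID
                      FOR XML PATH('')), 1, 2, '') AS IssuedPOs,
               STUFF((SELECT '; ' + j.Number
                      FROM RequisitionItems ri
                      INNER JOIN Job_Activity ja ON ja.ID = ri.Job_ActivityID
                      INNER JOIN Job j ON j.ID = ja.JobID
                      WHERE ri.RequisitionID = r.ID
                      FOR XML PATH('')), 1, 2, '') AS Jobs
        FROM Requisition r;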

    Read the article

  • Avoiding repeated subqueries when 'WITH' is unavailable

    - by EloquentGeek
    MySQL v5.0.58. Tables, with foreign key constraints etc. and other non-relevant details omitted for brevity:

        CREATE TABLE `fixture` (
          `id` int(11) NOT NULL auto_increment,
          `competition_id` int(11) NOT NULL,
          `name` varchar(50) NOT NULL,
          `scheduled` datetime default NULL,
          `played` datetime default NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `result` (
          `id` int(11) NOT NULL auto_increment,
          `fixture_id` int(11) NOT NULL,
          `team_id` int(11) NOT NULL,
          `score` int(11) NOT NULL,
          `place` int(11) NOT NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `team` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(50) NOT NULL,
          PRIMARY KEY (`id`)
        );

    Where:
      - A draw will set result.place to 0
      - result.place will otherwise contain an integer representing first place, second place, and so on

    The task is to return a string describing the most recently played result in a given competition for a given team. The format should be "def Team X,Team Y" if the given team was victorious, "lost to Team X" if the given team lost, and "drew with Team X" if there was a draw. And yes, in theory there could be more than two teams per fixture (though 1 v 1 will be the most common case). This works, but feels really inefficient:

        SELECT CONCAT(
          (SELECT CASE `result`.`place` WHEN 0 THEN "drew with" WHEN 1 THEN "def" ELSE "lost to" END
           FROM `result`
           WHERE `result`.`fixture_id` =
             (SELECT `fixture`.`id`
              FROM `fixture`
              LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
              WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
              ORDER BY `fixture`.`played` DESC LIMIT 1)
           AND `result`.`team_id` = 1),
          ' ',
          (SELECT GROUP_CONCAT(`team`.`name`)
           FROM `fixture`
           LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
           LEFT JOIN `team` ON `result`.`team_id` = `team`.`id`
           WHERE `fixture`.`id` =
             (SELECT `fixture`.`id`
              FROM `fixture`
              LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
              WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
              ORDER BY `fixture`.`played` DESC LIMIT 1)
           AND `team`.`id` != 1)
        )

    Have I missed something really obvious, or should I simply not try to do this in one query? Or does the current difficulty reflect a poor table design?
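    [Without WITH, the repeated "latest fixture" subquery can still be evaluated once by moving it into a derived table in the FROM clause and joining everything to it. A sketch, keeping the same hard-coded competition 2 / team 1 as above (derived tables with LIMIT are fine in MySQL 5.0, unlike LIMIT inside IN subqueries):]

        SELECT CONCAT(
                 CASE mine.place WHEN 0 THEN 'drew with' WHEN 1 THEN 'def' ELSE 'lost to' END,
                 ' ',
                 GROUP_CONCAT(t.name)) AS summary
        FROM (SELECT f.id
              FROM fixture f
              JOIN result r ON r.fixture_id = f.id
              WHERE f.competition_id = 2 AND r.team_id = 1
              ORDER BY f.played DESC
              LIMIT 1) AS last_fix
        JOIN result mine   ON mine.fixture_id = last_fix.id AND mine.team_id = 1
        JOIN result theirs ON theirs.fixture_id = last_fix.id AND theirs.team_id <> 1
        JOIN team t        ON t.id = theirs.team_id
        GROUP BY mine.place;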

    Read the article

  • Help converting subquery to query with joins

    - by Tim
    I'm stuck on a query with a join. The client's site is running MySQL 4, so a subquery isn't an option. My attempts to rewrite using a join aren't going too well. I need to select all of the contractors listed in the contractors table who are not in the contractors2label table with a given label ID & county ID. Yet, they might be listed in contractors2label with other label and county IDs.

        Table: contractors
          cID (primary, autonumber)
          company (varchar)
          ...etc...

        Table: contractors2label
          cID
          labelID
          countyID
          psID

    This query with a subquery works:

        SELECT company, contractors.cID
        FROM contractors
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors.cID NOT IN
            (SELECT contractors2label.cID
             FROM contractors2label
             WHERE labelID <> 1 AND countyID <> 1)

    I thought this query with a join would be the equivalent, but it returns no results. A manual scan of the data shows I should get 34 rows, which is what the subquery above returns.

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label ON contractors.cID = contractors2label.cID
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.labelID <> 1
          AND contractors2label.countyID <> 1
          AND contractors2label.cID IS NULL
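    [The likely fix: the labelID/countyID filters need to move from the WHERE clause into the join condition. In the WHERE clause they run after the outer join, and on the unmatched rows (the very rows the IS NULL test is meant to keep) those columns are NULL, so labelID <> 1 is never true and everything is filtered out. A sketch of the usual anti-join shape:]

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label
               ON contractors.cID = contractors2label.cID
              AND contractors2label.labelID <> 1
              AND contractors2label.countyID <> 1
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.cID IS NULL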

    Read the article

  • help with delete where not in query

    - by kralco626
    I have a lookup table (##lookup). I know it's bad design because I'm duplicating data, but it speeds up my queries tremendously. I have a query that populates this table:

        insert into ##lookup
        select distinct col1, col2, ... from table1...join...etc...

    I would like to simulate this behavior:

        delete from ##lookup
        insert into ##lookup
        select distinct col1, col2, ... from table1...join...etc...

    This would clearly update the table correctly. But this is a lot of inserting and deleting. It messes with my indexes and locks up the table for selecting from. This table could also be updated by something like:

        delete from ##lookup where not in (select distinct col1, col2, ... from table1...join...etc...)
        insert into ##lookup (select distinct col1, col2, ... from table1...join...etc...) except if it is already in the table

    The second way may take longer, but I can say "with no lock" and I will be able to select from the table. Any ideas on how to write the query the second way?
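    [One way to write the incremental version in T-SQL is a NOT EXISTS pair; the source select is abbreviated below exactly as in the question, so the real column and join lists go inside each src CTE. If any of the columns are nullable, the equality tests would need NULL handling:]

        -- Remove rows the source query no longer produces
        WITH src AS (SELECT DISTINCT col1, col2 FROM table1 /* ...join...etc... */)
        DELETE lk
        FROM ##lookup lk
        WHERE NOT EXISTS (SELECT 1 FROM src
                          WHERE src.col1 = lk.col1 AND src.col2 = lk.col2);

        -- Add rows that are new since the last refresh
        WITH src AS (SELECT DISTINCT col1, col2 FROM table1 /* ...join...etc... */)
        INSERT INTO ##lookup (col1, col2)
        SELECT s.col1, s.col2
        FROM src s
        WHERE NOT EXISTS (SELECT 1 FROM ##lookup lk
                          WHERE lk.col1 = s.col1 AND lk.col2 = s.col2);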

    Read the article

  • How do I compare 2 fields and return the lowest value of each record?

    - by BigRob
    I'm slowly learning Access to make a database of products and suppliers for my parents' business. What I've got is a table of products indexed by our product reference, and 2 more tables for 2 different suppliers that contain the suppliers' product references and prices, linked by our reference. I've made a query that performs a left outer join such that it returns a table of our products with each supplier's reference and price, i.e.:

        Ref | Product Name | Supplier 1 Ref | Supplier 1 Price | Supplier 2 Ref | Supplier 2 Price

    Here's the query I used:

        SELECT Catalog.Ref, Catalog.[Product Name], Catalog.Price,
               [D Products].[Supplier Ref], [D Products].Cost,
               [GS Products].[Supplier Ref], [GS Products].Cost
        FROM ([Catalog]
        LEFT JOIN [D Products] ON Catalog.Ref = [D Products].Ref)
        LEFT JOIN [GS Products] ON Catalog.Ref = [GS Products].Ref;

    Not all products are available from both suppliers, hence the outer join. What I want to do (with a query?) is to take the table produced by the query above and simply show the product reference, cheapest supplier reference and cheapest supplier price, i.e.:

        Ref | Cheapest Supplier Ref | Cheapest Supplier Price

    Unfortunately my SQL knowledge isn't quite good enough to figure this out, but if anyone can help I'd really appreciate it. Thanks, Rob
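    [A hedged sketch in Access SQL, assuming the join query above is saved as qryProductCosts with the ambiguous columns aliased (DRef/DCost for supplier 1, GSRef/GSCost for supplier 2; those names are made up for illustration). IIf picks the cheaper side, treating a missing price as "the other supplier wins":]

        SELECT q.Ref,
               IIf(q.DCost Is Not Null And (q.GSCost Is Null Or q.DCost <= q.GSCost),
                   q.DRef, q.GSRef) AS [Cheapest Supplier Ref],
               IIf(q.DCost Is Not Null And (q.GSCost Is Null Or q.DCost <= q.GSCost),
                   q.DCost, q.GSCost) AS [Cheapest Supplier Price]
        FROM qryProductCosts AS q;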

    Read the article

  • Duplicate information from sql result

    - by puddleJumper
    I looked at about 18 other posts on here, and most people are asking how to delete the records, not just hide them. So, my problem: I have a database with staff members who are associated with locations. Many of the staff members are associated with more than one location. What I want to do is display only the first location listed in the MySQL result and skip over the others. I have the SQL query linking the tables together, and it works aside from showing the same information multiple times for those staff members that are in more than one location. This is the SQL statement I have currently:

        SELECT staff_tbl.staffID, staff_tbl.firstName, staff_tbl.middleInitial, staff_tbl.lastName,
               location_tbl.locationID, location_tbl.staffID,
               officelocations_tbl.locationID, officelocations_tbl.officeName,
               staff_title_tbl.title_ID, staff_title_tbl.staff_ID,
               titles_tbl.titleID, titles_tbl.titleName
        FROM staff_tbl
        INNER JOIN location_tbl ON location_tbl.staffID = staff_tbl.staffID
        INNER JOIN officelocations_tbl ON location_tbl.locationID = officelocations_tbl.locationID
        INNER JOIN staff_title_tbl ON staff_title_tbl.staff_ID = staff_tbl.staffID
        INNER JOIN titles_tbl ON staff_title_tbl.title_ID = titles_tbl.titleID

    and my PHP is:

        <?php do { ?>
        <tr>
            <td><?php echo $row_rs_Staff_Info['firstName']; ?>&nbsp;
                <?php echo $row_rs_Staff_Info['lastName']; ?></td>
            <td><?php echo $row_rs_Staff_Info['titleName']; ?>&nbsp;</td>
            <td><?php echo $row_rs_Staff_Info['officeName']; ?>&nbsp;</td>
        </tr>
        <?php } while ($row_mysqlResult = mysql_fetch_assoc($rs_mysqlResult)); ?>

    What I would like to know is whether there is a way, using PHP, to select only the first entry listed for each person, display that, and just skip over the other two. I was thinking it could be done by possibly adding the staff IDs to an array and, if an ID is already in there, skipping the next one listed in the staff_title_tbl; but I wasn't quite sure how to write it that way. Any help would be great, thank you in advance.
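    [The skip can also be done in the query itself, which avoids transferring the duplicate rows at all. A sketch, assuming "first location" can be read as "lowest locationID"; if staff can also hold several titles, the same grouping trick applies to staff_title_tbl. On the PHP side, the seen-IDs array idea from the question works too: keep an array keyed by staffID and continue when the key already exists.]

        SELECT s.staffID, s.firstName, s.lastName, t.titleName, o.officeName
        FROM staff_tbl s
        INNER JOIN (SELECT staffID, MIN(locationID) AS locationID
                    FROM location_tbl
                    GROUP BY staffID) first_loc ON first_loc.staffID = s.staffID
        INNER JOIN officelocations_tbl o ON o.locationID = first_loc.locationID
        INNER JOIN staff_title_tbl st ON st.staff_ID = s.staffID
        INNER JOIN titles_tbl t ON t.titleID = st.title_ID;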

    Read the article

  • ISA Server 2006 "Global denied packets rate limit"

    - by lofi42
    Does anyone know how to change the "Global denied packets rate limit" on an ISA Server 2006 (SP1) on Windows 2003? We have a strange piece of software which issues multiple SQL queries, reaches this limit, and the ISA Server blocks the traffic. The flood protection option is already disabled on the ISA. SQLDB <= ISA <= SQL-Client

    Read the article

  • MySQL create stored procedure fails but all internal queries succeed alone?

    - by Mark
    Hi all, I just created a simple database in MySQL, and I am learning how to write stored procs. I'm familiar with M$SQL and as far as I can see the following should work:

        use mydb;
        -- --------------------------------------------------------------------------------
        -- Routine DDL
        -- --------------------------------------------------------------------------------
        DELIMITER //

        CREATE PROCEDURE mydb.doStats ()
        BEGIN
            CREATE TABLE IF NOT EXISTS resultprobability (
                ballNumber INT NOT NULL,
                probability FLOAT NULL,
                PRIMARY KEY (ballNumber)
            );

            CREATE TABLE IF NOT EXISTS drawProbability (
                drawDate DATE NOT NULL,
                ball1 INT NULL, ball2 INT NULL, ball3 INT NULL, ball4 INT NULL,
                ball5 INT NULL, ball6 INT NULL, ball7 INT NULL,
                score FLOAT NULL,
                PRIMARY KEY (drawDate)
            );

            TRUNCATE TABLE resultprobability;
            TRUNCATE TABLE drawprobability;

            INSERT INTO resultprobability (ballNumber, probability)
            (SELECT resultset.ballNumber ballNumber,
                    (COUNT(0) / (SELECT COUNT(0) FROM resultset)) probability
             FROM resultset
             GROUP BY resultset.ballNumber);

            INSERT INTO drawProbability (drawDate, ball1, ball2, ball3, ball4, ball5, ball6, ball7, score)
            (SELECT DISTINCT r.drawDate,
                    a.ballnumber ball1, b.ballnumber ball2, c.ballnumber ball3, d.ballnumber ball4,
                    e.ballnumber ball5, f.ballnumber ball6, g.ballnumber ball7,
                    ((a.probability + b.probability + c.probability + d.probability
                      + e.probability + f.probability + g.probability) / 7) score
             FROM resultset r
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 1) a ON a.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 2) b ON b.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 3) c ON c.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 4) d ON d.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 5) e ON e.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 6) f ON f.drawdate = r.drawDate
             INNER JOIN (SELECT r.drawDate, r.ballNumber, p.probability FROM resultset r INNER JOIN resultprobability p ON p.ballNumber = r.ballNumber WHERE r.appearence = 7) g ON g.drawdate = r.drawDate
             ORDER BY score DESC);
        END //

        DELIMITER ;

    Instead I get the following:

        Executed successfully in 0.002 s, 0 rows affected.
        Line 1, column 1

        Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 26
        Line 6, column 1

        Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')) probability from resultset group by resultset.ballNumber); INSERT INTO d' at line 1
        Line 31, column 51

        Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') score from resultset r inner join (select r.drawDate, r.ballNumber, p.probabi' at line 1
        Line 39, column 114

        Execution finished after 0.002 s, 3 error(s) occurred.

    What am I doing wrong? I seem to have exhausted my limited mental abilities!
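    [A hedged reading of those errors: each fragment fails "at line 1" at an inner ';', which is the classic symptom of a client tool splitting the script on every semicolon instead of honouring the DELIMITER directive, so the procedure body is sent to the server in pieces. The SQL itself may well be fine; it is worth trying the same script in the mysql command-line client, which does support DELIMITER. A minimal shape of what should go through in one piece:]

        DELIMITER //
        CREATE PROCEDURE mydb.doStats ()
        BEGIN
            SELECT 1;   -- placeholder: the full body above goes here; with the
                        -- delimiter changed, its inner ';' no longer end the statement
        END //
        DELIMITER ;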

    Read the article

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits". I have isolated the cause of these fits as CHECKDB being run almost continuously; the following log information is logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server):

        Event: 1073758961, Message: Starting up database 'DBName1'.
        Event: 1073758961, Message: Starting up database 'DBName2'.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.

    This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought that it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problem. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, which resolved the problem. However, at 0100 GMT this morning two other totally unrelated databases started showing the same problems. There is nothing in the event log to suggest that something happened to the server, such as a restart; there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken and the motherboard was replaced and the computer rebuilt. Memory and processor are the same.

    Stats:
        O/S: Windows 2008 Standard Build 6002
        CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz
        RAM: 2GB
        SQL: 2008 Standard 10.0.2531

    Edit: someone posted then deleted a comment about AutoClose; it was turned on on the databases affected. It seems that best practice is to disable it, so I have done that with the following:

        EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model''))
            EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')'

    I won't know if the problem recurs for some time, so I am still open to further answers.

    Read the article

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1GB and 10GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010, we will be increasing the size limits:

        SQL Azure Web Edition database: from 1 GB to 5 GB
        SQL Azure Business Edition database: from 10 GB to 50 GB

    Along with these changes come new price points, including the option to increase in increments of 10GB:

        Web Edition:
            Up to 1 GB relational database = $9.99 / month
            Up to 5 GB relational database = $49.95 / month
        Business Edition:
            Up to 10 GB relational database = $99.99 / month
            Up to 20 GB relational database = $199.98 / month
            Up to 30 GB relational database = $299.97 / month
            Up to 40 GB relational database = $399.96 / month
            Up to 50 GB relational database = $499.95 / month

    Check out the full SQL Azure pricing. Related Links: http://ukazure.ning.com UK community site; Getting started with the Windows Azure Platform.

    Read the article

  • SQuirrelSQL redirect output to a file?

    - by Oscar Reyes
    Does anybody know if the SQuirreL SQL client can send the output of several SQL commands to a single result window as plain text, or to a file (as SQL*Plus would), so I can run:

        select * from dual;
        select * from dual;

    and have both results in a single, ready-to-Ctrl-C format?

    Read the article

  • MySQL Stored Procedure

    - by xdevel2000
    I must convert some stored procedures from MS SQL Server to MySQL. In SQL Server I have these two variables: @@ERROR for a server error and @@IDENTITY for the last insert id. Are there similar global variables in MySQL?
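    [For the identity half, MySQL's LAST_INSERT_ID() function is the standard equivalent of @@IDENTITY (strictly, it is closer to SCOPE_IDENTITY(), since it is per-connection). There is no direct @@ERROR counterpart; inside a stored procedure, errors are caught with a declared handler instead. A minimal sketch, with t as a made-up example table:]

        CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, val INT);
        INSERT INTO t (val) VALUES (42);
        SELECT LAST_INSERT_ID();  -- id generated by the last INSERT on this connection

        -- Inside a procedure, the @@ERROR role is played by a handler, e.g.:
        -- DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET had_error = 1;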

    Read the article

  • multiple VM vs multiple named instance

    - by thushya
    Hi, I am looking for some comparison data for SQL 2008 deployment. What are the advantages and disadvantages of installing multiple VMs vs. multiple named instances? How can I save license costs using VMs vs. a physical server for SQL 2008? Is there a way to find out the maximum number of connections to the database at any time, or in the past? I need this to calculate the required number of CALs. Thanks.

    Read the article
