Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

Page 286/1102

  • Dynamic where clause using Linq to SQL in a join query in a MVC application

    - by jhoefnagels
    Dear .Net Linq experts, I am looking for a way to query for products in a catalog using filters on properties which have been assigned to the product based on the category to which the product belongs. So I have the following entities involved: Products [Id, CategoryId] Categories [Id] Properties [Id, CategoryId] PropertyValues [Id, PropertyId] ProductProperties [ProductId, PropertyValueId] When I add a product to the catalog, multiple ProductProperties will be added based on the category, and I would like to be able to filter all products from a category by selecting values for one or more properties. I will gather all filters, which I will hold in a list, by reading the URL. Now it is time to actually get the products based on multiple properties, and I have been trying to find the right strategy, but until now it does not really work. Is there a way to make this work without writing SQL? I was trying something like this: productsInCategory = ProductRepository.Where(p => p.Category.Name == category); foreach (PropertyFilter pf in filterList) { productsInCategory = (from product in productsInCategory join pp in ProductPropertyRepository on product.Id equals pp.ProductId where pp.PropertyValueId == pf.ValueId select product); }

    Read the article

  • SQL Compact allows only one WCF Client

    - by Andreas Hoffmann
    Hi, I am writing a little chat application. To save some info like username and password I store the data in an SQL Compact 3.5 SP1 database. Everything works fine, but if another client (the same .exe on the same machine) wants to access the service, an EndpointNotFound exception is thrown from ServiceReference.Class.Open() at the second client. When I remove the CE data access code (with an if (false)) I get no error. Where is the problem? I googled for this, but nobody seems to have the same error I get :( SOLUTION I used the wrapper in http://csharponphone.blogspot.com/2007/01/keeping-sqlceconnection-open-and-thread.html for thread safety, and now it works :) Client Code: public test() { var newCompositeType = new Client.ServiceReference1.CompositeType(); newCompositeType.StringValue = "Hallo" + DateTime.Now.ToLongTimeString(); newCompositeType.Save = (Console.ReadKey().Key == ConsoleKey.J); ServiceReference1.Service1Client sc = new Client.ServiceReference1.Service1Client(); sc.Open(); Console.WriteLine("Save " + newCompositeType.StringValue); sc.GetDataUsingDataContract(newCompositeType); sc.Close(); } Server Code: public CompositeType GetDataUsingDataContract(CompositeType composite) { if (composite.Save) { SqlCeConnection con = new SqlCeConnection(Properties.Settings.Default.Con); con.Open(); var com = con.CreateCommand(); com.CommandText = "SELECT * FROM TEST"; SqlCeResultSet result = com.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Updatable); var rec = result.CreateRecord(); rec["TextField"] = composite.StringValue; result.Insert(rec); result.Close(); result.Dispose(); com.Dispose(); con.Close(); con.Dispose(); } return composite; }

    Read the article

  • refactor LINQ TO SQL custom properties that instantiate datacontext

    - by Thiago Silva
    I am working on an existing ASP.NET MVC app that started small and has grown with time to require a good re-architecture and refactoring. One thing that I am struggling with is that we've got partial classes of the L2S entities so we could add some extra properties, but these props create a new data context and query the DB for a subset of data. This would be equivalent to doing the following in SQL, which is not a very good way to write this query as opposed to joins: SELECT tbl1.stuff, (SELECT nestedValue FROM tbl2 WHERE tbl2.Foo = tbl1.Bar), tbl1.moreStuff FROM tbl1 so in short here's what we've got in some of our partial entity classes: public partial class Ticket { public StatusUpdate LastStatusUpdate { get { //this static method call returns a new DataContext but needs to be refactored var ctx = OurDataContext.GetContext(); var su = Compiled_Query_GetLastUpdate(ctx, this.TicketId); return su; } } } We've got some functions that create a compiled query, but the issue is that we also have some DataLoadOptions defined in the DataContext, and because we instantiate a new data context for getting these nested properties, we get an exception "Compiled Queries across DataContexts with different LoadOptions not supported". The first DataContext is coming from a DataContextFactory that we implemented with the refactorings, but this second one is just hanging off the entity property getter. We're implementing the Repository pattern in the refactoring process, so we must stop doing stuff like the above. Does anyone know of a good way to address this issue?

    Read the article

  • Linq to sql DataContext cannot set load options after results have been returned

    - by David Liddle
    I have two tables A and B with a one-to-many relationship respectively. On some pages I would like to get a list of A objects only. On other pages I would like to load A with objects in B attached. This can be handled by setting the load options DataLoadOptions options = new DataLoadOptions(); options.LoadWith<A>(a => a.B); dataContext.LoadOptions = options; The trouble occurs when I first of all view all A's with load options, then go to edit a single A (do not use load options), and after edit return to the previous page. I understand why the error is occurring but not sure how to best get round this problem. I would like the DataContext to be loaded up per request. I thought I was achieving this by using StructureMap to load up my DataContext on a per request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories. ForRequestedType<MyDataContext>() .CacheBy(InstanceScope.PerRequest) .TheDefault.Is.Object(new MyDataContext()); ForRequestedType<IAService>() .TheDefault.Is.OfConcreteType<AService>(); ForRequestedType<IARepository>() .TheDefault.Is.OfConcreteType<ARepository>(); Here is a brief outline of my Repository public class ARepository : IARepository { private MyDataContext db; public ARepository(MyDataContext context) { db = context; } public void SetLoadOptions(DataLoadOptions options) { db.LoadOptions = options; } public IQueryable<A> Get() { return from a in db.A select a; } So my ServiceLayer, on View All, sets the load options and then gets all A's. On editing A my ServiceLayer should spin up a new DataContext and just fetch a list of A's. When sql profiling, I can see that when I go to the Edit page it is requesting A with B objects.

    Read the article

  • SQL deadlock on delete then bulk insert

    - by StarLite
    I have an issue with a deadlock in SQL Server that I haven't been able to resolve. Basically I have a large number of concurrent connections (from many machines) that are executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert. Essentially, the transaction looks like this: BEGIN TRANSACTION T1 DELETE FROM [TableName] WITH( XLOCK HOLDLOCK ) WHERE [Id]=@Id AND [SubId]=@SubId INSERT BULK [TableName] ( [Id] Int , [SubId] Int , [Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS ) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS) COMMIT TRANSACTION T1 The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap. When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements. I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping. The canonical deadlock graph for this error shows: Connection 1: Holds RangeX-X on PK_TableName Holds IX Page lock on the table Requesting X Page lock on the table Connection 2: Holds IX Page lock on the table Requests RangeX-X lock on the table What do I need to do in order to ensure that these deadlocks don't occur? I have been doing some reading on the RangeX-X locks and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?
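
    A common way to break this particular deadlock pattern is to serialize the delete/insert pairs per key with an application lock, so that two sessions working on the same (Id, SubId) never interleave. The sketch below is only an illustration (the resource name and timeout are made up) and assumes it runs inside the same transaction as the two statements:

        BEGIN TRANSACTION;

        -- Serialize concurrent writers that target the same Id/SubId range.
        DECLARE @resource nvarchar(255);
        DECLARE @lockResult int;
        SET @resource = 'TableName_' + CAST(@Id AS nvarchar(20)) + '_' + CAST(@SubId AS nvarchar(20));

        EXEC @lockResult = sp_getapplock
             @Resource = @resource,
             @LockMode = 'Exclusive',
             @LockOwner = 'Transaction',
             @LockTimeout = 10000;          -- wait up to 10 seconds

        IF @lockResult < 0
        BEGIN
            ROLLBACK TRANSACTION;
            RAISERROR('Could not acquire the application lock for this Id/SubId.', 16, 1);
            RETURN;
        END

        DELETE FROM [TableName] WHERE [Id] = @Id AND [SubId] = @SubId;
        -- ... bulk insert the replacement rows for the same Id/SubId here ...

        COMMIT TRANSACTION;                 -- the lock is released with the transaction

    With writers serialized per key this way, the XLOCK/HOLDLOCK hints (and the key-range locks they produce) are usually no longer needed.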

    Read the article

  • Help me with DB design

    - by eugeneK
    Hi, I'm developing a text ads system, a small clone of Google Ads. Here is a diagram with the common tables. To make it short: an advertiser can have up to 10 variants of the same campaign with different text variations, can geo-target his ads, and unique impressions count only for an IP that hasn't been on a certain site for more than 24 hours. Pretty simple, but the question is what I am missing here, from your experience, because later it would be much harder to fix design flaws. Some of you have probably done something alike, and there are many SQL gurus in here, so maybe I over-normalized the DB or did not normalize it as needed? Second question: my end goal is to get ads for a user from, e.g., Germany who hasn't seen the same ad on the same site for 24 hours, as long as the ads fit the user's country. Each impression is counted, the same as each click if there is one. I need to get 5 "random" ads based on IP, country and higher CPC (pay per click). How can I achieve this with the current design, or maybe design the database in a way that would make it easy to get ads and show stats for advertisers... thanks for any help...
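
    For the second question, a query along the following lines could pick the ads. It is only a sketch, and the table and column names (Ads, Impressions, CountryCode, CPC and so on) are assumptions rather than names taken from the diagram; NEWID() is used just to break ties between equally priced ads:

        -- Top 5 ads for the visitor's country that this IP has not seen on this site
        -- in the last 24 hours, preferring the highest CPC.
        SELECT TOP (5) a.AdId, a.Title, a.CPC
        FROM Ads AS a
        WHERE a.CountryCode = @CountryCode
          AND NOT EXISTS (SELECT 1
                          FROM Impressions AS i
                          WHERE i.AdId = a.AdId
                            AND i.IpAddress = @IpAddress
                            AND i.SiteId = @SiteId
                            AND i.CreatedAt > DATEADD(HOUR, -24, GETDATE()))
        ORDER BY a.CPC DESC, NEWID();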

    Read the article

  • LINQ-To-SQL and Mapping Table Deletions

    - by Jake
    I have a many-to-many relationship between two tables, let's say Friends and Foods. If a friend likes a food I stick a row into the FriendsFoods table, like this: ID Friend Food 1 'Tom' 'Pizza' FriendsFoods has a Primary Key 'ID', and two non-null foreign keys 'Friend' and 'Food' to the 'Friends' and 'Foods' tables, respectively. Now suppose I have a Friend tom .NET object corresponding to 'Tom', and Tom no longer likes pizza (what is wrong with him?) FriendsFoods ff = tblFriendsFoods.Where(x => x.Friend.Name == 'Tom' && x.Food.Name == 'Pizza').Single(); tom.FriendsFoods.Remove(ff); pizza.FriendsFoods.Remove(ff); If I try to SubmitChanges() on the DataContext, I get an exception because it attempts to insert a null into the Friend and Food columns in the FriendsFoods table. I'm sure I can put together some kind of convoluted logic to track changes to the FriendsFoods table, intercept SubmitChanges() calls, etc to try and get this to work the way I want, but is there a nice, clean way to remove a Many-To-Many relationship with LINQ-To-SQL?

    Read the article

  • Comparing RPG to C# and SQL.

    - by Kevin
    In an RPG program (One of IBM's languages on the AS/400) I can "chain" out to a file to see if a record (say, a certain customer record) exists in the file. If it does, then I can update that record instantly with new data. If the record doesn't exist, I can write a new record. The code would look like this: Customer Chain CustFile 71 ;turn on indicator 71 if not found if *in71 ;if 71 is "on" eval CustID = Customer; eval CustCredit = 10000; write CustRecord else ;71 not on, record found. CustCredit = 10000; update CustRecord endif Not being real familiar with SQL/C#, I'm wondering if there is a way to do a random retrieval from a file (which is what "chain" does in RPG). Basically I want to see if a record exists. If it does, update the record with some new information. If it does not, then I want to write a new record. I'm sure it's possible, but not quite sure how to go about doing it. Any advice would be greatly appreciated.
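
    In T-SQL the usual equivalents of that RPG pattern are an IF EXISTS test or, on SQL Server 2008 and later, a single MERGE. A minimal sketch against an assumed Customers(CustID, CustCredit) table, with @CustID standing in for the RPG Customer key:

        -- "Chain"-style lookup followed by update-or-insert.
        IF EXISTS (SELECT 1 FROM Customers WHERE CustID = @CustID)
            UPDATE Customers SET CustCredit = 10000 WHERE CustID = @CustID;
        ELSE
            INSERT INTO Customers (CustID, CustCredit) VALUES (@CustID, 10000);

        -- Or as one atomic statement (SQL Server 2008+):
        MERGE Customers AS target
        USING (SELECT @CustID AS CustID) AS source
        ON target.CustID = source.CustID
        WHEN MATCHED THEN
            UPDATE SET CustCredit = 10000
        WHEN NOT MATCHED THEN
            INSERT (CustID, CustCredit) VALUES (source.CustID, 10000);

    From C# either form would typically be run through a parameterized command or wrapped in a stored procedure.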

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established. There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult if not impossible to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms. The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable and can be used in Excel 2010, and is already included in Excel 2013, it brings OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question is applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema: Badges Id UserId Name Date This works well for badges like [Epic] and [Legendary] which have unique names, but the silver and gold tag-specific badges seems to be mixed in together by having the same exact name. Here's an example query I wrote for [mysql] tag: SELECT UserId as [User Link], Date FROM Badges Where Name = 'mysql' Order By Date ASC The (slightly annotated) output is: as seen on stexdex: User Link Date --------------- ------------------- // all for silver except where noted Bill Karwin 2009-02-20 11:00:25 Quassnoi 2009-06-01 10:00:16 Greg 2009-10-22 10:00:25 Quassnoi 2009-10-31 10:00:24 // for gold Bill Karwin 2009-11-23 11:00:30 // for gold cletus 2010-01-01 11:00:23 OMG Ponies 2010-01-03 11:00:48 Pascal MARTIN 2010-02-17 11:00:29 Mark Byers 2010-04-07 10:00:35 Daniel Vassallo 2010-05-14 10:00:38 This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms, as of the end of May 2010 only 2 users have earned the gold [mysql] tag: Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: The first time an Id appears (in chronological order) is for the silver badge The second time is for the gold Now, the above result mixes the silver and gold entries together. My questions are: Is this a typical design, or are there much friendlier schema/normalization/whatever you call it? In the current design, how would you query the silver and gold badges separately? GROUP BY Id and picking the min/max or first/second by the Date somehow? How can you write a query that lists all the silver badges first then all the gold badges next? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too many repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead? What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
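
    One way to separate the two levels, under the assumption described above (a user's first badge of a given name is silver, the second is gold), is to number each user's badges by date with ROW_NUMBER. A sketch against the schema shown:

        SELECT UserId AS [User Link], Date,
               CASE RowNum WHEN 1 THEN 'silver' ELSE 'gold' END AS BadgeLevel
        FROM (SELECT UserId, Date,
                     ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date ASC) AS RowNum
              FROM Badges
              WHERE Name = 'mysql') AS t
        ORDER BY RowNum, Date;   -- all silver rows first, then gold, each in date order

    Filtering on RowNum = 1 or RowNum = 2 gives the silver-only or gold-only lists without repeating the query body.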

    Read the article

  • sql server 2005 stored procedure unexpected behaviour

    - by user283405
    I have written a simple stored procedure (run as a job) that checks users' subscribed keyword alerts. When an article is posted, the stored procedure sends an email to those users whose subscribed keyword matches the article title. One section of my stored procedure is: OPEN @getInputBuffer FETCH NEXT FROM @getInputBuffer INTO @String WHILE @@FETCH_STATUS = 0 BEGIN --PRINT @String INSERT INTO #Temp(ArticleID,UserID) SELECT A.ID,@UserID FROM CONTAINSTABLE(Question,(Text),@String) QQ JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key] WHERE A.ID > @ArticleID FETCH NEXT FROM @getInputBuffer INTO @String END CLOSE @getInputBuffer DEALLOCATE @getInputBuffer This job runs every 5 minutes and checks the last 50 articles. It was working fine for the last 3 months, but a week ago it behaved unexpectedly: it sent irrelevant results. @String contains the user's alert keyword, and it is matched against the latest articles using full-text search. The normal execution time is 3 minutes, but while the problem lasted the execution time was 3 days. It is currently working fine again, but we are unable to find any reason why it sent irrelevant results. Note: I am already removing noise words from the user's alert keyword. I am using SQL Server 2005 Enterprise Edition.

    Read the article

  • SQL Joining Two or More from Table B with Common Data in Table A

    - by Matthew Frederick
    The real-world situation is a series of events that each have two or more participants (like sports teams, though there can be more than two in an event), only one of which is the host of the event. There is an Event db table for each unique event and a Participant db table with unique participants. They are joined together using a Matchup table. They look like this: Event EventID (PK) (other event data like the date, etc.) Participant ParticipantID (PK) Name Matchup EventID (FK to Event table) ParicipantID (FK to Participant) Host (1 or 0, only 1 host = 1 per EventID) What I'd like to get as a result is something like this: EventID ParticipantID where host = 1 Participant.Name where host = 1 ParticipantID where host = 0 Participant.Name where host = 0 ParticipantID where host = 0 Participant.Name where host = 0 ... Where one event has 2 participants and another has 3 participants, for example, the third participant column data would be null or otherwise noticeable, something like (PID = ParticipantID): EventID PID-1(host) Name-1 (host) PID-2 Name-2 PID-3 Name-3 ------- ----------- ------------- ----- ------ ----- ------ 1 7 Lions 8 Tigers 12 Bears 2 11 Dogs 9 Cats NULL NULL I suspect the answer is reasonably straightforward but for some reason I'm not wrapping my head around it. Alternately it's very difficult. :) I'm using MYSQL 5 if that affects the available SQL.
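
    MySQL 5 has no PIVOT or window functions, so producing a truly variable number of participant columns generally means pivoting in the application or building the SQL dynamically. If a concatenated list of the non-host participants is acceptable, a sketch like this works against the schema shown:

        SELECT m.EventID,
               MAX(CASE WHEN m.Host = 1 THEN m.ParticipantID END) AS HostId,
               MAX(CASE WHEN m.Host = 1 THEN p.Name          END) AS HostName,
               GROUP_CONCAT(CASE WHEN m.Host = 0 THEN p.Name END
                            ORDER BY p.Name SEPARATOR ', ')       AS OtherParticipants
        FROM Matchup m
        JOIN Participant p ON p.ParticipantID = m.ParticipantID
        GROUP BY m.EventID;

    The MAX(CASE ...) works because each event has exactly one host; GROUP_CONCAT simply skips the NULLs produced for the host row.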

    Read the article

  • Optimize an SQL statement

    - by kovshenin
    Hey, I'm running WordPress, the database diagram could be found here: http://codex.wordpress.org/Database_Description After doing tonnes of filters and applying some hooks to the core, I'm left with the following query: SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts JOIN wp_postmeta ppmeta_beds ON (ppmeta_beds.post_id = wp_posts.ID AND ppmeta_beds.meta_key = 'pp-general-beds' AND ppmeta_beds.meta_value >= 2) JOIN wp_postmeta ppmeta_baths ON (ppmeta_baths.post_id = wp_posts.ID AND ppmeta_baths.meta_key = 'pp-general-baths' AND ppmeta_baths.meta_value >= 3) JOIN wp_postmeta ppmeta_furnished ON (ppmeta_furnished.post_id = wp_posts.ID AND ppmeta_furnished.meta_key = 'pp-general-furnished' AND ppmeta_furnished.meta_value = 'yes') JOIN wp_postmeta ppmeta_pool ON (ppmeta_pool.post_id = wp_posts.ID AND ppmeta_pool.meta_key = 'pp-facilities-pool' AND ppmeta_pool.meta_value = 'yes') JOIN wp_postmeta ppmeta_pool_type ON (ppmeta_pool_type.post_id = wp_posts.ID AND ppmeta_pool_type.meta_key = 'pp-facilities-pool-type' AND ppmeta_pool_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness')) JOIN wp_postmeta ppmeta_sport ON (ppmeta_sport.post_id = wp_posts.ID AND ppmeta_sport.meta_key = 'pp-facilities-sport' AND ppmeta_sport.meta_value = 'yes') JOIN wp_postmeta ppmeta_sport_type ON (ppmeta_sport_type.post_id = wp_posts.ID AND ppmeta_sport_type.meta_key = 'pp-facilities-sport-type' AND ppmeta_sport_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness')) JOIN wp_postmeta ppmeta_parking ON (ppmeta_parking.post_id = wp_posts.ID AND ppmeta_parking.meta_key = 'pp-facilities-parking' AND ppmeta_parking.meta_value = 'yes') JOIN wp_postmeta ppmeta_parking_type ON (ppmeta_parking_type.post_id = wp_posts.ID AND ppmeta_parking_type.meta_key = 'pp-facilities-parking-type' AND ppmeta_parking_type.meta_value IN ('street', 'off-street', 'garage')) JOIN wp_postmeta ppmeta_garden ON (ppmeta_garden.post_id = wp_posts.ID AND ppmeta_garden.meta_key = 'pp-facilities-garden' AND ppmeta_garden.meta_value = 'yes') JOIN wp_postmeta ppmeta_garden_type ON (ppmeta_garden_type.post_id = wp_posts.ID AND ppmeta_garden_type.meta_key = 'pp-facilities-garden-type' AND ppmeta_garden_type.meta_value IN ('private', 'communal')) JOIN wp_postmeta ppmeta_type ON (ppmeta_type.post_id = wp_posts.ID AND ppmeta_type.meta_key = 'pp-general-type' AND ppmeta_type.meta_value IN ('villa', 'apartment', 'penthouse')) JOIN wp_postmeta ppmeta_status ON (ppmeta_status.post_id = wp_posts.ID AND ppmeta_status.meta_key = 'pp-general-status' AND ppmeta_status.meta_value IN ('off-plan', 'resale')) JOIN wp_postmeta ppmeta_location_type ON (ppmeta_location_type.post_id = wp_posts.ID AND ppmeta_location_type.meta_key = 'pp-location-type' AND ppmeta_location_type.meta_value IN ('beachfront', 'countryside', 'town-center', 'near-the-sea', 'hillside', 'private-resort')) JOIN wp_postmeta ppmeta_price_range ON (ppmeta_price_range.post_id = wp_posts.ID AND ppmeta_price_range.meta_key = 'pp-general-price' AND ppmeta_price_range.meta_value BETWEEN 10000 AND 50000) JOIN wp_postmeta ppmeta_area_range ON (ppmeta_area_range.post_id = wp_posts.ID AND ppmeta_area_range.meta_key = 'pp-general-area' AND ppmeta_area_range.meta_value BETWEEN 50 AND 150) WHERE 1=1 AND (((wp_posts.post_title LIKE '%fdsfsad%') OR (wp_posts.post_content LIKE '%fdsfsad%'))) AND wp_posts.post_type = 'property' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 10 It's way too big. 
Could anybody please show me a way of optimizing all those joins into fewer statements? As you can see they all use the same tables but under different names. I'm not an SQL guru but I think there should be a way, because this is insane ;) Thanks! Update Here's what explain returns: http://twitpic.com/1cd36p
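
    One standard rewrite for this kind of meta-value filtering is a single join to wp_postmeta with all the conditions OR'ed together, plus a HAVING clause that insists every condition matched. A shortened sketch with only three of the filters (the real query would list every meta_key condition and compare the count to that number); the "+ 0" forces a numeric comparison on the text meta_value column:

        SELECT SQL_CALC_FOUND_ROWS p.*
        FROM wp_posts p
        JOIN wp_postmeta m ON m.post_id = p.ID
        WHERE p.post_type = 'property'
          AND p.post_status IN ('publish', 'private')
          AND (   (m.meta_key = 'pp-general-beds'      AND m.meta_value + 0 >= 2)
               OR (m.meta_key = 'pp-general-baths'     AND m.meta_value + 0 >= 3)
               OR (m.meta_key = 'pp-general-furnished' AND m.meta_value = 'yes'))
        GROUP BY p.ID
        HAVING COUNT(DISTINCT m.meta_key) = 3   -- one match per filter
        ORDER BY p.post_date DESC
        LIMIT 0, 10;

    Whether this actually beats the sixteen-way join depends on the indexes on wp_postmeta, so it is worth comparing both plans with EXPLAIN.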

    Read the article

  • Updating a Foreign Key constraint with ON DELETE CASCADE not updating?

    - by Alastair Pitts
    We've realised in our SQL Server 2005 DB that some Foreign Keys don't have the On Delete Cascade property set, which is giving us a couple of referential errors when we try to delete some records. Using Management Studio I scripted the DROP and CREATE SQL, but it seems that the CREATE isn't working correctly. The DROP: USE [FootprintReports] GO IF EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'[dbo].[FK__SUBSCRIPTIONS_Reports]') AND parent_object_id = OBJECT_ID(N'[dbo].[_SUBSCRIPTIONS]')) ALTER TABLE [dbo].[_SUBSCRIPTIONS] DROP CONSTRAINT [FK__SUBSCRIPTIONS_Reports] and the CREATE USE [FootprintReports] GO ALTER TABLE [dbo].[_SUBSCRIPTIONS] WITH CHECK ADD CONSTRAINT [FK__SUBSCRIPTIONS_Reports] FOREIGN KEY([PARAMETER_ReportID]) REFERENCES [dbo].[Reports] ([ID]) ON DELETE CASCADE GO ALTER TABLE [dbo].[_SUBSCRIPTIONS] CHECK CONSTRAINT [FK__SUBSCRIPTIONS_Reports] If I manually change the value of the On Delete rule in the GUI, then drop and recreate the constraint, the On Delete rule shown isn't correctly updated. As a test, I set the Delete rule in the GUI to Set Null. It dropped correctly, and recreated without error. If I go back into the GUI, it is still showing Set Null as the Delete Rule. Have I done something wrong, or is there another way to edit a constraint to add the ON DELETE CASCADE rule?
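
    It can also help to ask the engine what it actually recorded for the constraint, rather than relying on the designer dialog, which shows whatever it loaded when it was opened. A quick check to run after the CREATE:

        SELECT fk.name,
               fk.delete_referential_action_desc,   -- expect CASCADE here
               fk.update_referential_action_desc
        FROM sys.foreign_keys AS fk
        WHERE fk.name = 'FK__SUBSCRIPTIONS_Reports';

    If this reports CASCADE while the table designer still shows something else, the constraint itself is fine and only the designer view is stale.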

    Read the article

  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options: (1) use nvarchar/nclob for those fields that need Greek (it is not all fields) — we have tested this and gotten it to work with Java coding; or (2) convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got that from the Oracle site/Oracle Support, and I know what is involved: lossy data, etc., and increasing the size of the database. My question is: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF-8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data. I also ran the Oracle Character Set Scanner (the Oracle command-line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with 3rd-party databases? These are customers of our data and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.

    Read the article

  • i want to search in sql server with must have parameter in one column

    - by sherif4csharp
    Hello, I am using C# and MS SQL Server 2008. I have a table like this:
    id | propertyTypeId | FinishingQualityId | title | Description | features
    1  | 1              | 2                  | prop1 | propDEsc1   | 1,3,5,7
    2  | 2              | 3                  | prop2 | propDEsc2   | 1,3
    3  | 6              | 5                  | prop3 | propDEsc3   | 1
    4  | 5              | 4                  | prop4 | propDEsc4   | 3,5
    5  | 4              | 6                  | prop5 | propDEsc5   | 5,7
    6  | 4              | 6                  | prop6 | propDEsc6   |
    and here is my stored procedure (it searches the same table): create procedure propertySearch @Id int = null, @pageSize float, @pageIndex int, @totalpageCount int output, @title nvarchar(150) = null, @propertyTypeid int = null, @finishingqualityid int = null, @features nvarchar(max) = null -- this parameter is sent like 1,3 (for example) as begin select row_number() over (order by id) as TempId, id, title, description, propertyTypeId, propertyType.name, finishingQualityId, finishingQuality.Name, features into #TempTable from property join propertyType on propertyType.id = property.propertyTypeId join finishingQuality on finishingQuality.id = property.finishingQualityId where property.id = isnull(@id, property.id) and property.PropertyTypeId = isnull(@propertyTypeid, property.propertyTypeId) select @totalpageCount = ((select count(*) from #TempTable) / @pageSize) select * from #TempTable where tempid between (@pageindex - 1) * @pagesize + 1 and (@pageindex * @pageSize) end go I can't filter the table here by the features I send; this table has too many rows. I want to extend this stored procedure to filter the data: for example, when I send 1,3 in the features parameter I want to get back rows one and two of the example table above (the returned rows must have every feature I send). Many thanks to everyone who has helped me and will help.

    Read the article

  • Best approach, Dynamic OpenXML in T-SQL

    - by Martin Ongtangco
    Hello, I'm storing XML values in a column in my database. Originally, I extract the xml datatype into my business logic and then fill the XML data into a DataSet. I want to improve this process by loading the XML right in the T-SQL, instead of getting the xml as a string and then converting it in the BL. My issue is this: each xml entry is dynamic, meaning it can contain any columns created by the user. I tried using this approach, but it's giving me an error: CREATE PROCEDURE spXMLtoDataSet @id uniqueidentifier, @columns varchar(max) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; DECLARE @name varchar(300); DECLARE @i int; DECLARE @xmlData xml; (SELECT @xmlData = data, @name = name FROM XmlTABLES WHERE (tableID = ISNULL(@id, tableID))); EXEC sp_xml_preparedocument @i OUTPUT, @xmlData DECLARE @tag varchar(1000); SET @tag = '/NewDataSet/' + @name; DECLARE @statement varchar(max) SET @statement = 'SELECT * FROM OpenXML(@i, @tag, 2) WITH (' + @columns + ')'; EXEC (@statement); EXEC sp_xml_removedocument @i END where I pass a dynamically written @columns. For example: spXMLtoDataSet 'bda32dd7-0439-4f97-bc96-50cdacbb1518', 'ID int, TypeOfAccident int, Major bit, Number_of_Persons int, Notes varchar(max)' but it keeps throwing me this exception: Msg 137, Level 15, State 2, Line 1 Must declare the scalar variable "@i". Msg 319, Level 15, State 1, Line 1 Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
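
    The "Must declare the scalar variable" error comes from the fact that @i and @tag exist only in the outer batch; a string run with EXEC() executes in its own scope and cannot see them. Passing them in through sp_executesql is one way around that. A sketch of just the dynamic part (the @columns text is still concatenated, so it has to come from a trusted source):

        DECLARE @stmt nvarchar(max);
        SET @stmt = N'SELECT * FROM OPENXML(@i, @tag, 2) WITH (' + @columns + N')';

        EXEC sp_executesql @stmt,
                           N'@i int, @tag varchar(1000)',
                           @i = @i,
                           @tag = @tag;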

    Read the article

  • Database not updating after UPDATE SQL statement in ASP.net

    - by Ronnie
    I currently have a problem attempting to update a record within my database. I have a webpage that displays a user's details in text boxes; these details are taken from the session upon login. The aim is to update the details when the user overwrites the current text in the text boxes. I have a function that runs when the user clicks the 'Save Details' button and it appears to work, as I have tested the number of rows affected and it outputs 1. However, when checking the database, the record has not been updated and I am unsure as to why. I have checked the SQL statement that is being processed by displaying it as a label and it looks like so: UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, [promo]=@promo WHERE ([users].[user_id] = 16) The function and other relevant code is: Sub Button1_Click(sender As Object, e As EventArgs) changeDetails(emailBox.text, firstBox.text, lastBox.text, promoBox.text) End Sub Function changeDetails(ByVal email As String, ByVal firstname As String, ByVal lastname As String, ByVal promo As String) As Integer Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0; Ole DB Services=-4; Data Source=C:\Documents an"& _ "d Settings\Paul Jarratt\My Documents\ticketoffice\datab\ticketoffice.mdb" Dim dbConnection As System.Data.IDbConnection = New System.Data.OleDb.OleDbConnection(connectionString) Dim queryString As String = "UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, "& _ "[promo]=@promo WHERE ([users].[user_id] = " + session.contents.item("ID") + ")" Dim dbCommand As System.Data.IDbCommand = New System.Data.OleDb.OleDbCommand dbCommand.CommandText = queryString dbCommand.Connection = dbConnection Dim dbParam_email As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_email.ParameterName = "@email" dbParam_email.Value = email dbParam_email.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_email) Dim dbParam_firstname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_firstname.ParameterName = "@firstname" dbParam_firstname.Value = firstname dbParam_firstname.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_firstname) Dim dbParam_lastname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_lastname.ParameterName = "@lastname" dbParam_lastname.Value = lastname dbParam_lastname.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_lastname) Dim dbParam_promo As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_promo.ParameterName = "@promo" dbParam_promo.Value = promo dbParam_promo.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_promo) Dim rowsAffected As Integer = 0 dbConnection.Open Try rowsAffected = dbCommand.ExecuteNonQuery Finally dbConnection.Close End Try labelTest.text = rowsAffected.ToString() if rowsAffected = 1 then labelSuccess.text = "* Your details have been updated and saved" else labelError.text = "* Your details could not be updated" end if End Function Any help would be greatly appreciated.

    Read the article

  • using a JOIN in an UPDATE in SQL

    - by SDLFunTimes
    Hi, I'm having trouble formulating a legal statement to double the statuses of the suppliers (s) who have shipped (sp) more than 500 units. I've been trying: update s set s.status = s.status * 2 from s join sp on (sp.sno = s.sno) group by sno having sum(qty) > 500; however I'm getting this error from Mysql: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'from s join sp on (sp.sno = s.sno) group by sno having sum(qty) > 500' at line 1 Does anyone have any ideas about what is wrong with this query? Here's my schema: create table s ( sno char(5) not null, sname char(20) not null, status smallint, city char(15), primary key (sno) ); create table p ( pno char(6) not null, pname char(20) not null, color char(6), weight smallint, city char(15), primary key (pno) ); create table sp ( sno char(5) not null, pno char(6) not null, qty integer not null, primary key (sno, pno) );
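
    MySQL's multi-table UPDATE syntax does not allow GROUP BY or HAVING directly, so the usual workaround is to aggregate the shipments in a derived table and join the UPDATE to it. A sketch against the schema shown:

        UPDATE s
        JOIN (SELECT sno
              FROM sp
              GROUP BY sno
              HAVING SUM(qty) > 500) AS big ON big.sno = s.sno
        SET s.status = s.status * 2;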

    Read the article

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore { ... public IUser this[Guid userId] { get { User user = (from u in _dataContext.Users where u.Id == userId select u).FirstOrDefault(); return user; } } ... } One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be prefered. The arguments being that: 1) We aren't indexing into an in-memory collection, the indexer is basically performing a hidden SQL query 2) Using a Guid in an indexer is bad (FxCop flagged this also) 3) Returning null from an indexer isn't normal behaviour 4) An API user generally wouldn't expect any of this behaviour I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it was a concrete in-memory collection here. Also bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.

    Read the article

  • LINQ to SQL and DataPager

    - by Jonathan S.
    I'm using LINQ to SQL to search a fairly large database and am unsure of the best approach to perform paging with a DataPager. I am aware of the Skip() and Take() methods and have those working properly. However, I'm unable to use the count of the results for the datapager, as they will always be the page size as determined in the Take() method. For example: var result = (from c in db.Customers where c.FirstName == "JimBob" select c).Skip(0).Take(10); This query will always return 10 or fewer results, even if there are 1000 JimBobs. As a result, the DataPager will always think there's a single page, and users aren't able to navigate across the entire result set. I've seen one online article where the author just wrote another query to get the total count and called that. Something like: int resultCount = (from c in db.Customers where c.FirstName == "JimBob" select c).Count(); and used that value for the DataPager. But I'd really rather not have to copy and paste every query into a separate call where I want to page the results for obvious reasons. Is there an easier way to do this that can be reused across multiple queries? Thanks.

    Read the article

  • Speed up SQL Server Fulltext Index through Text Duplication of Non-Indexed Columns

    - by Alex
    1) I have the text fields FirstName, LastName, and City. They are fulltext indexed. 2) I also have the FK int fields AuthorId and EditorId, not fulltext indexed. A search on FirstName = 'abc' AND AuthorId = 1 will first search the entire fulltext index for 'abc', and then narrow the resultset for AuthorId = 1. This is bad because it is a huge waste of resources as the fulltext search will be performed on many records that won't be applicable. Unfortunately, to my knowledge, this can't be turned around (narrow by AuthorId first and then fulltext-search the subset that matches) because the FTS process is separate from SQL Server. Now my proposed solution that I seek feedback on: Does it make sense to create another computed column which will be included in the fulltext search which will identify the Author as text (e.g. AUTHORONE). That way I could get rid of the AuthorId restriction, and instead make it part of my fulltext search (a search for 'abc' would be 'abc' and 'AUTHORONE' - all executed as part of the fulltext search). Is this a good idea or not? Why?
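
    If you do go that route, one low-risk variant is an ordinary varchar column kept in sync with AuthorId (by the application or a trigger) and added to the full-text index; the table name, token format and column names below are illustrative only:

        ALTER TABLE dbo.Documents ADD AuthorToken varchar(20) NULL;

        UPDATE dbo.Documents
        SET AuthorToken = 'AUTHOR' + CAST(AuthorId AS varchar(10));

        ALTER FULLTEXT INDEX ON dbo.Documents ADD (AuthorToken);

        -- One full-text predicate instead of FTS plus a separate AuthorId filter:
        SELECT *
        FROM dbo.Documents
        WHERE CONTAINS((FirstName, AuthorToken), '"abc" AND "AUTHOR1"');

    It is worth verifying on your version that the AND terms in a column-list CONTAINS are allowed to match in different columns; if not, appending the token to a single indexed text column achieves the same effect.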

    Read the article

  • Remove undesired indexed keywords from Sql Server FTS Index

    - by Scott
    Could anyone tell me if SQL Server 2008 has a way to prevent keywords from being indexed that aren't really relevant to the types of searches that will be performed? For example, we have the IFilters for PDF and Word hooked in and our documents are being indexed properly as far as I can tell. These documents, however, have lots of numeric values in them that people won't really be searching for or bring back meaningful results. These are still being indexed and creating lots of entries in the full text catalog. Basically we are trying to optimize our search engine in any way we can and assumed all these unnecessary entries couldn't be helping performance. I want my catalog to consist of alphabetic keywords only. The current iFilters work better than I would be able to write in the time I have but it just has more than I need. This is an example of some of the terms from sys.dm_fts_index_keywords_by_document that I want out: $1,000, $100, $250, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 129, 13.1, 14, 14.12, 145, 15, 16.2, 16.4, 18, 18.1, 18.2, 18.3, 18.4, 18.5 These are some examples from the same management view that I think are desirable for keeping and searching on: above, accordingly, accounts, add, addition, additional, additive Any help would be greatly appreciated!
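
    SQL Server 2008 does let you attach a custom stoplist to a full-text index, which is the supported way to keep chosen terms out of the catalog; it suits a known, bounded set of words better than "every possible number". A small sketch (the stoplist name, table name and term values are assumptions):

        CREATE FULLTEXT STOPLIST DocStoplist FROM SYSTEM STOPLIST;   -- start from the default noise words

        ALTER FULLTEXT STOPLIST DocStoplist ADD '100' LANGUAGE 1033;
        ALTER FULLTEXT STOPLIST DocStoplist ADD '101' LANGUAGE 1033;
        -- ...one ADD per unwanted term...

        ALTER FULLTEXT INDEX ON dbo.Documents SET STOPLIST DocStoplist;

    Switching the stoplist triggers a repopulation of the index, so the unwanted entries disappear once the population finishes.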

    Read the article

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables: CREATE TABLE [dbo].[Tracks]( [Id] [uniqueidentifier] NOT NULL, [Artist_Id] [uniqueidentifier] NOT NULL, [Album_Id] [uniqueidentifier] NOT NULL, [Title] [nvarchar](255) NOT NULL, [Length] [int] NOT NULL, CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TrackHistory]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [Datetime] [datetime] NOT NULL, CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id] ,[Datetime]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,GETDATE()) CREATE TABLE [dbo].[Ratings]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [User_Id] [uniqueidentifier] NOT NULL, [Rating] [tinyint] NOT NULL, CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id] ,[User_Id] ,[Rating]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,"C7D62450-8BE6-40F6-80F1-A539DA301772" ,1) Users User_Id|Guid Other fields Links between the tables are pretty obvious. TrackHistory has each track added to it as a row whenever it is played ie. a track will appear in there many times. Ratings value will either be 1 or -1. What I'm trying to do is select the Track with the highest rating, that is more than 2 hours old, and if there is a duplicate rating for a track (ie a track receives 6 +1 ratings and 1 - rating, giving that track a total rating of 5, another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
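
    One way to put the whole requirement into a single query is to aggregate the ratings and the last play time per track in derived tables, then order by total rating and, for ties, by the oldest last play. A sketch (note that a tinyint Rating column cannot actually hold -1, so it would need to be smallint or int for the +1/-1 scheme; tracks with no ratings or no history are left out here, which LEFT JOINs and ISNULL would fix):

        SELECT TOP (1) t.Id, t.Title, r.TotalRating, h.LastPlayed
        FROM dbo.Tracks AS t
        JOIN (SELECT Track_Id, SUM(CAST(Rating AS int)) AS TotalRating
              FROM dbo.Ratings
              GROUP BY Track_Id) AS r ON r.Track_Id = t.Id
        JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
              FROM dbo.TrackHistory
              GROUP BY Track_Id) AS h ON h.Track_Id = t.Id
        WHERE h.LastPlayed < DATEADD(HOUR, -2, GETDATE())   -- only tracks not played in the last 2 hours qualify
        ORDER BY r.TotalRating DESC,                         -- highest rating first
                 h.LastPlayed ASC;                           -- tie-break: longest since last play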

    Read the article

  • How to insert an Array/Object into SQL (bestpractice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions. [---YOU CAN SKIP THIS PART IF YOU TRUST ME--] To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many because a team member can be working many weeks on the same project but have different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate. Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved on the project. [--END SKIP, START READING HERE :) --] So assuming that the application's general schema and relation tables aren't total crap and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data-type? A particular object/array format? CSV? JSON? XML? Most of the app is in C# but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (webservice + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using ajax?) Thoughts anyone?

    Read the article
