Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.

  • SQL Inner Join: DB stuck

    - by SurfingCat
    I posted this question a few days ago, but I didn't explain exactly what I wanted, so here it is again, better formulated and with some added information. I have a MySQL DB with MyISAM tables. The two relevant tables are:

        orders_products: orders_products_id, orders_id, products_id, products_name, products_price, products_model, final_price, ...
        products:        products_id, manufacturers_id, ...

    (Full table definitions were shown in two screenshots, omitted here.) Now, what I want is this: get all orders that contain products with manufacturers_id = 1, together with the name of the ordered product (the one with manufacturers_id = 1), grouped by order. What I have so far is this:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op, products p
        INNER JOIN products ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
        AND p.orders_id > 10000

    (The condition p.orders_id > 10000 is only there for testing, to fetch just a few order ids.) But this query takes a very long time to execute, if it finishes at all; twice it brought the SQL server to a standstill. Where is the mistake?
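
    For reference, a corrected version might look like the sketch below. Two things stand out in the original: the mixed comma/INNER JOIN syntax brings the products table in twice (the comma-joined p is never constrained against op), and orders_id lives in orders_products, not in products. The sketch also assumes MySQL's relaxed GROUP BY semantics are acceptable for the "grouped by orders" requirement:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op
        INNER JOIN products p ON op.products_id = p.products_id   -- join products exactly once
        WHERE p.manufacturers_id = 1
          AND op.orders_id > 10000                                -- this column belongs to orders_products
        GROUP BY op.orders_id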

  • Custom Detail in Linq-to-SQL Master-Detail DataGridViews

    - by Andres
    Hi, I'm looking for a way to create LINQ to SQL master-detail WinForms DataGridViews where the detail part of the LINQ query is a custom one. This works fine:

        DataClasses1DataContext db = new DataClasses1DataContext(".\\SQLExpress");
        var myQuery = from o in db.Orders select o;
        dataGridView1.DataSource = new BindingSource() { DataSource = myQuery };
        dataGridView2.DataSource = new BindingSource() { DataSource = dataGridView1.DataSource, DataMember = "OrderDetails" };

    but I'd like to have the detail part under my precise control, like:

        var myQuery = from o in db.Orders
                      join od in db.OrderDetails on o.ID equals od.OrderID into MyOwnSubQuery
                      select o;

    and use it for the second grid:

        dataGridView2.DataSource = new BindingSource() { DataSource = dataGridView1.DataSource, DataMember = "MyOwnSubQuery" }; // not working...

    The real reason I want this is a bit more complex (I'd like the detail part to be some not-pre-defined join, actually), but I hope the above conveys the idea. Can I only have the detail part as a plain sub-table coming out of the pre-defined relation, or can I do more complex things with it? Does anyone else feel this is kind of limited (if the first example is the best we can do)? Thanks!
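
    One workaround worth sketching (an assumption, not a confirmed fix; it reuses the Order/OrderDetails names from the question and assumes db stays in scope): skip DataMember binding for the detail grid and re-query it yourself whenever the master selection changes, which lets the detail side be any join or projection you like:

        // hypothetical: re-bind the detail grid on every master selection change
        dataGridView1.SelectionChanged += (s, e) =>
        {
            var master = ((BindingSource)dataGridView1.DataSource).Current as Order;
            if (master == null) return;
            dataGridView2.DataSource =
                (from od in db.OrderDetails
                 where od.OrderID == master.ID
                 select od).ToList();   // any custom join or shape works here
        };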

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase" and a set of subclasses, StrategyA/B/C etc. The subclasses use some of the properties of the base class and have some individual properties. My question is how to save this to a database. I'm currently using SqlCE and LINQ to SQL, creating the entity classes automatically with SqlMetal.exe. I've seen the three solutions shown in this question, but I'm not able to see how these solutions will or won't work with SqlMetal and the generated entity classes. It seems to me "concrete table inheritance" would probably work without any manual modification, but what about the other two; would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution, I can't really see at all how that maps into the entity classes in a useful way. I should note that I extend these partial entity classes to build my business objects. I may also consider moving to Entity Framework instead of SqlMetal/LINQ to SQL, so it would be nice to know whether that changes which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means modifying the program code, and probably the database schema, whenever a new strategy is added. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages and disadvantages here that you can advise on. Cheers!
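
    For what it's worth, LINQ to SQL does support single-table inheritance natively via a discriminator column; below is a minimal hand-mapped sketch (class, table and column names are invented for illustration; SqlMetal can generate the same mapping from a DBML file with inheritance configured). Note the single-table concern from the question: subclass-specific columns are simply nullable and stay NULL for rows of other types.

        [Table(Name = "Strategies")]
        [InheritanceMapping(Code = "A", Type = typeof(StrategyA), IsDefault = true)]
        [InheritanceMapping(Code = "B", Type = typeof(StrategyB))]
        public abstract class StrategyBase
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id;

            [Column(IsDiscriminator = true)]
            public string StrategyType;   // tells LINQ to SQL which subclass to materialize

            [Column]
            public string SharedProperty; // used by all strategies
        }

        public class StrategyA : StrategyBase
        {
            [Column(CanBeNull = true)]
            public int? OnlyUsedByA;      // NULL in rows belonging to other strategies
        }

        public class StrategyB : StrategyBase
        {
            [Column(CanBeNull = true)]
            public string OnlyUsedByB;
        }

    Adding a new strategy then means adding a subclass, one InheritanceMapping line, and nullable columns for its private fields.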

  • Checking server load with PHP and taking appropriate action

    - by teehoo
    Hi, I'm creating a project in which a server receives operations from clients and applies them to a local server document. The server and client share the same document, so each message the client sends contains an MD5 hash, which the server compares against a hash it generates itself to ensure the two documents are synchronized. My question is: if the server is overloaded, could I somehow detect this in PHP, which would in turn let me decide whether I want to execute the hash-generation function or not? Perhaps this isn't a perfect use case for the scenario described, but I'm interested in the approach in general.
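
    On Unix-like systems PHP can read the OS load average directly; a minimal sketch (the threshold and core count are arbitrary assumptions):

        <?php
        // sys_getloadavg() returns the 1-, 5- and 15-minute load averages (not available on Windows)
        $load = sys_getloadavg();
        $cores = 4; // hypothetical core count; could also be read from /proc/cpuinfo
        if ($load[0] / $cores < 0.75) {
            $hash = md5($document);  // server is idle enough: verify synchronization
        } else {
            // server is busy: skip or defer the hash comparison for this message
        }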

  • How to upload files and store them in a server local path when MS SQL SERVER allows remote connections

    - by user193655
    I am developing a Win32 application with Delphi and MS SQL Server. It works fine on a LAN, but I am trying to add support for SQL Server remote connections (i.e. working with a DB that can be accessed via an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip). Of course \\FILESERVER is a local (LAN) path for the server, but not for the client, now that I am adding support for remote connections. So I need a way to access \\FILESERVER even though I cannot see it on the LAN. I found the following T-SQL snippet that is perfect as a "download trick":

        SELECT BulkColumn as MyFile
        FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip', SINGLE_BLOB) AS X

    With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, and also to delete or replace existing ones. Can anyone suggest something? If no trick is available, could you suggest an alternative, like an extended stored procedure or calling a .NET assembly from the server?
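
    Going the .NET-assembly route, a minimal SQLCLR sketch (an assumption rather than a tested solution: the assembly must be cataloged with PERMISSION_SET = EXTERNAL_ACCESS, and the SQL Server service account needs write access to the share; the client passes the file contents as varbinary(max)):

        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class ServerFileIo
        {
            [SqlProcedure]
            public static void WriteServerFile(SqlString path, SqlBytes data)
            {
                // writes bytes sent from the client into a path local to the server
                System.IO.File.WriteAllBytes(path.Value, data.Value);
            }
        }

    The client would then upload with something like EXEC WriteServerFile '\\FILESERVER\MyApplicationDocuments\45.zip', @blob; deletes and replacements follow the same pattern.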

  • MS Dynamics CRM users disappear

    - by Max Kosyakov
    Recently we came across quite a weird issue. The administrators say that once in a while they notice that user accounts in MS Dynamics CRM get lost. When a new user is added to the system, the administrators add him/her to Active Directory first. Then they go to the Dynamics CRM interface (system configuration -> administration -> users), add the new user to the CRM, add roles to this user and grant the relevant permissions. The user is then able to use a custom application, which connects to Dynamics CRM via WCF. After a while (a few weeks or months) the user becomes unable to use the custom application because Dynamics CRM cannot authorize him/her. When the administrators open the Dynamics CRM user management interface (configuration -> administration -> users) and browse through the list of CRM users, they cannot find that user in the list. When they try to add the user back to Dynamics CRM, the CRM fails with the error message "User already exists". Moreover, the user still exists in Active Directory, and the admins are very sure the user had been added to the CRM before he/she started to work. The very fact that the user was able to use the custom application normally shows that the user had indeed been registered in the CRM. How come the user is not listed in the CRM user management interface at all? Has anyone faced an issue like this, or seen or heard of disappearing CRM users somewhere? Any help is appreciated. Where can one start digging?
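
    One place to start digging, offered as an assumption based on the CRM 4.0-era schema (table and column names may differ in your version): the users grid filters disabled accounts out by default, and a disabled account would also explain the "User already exists" error on re-adding. A direct check against the organization database:

        -- hypothetical starting point; adjust database and account names
        SELECT FullName, DomainName, IsDisabled
        FROM OrgName_MSCRM.dbo.SystemUserBase
        WHERE DomainName = 'MYDOMAIN\lostuser'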

  • Multiple queries using same DataContext throw SqlException

    - by Raj
    I have a search control with which I'm trying to implement search-as-the-user-types. I'm using LINQ to SQL to fire queries against the database. It usually works fine, but when the user types really fast, a seemingly random SqlException is thrown. These are two different error messages I stumbled across recently: "A severe error occurred on the current command. The results, if any, should be discarded." and "Invalid attempt to call Read when reader is closed." Edit: included the code. The DataContextFactory class:

        public DataContextFactory(IConnectionStringFactory connectionStringFactory)
        {
            this.dataContext = new RegionDataContext(connectionStringFactory.ConnectionString);
        }

        public DataContext Context
        {
            get { return this.dataContext; }
        }

        public void SaveAll()
        {
            this.dataContext.SubmitChanges();
        }

    Registering IDataContextFactory with Unity:

        // Get connection string from Application.Current settings
        ConnectionInfo connectionInfo = Application.Current.Properties["ConnectionInfo"] as ConnectionInfo;

        // Register ConnectionStringFactory with Unity container as a singleton
        this.container.RegisterType<IConnectionStringFactory, ConnectionStringFactory>(
            new ContainerControlledLifetimeManager(),
            new InjectionConstructor(connectionInfo.ConnectionString));

        // Register DataContextFactory with Unity container
        this.container.RegisterType<IDataContextFactory, DataContextFactory>();

    The connection string:

        Data Source=.\SQLEXPRESS2008;User Instance=true;Integrated Security=true;AttachDbFilename=C:\client.mdf;MultipleActiveResultSets=true;

    Using the DataContext from a repository class:

        // IDataContextFactory dependency is injected by Unity
        public CompanyRepository(IDataContextFactory dataContextFactory)
        {
            this.dataContextFactory = dataContextFactory;
        }

        // return a List<T> of companies
        var results = this.dataContextFactory.Context.GetTable<CompanyEntity>()
            .Join(this.dataContextFactory.Context.GetTable<RegionEntity>(),
                  c => c.regioncode, r => r.regioncode, (c, r) => new { c = c, r = r })
            .Where(t => t.c.summary_region != null)
            .Select(t => new { Id = t.c.compcode, Company = t.c.compname, Region = t.r.regionname })
            .ToList();

    What is the workaround?
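
    A DataContext is not thread-safe, so overlapping search queries running against the one context cached inside DataContextFactory is a plausible cause of both errors. A sketch of one way out (a hypothetical revision, assuming it is acceptable for each query to get its own short-lived context):

        // keep the connection string factory in a field and build a fresh context per access
        public DataContext Context
        {
            get { return new RegionDataContext(this.connectionStringFactory.ConnectionString); }
        }

    Alternatively, keep the factory as-is but resolve a new IDataContextFactory per operation instead of holding one for the repository's lifetime; either way, no two concurrent queries should share a context.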

  • How to split a very large database on SQL Server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010; each stock is a separate table. Some tables have hundreds of millions of rows, others just a few million. What I want to do is somehow split the database so that I don't have a single 90 GB database file. What I want is to be able to somehow magically split off the 2009 data in all the tables, so that I can back it up once and not have to keep including it every time I back up the entire database, while still having the 2009 data included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure it would solve all my problems: I wasn't able to find anything that would tell me whether it migrates pre-existing data or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
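
    For reference, a minimal sketch of that kind of split (illustrative names throughout; note that table partitioning requires Enterprise Edition in SQL Server 2005/2008, and existing rows only move onto the scheme once the table's clustered index is rebuilt on it, which answers the pre-existing-data question):

        -- one filegroup (plus data file) per year, so 2009 can later be marked read-only
        ALTER DATABASE StockData ADD FILEGROUP FG2009;
        ALTER DATABASE StockData ADD FILE (NAME = 'Stock2009', FILENAME = 'D:\Data\Stock2009.ndf') TO FILEGROUP FG2009;
        ALTER DATABASE StockData ADD FILEGROUP FG2010;
        ALTER DATABASE StockData ADD FILE (NAME = 'Stock2010', FILENAME = 'D:\Data\Stock2010.ndf') TO FILEGROUP FG2010;

        CREATE PARTITION FUNCTION pfTradeYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01');

        CREATE PARTITION SCHEME psTradeYear
            AS PARTITION pfTradeYear TO (FG2009, FG2010);

        -- rebuilding a stock table's clustered index on the scheme migrates its existing rows
        CREATE CLUSTERED INDEX cix_SomeStock ON dbo.SomeStock (TradeDate)
            ON psTradeYear (TradeDate);

    With FG2009 marked read-only, filegroup backups let the 2009 data be backed up once, while queries still see all years transparently.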

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread and thought I would reproduce it here with my solution. What if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (Snide comment about the original designer being braindead in the original :) ) This is a 15M-row table on a live database, SQL Server Standard, so dropping indexes is out of the question, and even temporarily dropping the foreign key constraints would be difficult. I was curious whether anybody had a solution that causes minimal downtime. I tested this in our testing environment and found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through every row to check constraint compliance. Then I just enable the CHECK constraints to turn constraint checking back on from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken to cluster a 15M-row, 800 MB index was about 4 minutes :)
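
    A skeleton of that script for a single FK, for reference (names are placeholders; the real script would generate these statements for all 15 constraints, e.g. from sys.foreign_keys):

        -- 1. drop each referencing FK
        ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

        -- 2. swap the PK from non-clustered to clustered
        ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;
        ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent PRIMARY KEY CLUSTERED (Id);

        -- 3. re-create each FK without re-validating 15M existing rows
        ALTER TABLE dbo.Child WITH NOCHECK
            ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId) REFERENCES dbo.Parent (Id);

        -- 4. enforce the constraint for new changes from this point onwards
        ALTER TABLE dbo.Child CHECK CONSTRAINT FK_Child_Parent;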

  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know that duplicating records shows the database structure is bad, but I did not design the database and am not in a position to redo it all; it's a Shopp e-commerce database running on WordPress.) Each product with a particular attribute needs a link to the same few images, so each such product needs one row per image in a table; the database doesn't actually contain the images, just their filenames (the images are clipart for a customer to select from). Based on the records matched by...

        SELECT * FROM `wp_shopp_spec` WHERE name='Can Be Personalised' AND content='Yes'

    ...I want to do something like this: for each record that matches that query, copy records 5134-5139 from wp_shopp_asset, but change the id so it's unique and set the cell in the 'parent' column to the value of 'product' from wp_shopp_spec. This means six new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids, and with every other column copied from the originals (i.e. records 5134-5139). Hope that's clear enough; any help greatly appreciated.
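
    A join between the two tables turns that "for each match, copy six rows" loop into a single INSERT ... SELECT; a sketch only, since the real wp_shopp_asset column list isn't shown here (the columns named below are invented placeholders, and id is assumed to be AUTO_INCREMENT so omitting it yields unique new ids):

        INSERT INTO wp_shopp_asset (parent, context, name, value)   -- hypothetical column list
        SELECT s.product, a.context, a.name, a.value
        FROM wp_shopp_spec s
        JOIN wp_shopp_asset a ON a.id BETWEEN 5134 AND 5139
        WHERE s.name = 'Can Be Personalised'
          AND s.content = 'Yes';

    Every matching spec row pairs with each of the six template rows, so six copies are inserted per product, each with parent set from the spec's product value.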

  • Clone LINQ to SQL object extension method throws ObjectDisposedException

    - by gtas
    Hello all, I have this extension method for cloning my LINQ to SQL objects:

        public static T CloneObjectGraph<T>(this T obj) where T : class
        {
            var serializer = new DataContractSerializer(typeof(T), null, int.MaxValue, false, true, null);
            using (var ms = new System.IO.MemoryStream())
            {
                serializer.WriteObject(ms, obj);
                ms.Position = 0;
                return (T)serializer.ReadObject(ms);
            }
        }

    But when I clone objects that don't have all their references loaded (querying with DataLoadOptions), it sometimes throws an ObjectDisposedException, even though I never touch the references that aren't loaded (null). E.g. I have a Customer with many references and I only need to carry the Address reference (EntityRef<>) in memory, so I don't load anything else. But when I clone the object, this exception forces me to load all the EntitySet<> references along with the Customer object, which can be far too much and slows the application down. Any suggestions?
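
    The serializer walks every public member, which can trigger deferred loads after the originating context has been disposed. One hedged workaround: turn deferred loading off on contexts whose objects will be cloned, so unloaded references serialize as null instead of lazy-loading (a sketch; the context and query are illustrative):

        using (var ctx = new MyDataContext(connectionString))   // hypothetical context
        {
            ctx.DeferredLoadingEnabled = false;                 // unloaded references stay null
            var options = new DataLoadOptions();
            options.LoadWith<Customer>(c => c.Address);         // eagerly load only what is needed
            ctx.LoadOptions = options;

            var customer = ctx.Customers.First();
            var clone = customer.CloneObjectGraph();            // no lazy loads fired during serialization
        }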

  • Distribute budget over ranked components in SQL

    - by Lee
    Assume I have a budget of $10 (any integer) and I want to distribute it over records that have a rank field and varying requirements. Example:

        rank  Req.  Fulfilled?
        1     $3    Y
        2     $4    Y
        3     $2    Y
        4     $3    N

    Ranks 1 to 3 should be fulfilled because their cumulative requirement ($9) is within the budget, whereas rank 4 (which would bring the total to $12) should not. I want an SQL query that determines this. Below is my initial script:

        CREATE TABLE budget (
            id VARCHAR (32),
            budget INTEGER,
            PRIMARY KEY (id));

        CREATE TABLE component (
            id VARCHAR (32),
            rank INTEGER,
            req INTEGER,
            satisfied BOOLEAN,
            PRIMARY KEY (id));

        INSERT INTO budget (id,budget) VALUES ('1',10);
        INSERT INTO component (id,rank,req) VALUES ('1',1,3);
        INSERT INTO component (id,rank,req) VALUES ('2',2,4);
        INSERT INTO component (id,rank,req) VALUES ('3',3,2);
        INSERT INTO component (id,rank,req) VALUES ('4',4,3);

    Thanks in advance for your help. Lee
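
    One way to express the running-total rule against the script above, as a sketch (the correlated subquery is O(n squared), which is harmless at this size):

        SELECT c.id, c.rank, c.req,
               CASE WHEN (SELECT SUM(c2.req)
                          FROM component c2
                          WHERE c2.rank <= c.rank)
                         <= (SELECT b.budget FROM budget b WHERE b.id = '1')
                    THEN 1 ELSE 0
               END AS satisfied
        FROM component c
        ORDER BY c.rank;

    The cumulative requirement up to each rank is compared against the budget; an UPDATE built around the same CASE expression would persist the flag into component.satisfied.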

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a model-first approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as the data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed among the available data sources (screenshot omitted). I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b to the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT
            AVG(D.AMOUNT) as AMOUNT,
            Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
            Y.YEAR as YEAR,
            MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
            CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
            (SELECT
                ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
                    (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
                ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
                    (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
                ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
                    (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
             FROM (
                SELECT AVG(D.AMOUNT) as AMOUNT,
                       Y.YEAR as YEAR,
                       MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
                FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
                WHERE $X{ IN, C.ID, CityCode }
                  AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
                  AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
                  AND Y.YEAR BETWEEN 1900 AND 2009
                  AND M.YEAR_REF_ID = Y.ID
                  AND M.CATEGORY_ID = $P{CategoryCode}
                  AND M.ID = D.MONTH_REF_ID
                  AND D.DAILY_FLAG_ID <> 'M'
                GROUP BY Y.YEAR
             ) t
            ) ymxb
        WHERE $X{ IN, C.ID, CityCode }
          AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
          AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
          AND Y.YEAR BETWEEN 1900 AND 2009
          AND M.YEAR_REF_ID = Y.ID
          AND M.CATEGORY_ID = $P{CategoryCode}
          AND M.ID = D.MONTH_REF_ID
          AND D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode }
        AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
        AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
        AND Y.YEAR BETWEEN 1900 AND 2009
        AND M.YEAR_REF_ID = Y.ID
        AND M.CATEGORY_ID = $P{CategoryCode}
        AND M.ID = D.MONTH_REF_ID
        AND D.DAILY_FLAG_ID <> 'M'

    Related: http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
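
    If the engine allows it, a common table expression removes the duplication outright. A sketch (MAKEDATE suggests MySQL, where CTEs need 8.0+; on 5.x the same effect requires materializing t into a regular staging table first, since a TEMPORARY table cannot be referenced twice in one query):

        WITH t AS (
            SELECT AVG(D.AMOUNT) AS AMOUNT, Y.YEAR AS YEAR, MAKEDATE(Y.YEAR, 1) AS AMOUNT_DATE
            FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
            WHERE ...                    -- the shared WHERE clause, written exactly once
            GROUP BY Y.YEAR
        )
        SELECT t.AMOUNT,
               t.YEAR * ymxb.SLOPE + ymxb.INTERCEPT AS REGRESSION_LINE,
               t.YEAR,
               t.AMOUNT_DATE
        FROM t
        CROSS JOIN (
            SELECT ((SUM(t.YEAR) * SUM(t.AMOUNT)) - (COUNT(1) * SUM(t.YEAR * t.AMOUNT))) /
                   (POWER(SUM(t.YEAR), 2) - COUNT(1) * SUM(POWER(t.YEAR, 2))) AS SLOPE,
                   ((SUM(t.YEAR) * SUM(t.YEAR * t.AMOUNT)) - (SUM(t.AMOUNT) * SUM(POWER(t.YEAR, 2)))) /
                   (POWER(SUM(t.YEAR), 2) - COUNT(1) * SUM(POWER(t.YEAR, 2))) AS INTERCEPT
            FROM t
        ) ymxb;

    The slope/intercept derived table and the outer select both read from the single CTE, so the data-gathering join runs once.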

  • Oracle SQL: ROLLUP not summing correctly

    - by tommy-o-dell
    Hi guys, ROLLUP seems to be counting the number of units correctly, but not the number of trains. Any idea what could be causing that? (The output screenshot is omitted here; in it, the highlighted trains column sums to 53 while the rollup total shows 51, whereas the units column adds up correctly.) And here's the Oracle SQL query:

        select t.year, t.week,
               decode(t.mine_id, NULL, 'PF', t.mine_id) as mine_id,
               decode(t.product, Null, 'LF', t.product) as product,
               decode(t.mine_id||'-'||t.product, '-', 'PF', t.mine_id||'-'||t.product) as code,
               count(distinct t.tpps_train_id) as trains,
               count(1) as units
        from (
            select trn.mine_code as mine_id,
                   trn.train_tpps_id as tpps_train_id,
                   round((con.calibrated_weight_total - con.empty_weight_total), 2) as tonnes
            from widsys.train trn
            INNER JOIN widsys.consist con USING (train_record_id)
            where trn.direction = 'N'
              and (con.calibrated_weight_total - con.empty_weight_total) > 10
              and trn.num_cars > 10
              and con.consist_no not like '_L%'
        ) w,
        (
            select to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') as year,
                   to_char(td.datetime_act_comp_dump - 7/24, 'IW') as week,
                   td.mine_code as mine_id,
                   td.train_id as tpps_train_id,
                   pt.product_type_code as product
            from tpps.train_details td
            inner join tpps.ore_products op using (ore_product_key)
            inner join tpps.product_types pt using (product_type_key)
            where to_char(td.datetime_act_comp_dump - 7/24, 'IYYY') = 2010
              and to_char(td.datetime_act_comp_dump - 7/24, 'IW') = 12
            order by td.datetime_act_comp_dump asc
        ) t
        where w.mine_id = t.mine_id
          and w.tpps_train_id = t.tpps_train_id
        having t.product is not null or t.mine_id is null
        group by t.year, t.week, rollup(t.mine_id, t.product)
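
    A possible explanation worth checking (an observation, not a verified diagnosis): COUNT(DISTINCT ...) subtotals are not additive. A train that dumped under two different mine/product combinations is counted once in each detail row but only once in the rollup row, so detail rows can sum to 53 while the grand total is 51 (two trains double-counted). A tiny self-contained illustration:

        -- train 'X' appears under two mines: each detail row counts it, the total counts it once
        SELECT mine_id, COUNT(DISTINCT train_id) AS trains
        FROM (SELECT 'A' AS mine_id, 'X' AS train_id FROM dual
              UNION ALL
              SELECT 'B', 'X' FROM dual)
        GROUP BY ROLLUP(mine_id);
        -- A: 1, B: 1, total: 1 (not 2)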

  • SQL Server connection identification

    - by andrew007
    Hi, I have several developers who connect to production and test servers hosting DBs with similar names and structures. SSMS shows information about the current connection, but sometimes it is not properly displayed and/or is hidden. I know that it is possible to customize the status bar of each connection in SSMS, but how do you ensure that a developer is connected to the right server before he runs a query? Is there any way to handle this? THANKS!
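
    One common guard, offered as a sketch (the expected server name is a placeholder): start risky scripts with an explicit server check so they abort anywhere but the intended instance:

        -- abort the batch unless we are on the expected server
        IF @@SERVERNAME <> 'PRODSERVER'   -- hypothetical expected name
        BEGIN
            RAISERROR('Wrong server - aborting.', 16, 1);
            RETURN;
        END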

  • SQL Server INSERT, Scope_Identity() and physical writing to disk

    - by TheBlueSky
    Hello everyone, I have a stored procedure that, among other things, does some inserts into different tables inside a loop. See the example below for a clearer picture:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        -- ... some stuff goes here
        INSERT INTO T2 VALUES (@MyID, 'something else')
        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) until there were more than 6250 records in each table, I noticed one incident where ID1 does not match ID2, although normally, for each record inserted in T1 there is a record inserted in T2, and the identity columns in both increment consistently. The "wrong" records looked like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about what SQL Server does underneath, I have put together the following "theory"; maybe someone can correct me on it. What I think is that because the loop runs faster than the physical writes to the table, and/or because something else delayed the writing process, the records were buffered, and when the time came to write them, they were written in no particular order. Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question: what if the first insert (in the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertions into the two tables happen in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.
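
    Two facts help frame this: identity values are handed out when the row is inserted, not when the page is physically flushed, so SCOPE_IDENTITY() always returns the current session's own insert; and two concurrent executions of the procedure interleaving between the T1 and T2 inserts would produce exactly the inversion shown. A sketch of serializing the pair across callers (an assumption that concurrent execution is the culprit; sp_getapplock holds the lock until commit):

        BEGIN TRANSACTION;
            -- serialize the T1+T2 pair across concurrent callers
            EXEC sp_getapplock @Resource = 'T1_T2_pair', @LockMode = 'Exclusive';

            INSERT INTO T1 VALUES ('something');
            SET @MyID = SCOPE_IDENTITY();   -- safe: scoped to this session and module
            INSERT INTO T2 VALUES (@MyID, 'something else');
        COMMIT;   -- transaction-owned app locks are released here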

  • ASP.net MVC Linq-To-SQL Many-To-Many Field Binding

    - by user336858
    Hi there, the short version of this question is: "Is there a way to gracefully handle database insertion for an object that has a many-to-many field set up in a partial class?" Apologies if it's been asked before. Example: suppose I have a typical MVC setup with the tables:

        Posts {PostID, ...}
        Categories {CategoryID, ...}

    A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table:

        PostCategories {PostID, CategoryID, ...}

    This handles the many-to-many relationship between posts and categories. As far as I know, there's no way to model this directly in LINQ to SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality, something like:

        public partial class Post
        {
            public IEnumerable<Category> Categories { get { ... } set { ... } }
        }

    So I can now create a "Create" view that automatically populates a Categories UI item. This is where the trouble starts. So here's my question: how do you get automatic model binding to work cleanly with an object that has a many-to-many relationship to control? The workaround that makes many-to-many relationships possible relies on the Post object having a PostID in order to be associated with CategoryID(s), and that ID is only issued after the Post object has been submitted for validation and insertion. A bit of a catch-22 here. Any terminology, links, or tips you can provide would be tremendously helpful!
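
    For the elided get/set bodies, one plausible shape (a sketch; it assumes SqlMetal generated a PostCategory entity with a PostCategories association set on Post and a Category reference on each join row):

        public partial class Post
        {
            public IEnumerable<Category> Categories
            {
                get { return this.PostCategories.Select(pc => pc.Category); }
                set
                {
                    // hypothetical: rebuild the join rows to match the assigned set
                    this.PostCategories.Clear();
                    foreach (var category in value)
                        this.PostCategories.Add(new PostCategory { Category = category });
                }
            }
        }

    With this shape, the catch-22 softens: LINQ to SQL resolves the PostID foreign key for the attached PostCategory rows at SubmitChanges time, so the Post and its join rows can be inserted in one unit of work after validation passes.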

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I have reached a state where I need a complex custom raw SQL query that, without a LIMIT, would return about 100K records. How can I use the Django Paginator with a custom query? A simplified example of my problem. My model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                # The real query is much more complex
                cursor = connection.cursor()
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()
            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)
        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1
        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager, all the data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave similarly to the built-in one?
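
    Paginator only needs its object_list to support len() and slicing, so a lazy wrapper that translates page slices into LIMIT/OFFSET keeps the 100K rows in the database. A sketch (the COUNT subquery trick assumes the backend accepts a derived table, as MySQL does; interpolating LIMIT/OFFSET is safe here only because the numbers come from the slice, never from user input):

        from django.db import connection

        class RawSqlList(object):
            """Lazy list: Paginator calls len() and slices one page at a time."""
            def __init__(self, sql, params=()):
                self.sql = sql
                self.params = params

            def __len__(self):
                cursor = connection.cursor()
                cursor.execute("SELECT COUNT(*) FROM (%s) AS sub" % self.sql, self.params)
                return cursor.fetchone()[0]

            def __getitem__(self, key):
                # Paginator asks for e.g. object_list[10:20]
                if not isinstance(key, slice):
                    raise TypeError("only slice access is supported in this sketch")
                cursor = connection.cursor()
                cursor.execute("%s LIMIT %d OFFSET %d"
                               % (self.sql, key.stop - key.start, key.start), self.params)
                return cursor.fetchall()

        # hypothetical usage with the query from the manager above
        paginator = Paginator(RawSqlList("SELECT * FROM `myapp_person`"), 10)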

  • MSSQL 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in MSSQL 2005? Right now I have SERVER\OLDNAME with oldnameDB, and I want to change the server instance and also change the DB name. I have tried:

        EXEC sp_renamedb 'oldName', 'newName'

    and that changed the DB name as it appears in the object tree. But when I do SELECT @@SERVERNAME, it still returns the old name, and the MDF and LDF files still have the old name as well. How do I change the instance and DB names as a clean sweep across the server? Thanks.
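
    A sketch of the remaining pieces (logical names and paths below are hypothetical). Two caveats: sp_dropserver/sp_addserver only updates what @@SERVERNAME reports and takes effect after a service restart, and the instance name proper (the \OLDNAME part of the connection address) cannot be renamed in place; that requires reinstalling the instance. Physical files are renamed on disk while the database is offline (or via detach/attach):

        -- update what @@SERVERNAME reports
        EXEC sp_dropserver 'SERVER\OLDNAME';
        EXEC sp_addserver 'SERVER\NEWNAME', 'local';
        -- restart the SQL Server service, then verify with SELECT @@SERVERNAME

        -- point the renamed DB at renamed physical files
        ALTER DATABASE newName MODIFY FILE (NAME = 'oldName',     FILENAME = 'C:\Data\newName.mdf');
        ALTER DATABASE newName MODIFY FILE (NAME = 'oldName_log', FILENAME = 'C:\Data\newName_log.ldf');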

  • Delaying LINQ to SQL Select Query Execution

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL. In my search method, which has some required and some optional parameters, I want to build a LINQ query while testing for the presence of the optional parameters. Here's what I'm currently thinking:

        using (var db = new DBDataContext())
        {
            IQueryable<Listing> query = null;

            // Handle required parameter
            query = db.Listings.Where(l => l.Lat >= form.bounds.extent1.latitude
                                        && l.Lat <= form.bounds.extent2.latitude);

            // Handle optional parameter
            if (numStars != null)
                query = query.Where(l => l.Stars == (int)numStars);

            // Other parameters...

            // Execute query (does this happen here?)
            var result = query.ToList();

            // Process query...

    Will this implementation "bundle" the Where clauses and then execute the bundled query? If not, how should I implement this feature? Also, is there anything else that I can improve? Thanks in advance.
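
    For reference, that is exactly how IQueryable composition behaves: each Where call only adds to the expression tree, and nothing is sent to the database until the query is enumerated. A small illustration (not from the original post):

        var query = db.Listings.Where(l => l.Lat >= minLat && l.Lat <= maxLat);
        if (numStars != null)
            query = query.Where(l => l.Stars == (int)numStars); // still no SQL issued
        var result = query.ToList(); // one SELECT with all predicates combined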

  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from Web SQL isn't quite an array, more of an object? I'm cycling through a result set, and to speed things up I'd like to remove a row once it's been found. I've tried "delete" and "splice"; the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do; notice the delete call in processSelectFromReverse:

        function selectFromReverse(reverseRay, suggRay) {
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN (' + reverseString + ') AND votecount>0', [],
                    function (tx, results) {
                        processSelectFromReverse(results, suggRay);
                    });
            }, function () { onError });
        }

        function processSelectFromReverse(results, suggRay) {
            var i = suggRay.length;
            while (i--) {
                var j = results.rows.length;
                var found = 0;
                while (j--) {
                    console.log('searching');
                    if (suggRay[i].reverse == results.rows.item(j).comboid) {
                        delete results.rows.item(j);
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if (found == 0) {
                    console.log('lost');
                }
            }
        }
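
    The rows collection of a Web SQL result set is a read-only SQLResultSetRowList, which is why delete silently fails and splice throws. A hedged workaround: copy the rows into a real array once, then splice on the copy:

        function processSelectFromReverse(results, suggRay) {
            // copy the read-only row list into a mutable array first
            var rows = [];
            for (var k = 0; k < results.rows.length; k++) {
                rows.push(results.rows.item(k));
            }
            var i = suggRay.length;
            while (i--) {
                var j = rows.length;
                var found = 0;
                while (j--) {
                    if (suggRay[i].reverse == rows[j].comboid) {
                        rows.splice(j, 1); // removal works on the copy
                        found++;
                        break;
                    }
                }
                if (found == 0) console.log('lost');
            }
        }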

  • What are the advantages of a query using a derived table(s) over a query not using them?

    - by AspOnMyNet
    I know how derived tables are used, but I still can't really see any real advantages of using them. For example, in the following article, http://techahead.wordpress.com/2007/10/01/sql-derived-tables/, the author tries to show the benefits of a query using a derived table over a query without one, with an example where we want to generate a report showing the total number of orders each customer placed in 1996, and we want this result set to include all customers, including those that didn't place any orders that year and those that have never placed any orders at all (he's using the Northwind database). But when I compare the two queries, I fail to see any advantage of the one using a derived table (if nothing else, the derived table doesn't appear to simplify the code, at least not in this example). The regular query:

        SELECT C.CustomerID, C.CompanyName, COUNT(O.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN Orders O
            ON C.CustomerID = O.CustomerID AND YEAR(O.OrderDate) = 1996
        GROUP BY C.CustomerID, C.CompanyName

    The query using a derived table:

        SELECT C.CustomerID, C.CompanyName, COUNT(dOrders.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN (SELECT * FROM Orders WHERE YEAR(Orders.OrderDate) = 1996) AS dOrders
            ON C.CustomerID = dOrders.CustomerID
        GROUP BY C.CustomerID, C.CompanyName

    Perhaps this just wasn't a good example, so could you show me an example where the benefits of a derived table are more obvious? Thanks!
