Search Results

Search found 31588 results on 1264 pages for 'linq to sql designer'.


  • In SQL how do I get the maximum value for an integer?

    - by CoffeeMonster
    Hi, I am trying to find out the maximum value for an integer (signed or unsigned) from a MySQL database. Is there a way to pull back this information from the database itself? Are there any built-in constants or functions I can use (either standard SQL or MySQL specific)? The page at http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html lists the values, but is there a way for the database to tell me? The following gives me the maximum BIGINT; what I'd like is the maximum INT. SELECT CAST( 99999999999999999999999 AS SIGNED ) as max_int; # max_int | 9223372036854775807 Thanks in advance.
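    Since MySQL exposes no built-in MAX_INT constant as far as I know, one workaround sketch is to compute the bounds with bit arithmetic, which MySQL evaluates in 64-bit and which therefore does not overflow at the 32-bit INT limits:

        SELECT (1 << 31) - 1 AS max_signed_int,    -- 2147483647
               (1 << 32) - 1 AS max_unsigned_int;  -- 4294967295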

    Read the article

  • SQL Server. Stored procedure to get the biweekly periods

    - by Yada
    I'm currently trying to write a stored procedure that computes the biweekly period when a date is passed in as a parameter. The business logic: the first Monday of the year is the first day of the first biweekly period. For example, in 2010:

    period  period_start  period_end
    1       2010-01-04    2010-01-17
    2       2010-01-18    2010-01-31
    3       2010-02-01    2010-02-14
    ....
    26      2010-12-20    2011-01-02

    Passing today's date of 2010-12-31 will return 26, 2010-12-20 and 2011-01-02. I'm not too strong in T-SQL. Any help is appreciated. Thanks
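    A minimal T-SQL sketch of one possible approach (procedure and variable names are made up here): anchor on the year's first Monday, then divide the day offset by 14. Day zero (1900-01-01) is a Monday, which makes the weekday arithmetic independent of SET DATEFIRST.

        CREATE PROCEDURE dbo.GetBiweeklyPeriod @d datetime AS
        BEGIN
            DECLARE @jan1 datetime, @firstMonday datetime, @period int
            SET @jan1 = DATEADD(year, DATEDIFF(year, 0, @d), 0)
            SET @firstMonday = DATEADD(day, (7 - DATEDIFF(day, 0, @jan1) % 7) % 7, @jan1)
            -- Dates before the first Monday belong to the previous year's last period
            IF @d < @firstMonday
            BEGIN
                SET @jan1 = DATEADD(year, -1, @jan1)
                SET @firstMonday = DATEADD(day, (7 - DATEDIFF(day, 0, @jan1) % 7) % 7, @jan1)
            END
            SET @period = DATEDIFF(day, @firstMonday, @d) / 14 + 1
            SELECT @period AS period,
                   DATEADD(day, (@period - 1) * 14, @firstMonday) AS period_start,
                   DATEADD(day, @period * 14 - 1, @firstMonday)   AS period_end
        END

    Passing 2010-12-31 through this logic yields period 26 with bounds 2010-12-20 and 2011-01-02, matching the example above.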

    Read the article

  • In SQL Server 2000, how to delete the specified rows in a table that does not have a primary key?

    - by Yousui
    Hi, Let's say we have a table with some data in it:

        IF OBJECT_ID('dbo.table1') IS NOT NULL
        BEGIN
            DROP TABLE dbo.table1;
        END

        CREATE TABLE table1 ( DATA INT );

        -- Generating testing data
        INSERT INTO dbo.table1(data)
        SELECT 100 UNION ALL
        SELECT 200 UNION ALL
        SELECT NULL UNION ALL
        SELECT 400 UNION ALL
        SELECT 400 UNION ALL
        SELECT 500 UNION ALL
        SELECT NULL;

    How do I delete the 2nd, 5th and 6th records in the table? The order is defined by the following query:

        SELECT data FROM dbo.table1 ORDER BY data DESC;

    Note, this is in a SQL Server 2000 environment. Thanks.
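    One hedged sketch that works on SQL Server 2000 (which lacks ROW_NUMBER): number the rows into a temp table via IDENTITY, then rebuild the original without the unwanted positions. INSERT ... SELECT ... ORDER BY assigns IDENTITY values in the ORDER BY order; note, though, that with duplicate and NULL values the ordering among ties is arbitrary, so "2nd, 5th, 6th" is only well defined up to that tie-breaking.

        CREATE TABLE #numbered (rn INT IDENTITY(1, 1), data INT)

        -- IDENTITY values follow the ORDER BY for INSERT ... SELECT
        INSERT INTO #numbered (data)
        SELECT data FROM dbo.table1 ORDER BY data DESC

        DELETE FROM dbo.table1

        INSERT INTO dbo.table1 (data)
        SELECT data FROM #numbered WHERE rn NOT IN (2, 5, 6)

        DROP TABLE #numbered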

    Read the article

  • SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:

        id (PK, int, not null)
        member_id (int, not null)
        another_id (int, not null)
        date (datetime, not null)

    SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express
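    A couple of hedged first steps: a COUNT over a table with no narrow index has to scan the clustered index, so a narrow nonclustered index (the name below is made up) gives the optimizer far fewer pages to read, and indexing the join column helps the join as well. For a dashboard-style row count, the partition metadata is nearly instant, though only approximate under concurrent activity.

        -- Narrow index: COUNT(*) can scan this instead of the full rows
        CREATE NONCLUSTERED INDEX IX_tbl_another_id ON tbl (another_id);

        SELECT COUNT(*) FROM tbl;  -- lets the optimizer pick the narrowest index

        -- Approximate count from metadata (SQL Server 2005+):
        SELECT SUM(row_count)
        FROM sys.dm_db_partition_stats
        WHERE object_id = OBJECT_ID('tbl') AND index_id IN (0, 1);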

    Read the article

  • SQL Server 2005 Reporting Services: How to count rows that are not null? Any hints for calculating totals?

    - by user329266
    Is there a way to count only records that are not null, similar to "COUNTA" in Excel? I would think this would be a very simple process, but nothing I have tried has worked. If necessary, I can try to work this into my SQL query, but the query is already incredibly complicated. Also, I've found very little documentation on how to calculate report totals and how to total from groups. Would anyone have any recommendations on what to use as a reference?
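    If it can be pushed into the query, plain SQL already has COUNTA-like behavior, since COUNT(column) skips NULLs while COUNT(*) counts every row (table and column names below are placeholders). On the report side, an expression along the lines of =Sum(IIf(IsNothing(Fields!MyField.Value), 0, 1)) should behave the same way for group and report totals.

        SELECT COUNT(*)       AS all_rows,       -- every row
               COUNT(MyField) AS non_null_rows   -- NULLs skipped, like COUNTA
        FROM dbo.MyTable;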

    Read the article

  • Error in SQL. Cannot find it.

    - by kmsboy
    There is an error somewhere in this SQL, but I cannot find it:

        DECLARE @year VARCHAR (4), @month VARCHAR (2), @day VARCHAR (2), @weekday VARCHAR (2), @hour VARCHAR (2),
                @archivePath VARCHAR (128), @archiveName VARCHAR (128), @archiveFullName VARCHAR (128)

        SET @year = CAST(DATEPART(yyyy, GETDATE()) AS VARCHAR)
        SET @month = CAST(DATEPART(mm, GETDATE()) AS VARCHAR)
        SET @day = CAST(DATEPART(dd, GETDATE()) AS VARCHAR)
        SET @weekday = CAST(DATEPART(dw, GETDATE()) AS VARCHAR)
        SET @hour = CAST(DATEPART(hh, GETDATE()) AS VARCHAR)

        SET @archivePath = 'd:\1c_new\backupdb\'
        SET @archiveName = 'TransactionLog_' + @year + '_' + @month + '_' + @day + '_' + @hour + '.bak'
        SET @archiveFullName = @archivePath + @archiveName

        BACKUP LOG [xxx] TO DISK = @archiveFullName
        WITH INIT, NOUNLOAD, NAME = N'?????????? ??? ??????????', SKIP, STATS = 10,
             DESCRIPTION = N'?????????? ??? ??????????', NOFORMAT
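    Incidentally, whether or not this is the error being hunted: DATEPART returns unpadded numbers (1 rather than 01), so these backup names will not sort chronologically. A hedged fix is to pad the parts:

        SET @month = RIGHT('0' + CAST(DATEPART(mm, GETDATE()) AS VARCHAR), 2)
        SET @day   = RIGHT('0' + CAST(DATEPART(dd, GETDATE()) AS VARCHAR), 2)
        SET @hour  = RIGHT('0' + CAST(DATEPART(hh, GETDATE()) AS VARCHAR), 2)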

    Read the article

  • Stringly typed values table in SQL, is there a better way to do this? (We're using MSSQL.)

    - by Jason Hernandez
    We have a table layout with property names in one table, values in a second table, and items in a third. (Yes, we're re-implementing tables in SQL.) We join all three to get the value of a property for a specific item. Unfortunately the values can have multiple data types: double, varchar, bit, etc. Currently the consensus is to stringly type all the values and store the type name in a column next to the value, e.g. a DataTypeName nvarchar column in tblValues. Is there a better, cleaner way to do this?
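    One alternative worth weighing, sketched here: SQL Server's sql_variant type stores each value with its original base type, so nothing has to round-trip through strings, and SQL_VARIANT_PROPERTY recovers the type on the way out (table and column names are illustrative).

        CREATE TABLE tblValues (
            ItemID     INT NOT NULL,
            PropertyID INT NOT NULL,
            Value      SQL_VARIANT NULL,
            PRIMARY KEY (ItemID, PropertyID)
        );

        SELECT Value,
               SQL_VARIANT_PROPERTY(Value, 'BaseType') AS DataTypeName
        FROM tblValues;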

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – when not to use a stored procedure: the most common performance mistake SQL Server developers make

    - by sqlworkshops
    SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan-caching mechanism such as sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common cause of spills to tempdb and hence of poor performance. Common memory-allocating queries are those that perform a Sort or a Hash Match operation such as Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples; it is recommended to read Plan Caching and Query Memory Part I first, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement for a query does not change significantly based on the predicates.

    This article covers underestimation and overestimation of memory for the Hash Match operation; Part I covers underestimation and overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts.

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe the Hash Warning event, enable 'Hash Warning' in SQL Profiler under Events > 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which selects 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB of memory based on an estimate of 8000 rows. The estimated number of rows, 8000, matches the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which selects 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was again granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is far from the actual number of rows, 792000, because the estimate is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) event in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. It was granted more memory (146752 KB) than necessary (6704 KB), because the estimate of 792000 rows was based on the first parameter value supplied, 'NY' in this case, rather than the 8000 rows implied by 'WA'. This overestimation hurts the performance of this Hash Match operation, and it can also affect the performance of other concurrently executing queries that need memory, so overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate as before: the estimated number of rows, 8000, matches the actual number of rows, 8000. The execution with parameter value 'NY' also has a good estimate and memory grant as before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, matches the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms are not expensive compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure first with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously; this is because of the overestimation of memory. The execution with parameter value 'NY' has the optimal execution time as before. The execution with 'WA' overestimates the rows because of the optimize for hint value of 'NY', so unlike before, more memory than necessary was granted based on the hint. The execution with 'NY' has a good estimate because of the hint: the estimated number of rows, 792000, matches the actual number of rows, 792000, and an optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan can lead to underestimation or overestimation of memory, because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, at the cost of compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is the optimize for hint, but if a query processes more data than the hint implies, it will still spill; on the other side, overestimation can cause unnecessary memory pressure for other concurrently executing queries, and in the case of Hash Match operations the overestimation itself can lead to poor performance. If the values used in the optimize for hint are later archived out of the database, the estimation will be wrong and performance will suffer, so exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
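    A side note on the DBCC FREEPROCCACHE (plan_handle) alternative mentioned above: a minimal sketch of evicting just this procedure's plan might look like the following, assuming SQL Server 2008 or later (sys.dm_exec_procedure_stats does not exist on 2005).

        -- Evict only CustomersByState's cached plan instead of running
        -- sp_recompile against the object (which takes a schema lock).
        DECLARE @plan_handle varbinary(64)
        SELECT @plan_handle = plan_handle
        FROM sys.dm_exec_procedure_stats
        WHERE database_id = DB_ID()
          AND object_id = OBJECT_ID('dbo.CustomersByState')
        IF @plan_handle IS NOT NULL
            DBCC FREEPROCCACHE (@plan_handle)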

    Read the article

  • How can I rewrite a flat link to a dynamic link and preserve the query string?

    - by jeremysawesome
    Hello all, I want to rewrite a URL like:
    http://my.project/mydomain.com/ANY_NUMBER_OF_CATEGORIES/designer/4/designer-name/page.html
    to this:
    http://my.projects/mydomain.com/ANY_NUMBER_OF_CATEGORIES/page.html?designer=4
    I would like to use mod_rewrite to accomplish this. Things to note:
    Any number of categories can appear between 'mydomain.com/' and '/designer'. For instance the URL could be http://my.project/mydomain.com/designer/4/designer-name/page.html or it could be http://my.project/mydomain.com/tops/shirts/small/designer/4/designer-name/page.html
    A query string may be provided in the original URL that needs to be preserved in the rewritten URL. For example, given http://my.project/mydomain.com/designer/4/designer-name/page.html?color=red&type=shirt the resulting URL would need to be http://my.projects/mydomain.com/page.html?designer=4&color=red&type=shirt
    The order of the query string does not matter; the 'designer=4' part could come before or after the rest of the query string.
    I'm new to .htaccess and rewrites, so any examples and/or explanations would be greatly appreciated. Thank you very much.
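    An untested .htaccess sketch (the exact pattern depends on where the .htaccess file lives and on RewriteBase): capture the leading categories and the designer id, drop the designer segments, and let the QSA flag merge designer=<id> with any query string already present.

        RewriteEngine On
        # (categories...)/designer/(id)/(name)/(page).html -> (categories...)/(page).html?designer=(id)
        RewriteRule ^(.+)/designer/([0-9]+)/[^/]+/([^/]+\.html)$ /$1/$3?designer=$2 [QSA,L]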

    Read the article

  • Silverlight: After adding table no datacontext classes are available for the new DomainService class

    - by Gabriel
    I'm using a Silverlight 4 WCF RIA Services demo application that uses LinqToSql, and that works well. I add a new database table, move the new table onto the LinqToSql designer, and build the project. When I then add a new DomainService class, I get a dialog whose only option is to create an empty DomainService; no DataContext classes are available. What am I missing? Thanks in advance. Gabriel

    Read the article

  • No Business Data Connectivity Service associated with current web context error

    - by Rob
    I am running on a new dev setup for SharePoint 2010 and trying to set up some External Content Types. I think that I have set up BCS correctly (since I see it running in central administration). When I go into SharePoint Designer 2010 and try to set up a new External Content Type, I get the following error: "There is no Business Connectivity Service associated with the current web context." Am I missing something in the configuration, or why am I not able to set up a new External Content Type pointing to my existing SQL database?

    Read the article

  • Just curious, in NetBeans IDE 6.1 how can I get icons for my custom GUI components in the GUI editor?

    - by Coder
    This is by no means really important; I was just wondering if the community knows how to put icons on my custom GUI components so they show up in the NetBeans GUI designer. What I did was make several Swing components and then use the menu options to add them to the GUI palette, but they show up with "?" icons. It would be nice if they showed up with icons similar to Swing components such as JButtons, especially for components which are subclassed from Swing components. Thanks!

    Read the article

  • How to configure a filter in a PPS dashboard

    - by kumar
    Hi All, I created a tabular value filter and connected it to a report in Dashboard Designer, but when I preview the report I get: 'This report requires a default or user-defined value for the report parameter 'swcustomer''. How do I resolve this issue? Thank you for your help. Kumar

    Read the article

  • Default value support for Entity Framework object construction, to avoid having to set this column parameter

    - by Greg
    Hi, In Entity Framework, is there a way to have a default value for a column such that LINQ to Entities won't require this parameter when constructing a new object? For example, I've marked one column in the EF designer with a default value (I typed in "All", as it was a string). But if I try to construct a new record and don't specify this parameter, I still get a FOREIGN KEY constraint exception: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_FunctionalityTypeInterfaceRelationship".

    Read the article

  • Mailto link in SharePoint Designer

    - by raklos
    In SharePoint Designer I have a control: <SharePointWebControls:TextField FieldName="Title" runat="server"></SharePointWebControls:TextField> I want to have a mailto link that uses this Title as the subject of the email, and I need the body of the email to contain some text, e.g. "My Example Text". How do I do this? Thanks.
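    For the static part, a mailto URL carries the subject and body as URL-encoded query parameters, along these lines (the values here are placeholders; feeding the live Title field value into the href would still need a data-binding expression rather than this literal markup):

        <a href="mailto:?subject=My%20Title&body=My%20Example%20Text">Send mail</a>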

    Read the article

  • VS2008: loading designer mode takes forever

    - by Younes
    Is there anything that can be done about the designer mode for an ascx file taking forever to show? I'm currently waiting for about a minute and still nothing has happened. This is not the first time I have seen this problem, and the ascx file doesn't even have any components to show yet... The web user control is essentially empty, and still it won't load. What can be the problem here?

    Read the article

  • Tools for creating a UI prototype

    - by Golovko
    Hello. I need to create a prototype of a UI. I've been googling and found Axure RP, but it is very expensive for our company. Another approach is a tool like Qt Designer, but it doesn't provide some useful functions and sometimes demands a bit of programming skill. Do you know of a freeware tool for this task, suitable for someone without any programming skills? Thanks.

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table to help us build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions: schema changes to the Analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] on [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort": if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. But once it stops logging, it doesn't retry later, whether an hour, a day, a week, or even a month later, so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server in the middle of business hours, with active users connected to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely. I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties, and the query logging started up right away.

    If I need to get this running again in the future, I can just make a small change to the Application Name property, save it, and even change it back again if I want to. The other nice side effect of setting the Application Name property is that I can now see (and filter for, or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

    To sum up:
    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
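    For reference, Application Name is an ordinary connection string keyword, so the QueryLogConnectionString only needs something along these lines (server and database names are placeholders):

        Data Source=MyServer;Initial Catalog=SSASQueryLog;Integrated Security=SSPI;Application Name=SSAS Query Logging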

    Read the article

  • Does LearnDevNow offer a useful subscription for .NET-related training, with video and text reference material?

    - by user766926
    Does LearnDevNow offer a useful subscription for .NET video and text reference training material, or to learn .NET? I could not find a review of www.learnnowonline.com/learndevnow. I've been researching places that offer .NET / ASP.NET / C# / LINQ / SQL / jQuery video training with excellent material for developers, kind of like a Lynda.com for back-end development. However, I have not found one place that offers quality material that is easily accessible to the user and priced competitively. This is what I'm looking for:
    - Online video tutorials with free future updates (videos that can be accessed on any device, e.g. Android portable devices, Mac / iPad, Linux machines)
    - Printable courseware (e.g. PDFs, so you can take notes, print if necessary, and read on a tablet when you don't have internet access)
    - Labs and easy-to-access code
    - Pre/post assessments/exams for each course, to keep track of your progress and what you have learned. (AppDev used to offer this but it was way too expensive, thousands for each course, e.g. 1k for each; I paid about five thousand at AppDev, and now I regret that purchase. After a few years/months the training became obsolete and outdated.)
    I looked at learnnowonline.com/learndevnow but it seems that their courseware / text reference material can only be accessed online, and I don't know if their videos work on all mobile devices and in all browsers (Safari, Chrome, IE, Opera, Firefox). Also, it seems AppDev no longer exists and is now called LearnNowPlus: www.learnnowonline.com/appdev. That site no longer lists prices. I tried to find reviews but could not find any. Does anyone know, or can anyone share a review of, some of these types of online training providers? I would appreciate your feedback, and if you can share your past experiences with their service or with similar or better services, that would be even better. Thanks.

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) that I am co-authoring, I am digging deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology.

    A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users on all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three database calls to run sequentially. Then imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:
    - Parallel processing: querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    - Deleting/updating data: that's not too bad either, as long as you have a breadcrumb. However it becomes more difficult if you need to update a single record and you don't know which database it is in.
    - Inserting data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    - Shard databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    - Extension methods: in order to make it look like the shard commands are part of the SqlClient class, I use extension methods. For example I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
    - Exceptions: capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using the ConcurrentQueue to store my exceptions.
    - Database GUID: every database in the shard is given a GUID, which is calculated based on the connection string's values.
    - DataTable: the shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on Twitter to know when it will be available (@hroggero), or check www.bluesyntax.net for updates on the shard. Thanks!

    Read the article
