Search Results

Search found 36632 results on 1466 pages for 'sql tool'.

  • How to perform Linq select new with datetime in SQL 2008

    - by kd7iwp
    In our C# code I recently changed a line from inside a linq-to-sql select new query as follows: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.Year.ToString() + "-" + p.OrderDate.Value.Month.ToString() + "-" + p.OrderDate.Value.Day.ToString() : "") To: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") The change makes the line smaller and cleaner. It also works fine with our SQL 2008 database in our development environment. However, when the code was deployed to our production environment, which uses SQL 2005, I received an exception stating: Nullable Type must have a value. For further analysis I copied (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") into a string (outside of a Linq statement) and had no problems at all, so it only causes an issue inside my Linq. Is this problem just something to do with SQL 2005 using different date formats than SQL 2008? Here's more of the Linq: dt = FilteredOrders.Where(x => x != null).Select(p => new { Order = p.OrderId, link = "/order/" + p.OrderId.ToString(), StudentId = (p.PersonId.HasValue ? p.PersonId.Value : 0), FirstName = p.IdentifierAccount.Person.FirstName, LastName = p.IdentifierAccount.Person.LastName, DeliverBy = p.DeliverBy, OrderDate = p.OrderDate.HasValue ? p.OrderDate.Value.Date.ToString("yyyy-mm-dd") : ""}).ToDataTable(); This is selecting from a List of Order objects. The FilteredOrders list is from another linq-to-sql query and I call .AsEnumerable on it before giving it to this particular select new query. Doing this in regular code works fine: if (o.OrderDate.HasValue) tempString += " " + o.OrderDate.Value.Date.ToString("yyyy-mm-dd");

    Read the article

  • Using memtables in SQL. When is it reasonable and is it safe?

    - by Spiros
    I was just reading an update from a friend's project, mentioning the use of memtables to store data temporarily and then flush to a table on disk. Up to now, I have never faced a situation where I would use a memtable, or a situation where I would think the use of a memtable would be beneficial; so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.

    Read the article

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings. Previously I created the table using column type VARCHAR (255). Now if I try to create a table using, e.g. VARCHAR (512) then it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle). Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be: 1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC. 2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write. Are there any other viable alternatives to these two options? Or is it possible to have an SQL compliant CREATE TABLE command supported by the majority of database vendors, that supports strings longer than 255 chars?
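
    There is no single ANSI type name for long text that all of these backends accept, so the usual compromise is to branch the CREATE TABLE on the backend (or ask the driver which type it exposes). A rough sketch of the common per-vendor spellings; the table and column names are only illustrative, and each type should be verified against the target system:

      CREATE TABLE Notes (ID INTEGER NOT NULL, Body VARCHAR(MAX));   -- SQL Server 2005 and later
      CREATE TABLE Notes (ID NUMBER  NOT NULL, Body CLOB);           -- Oracle
      CREATE TABLE Notes (ID INTEGER NOT NULL, Body LONGTEXT);       -- Access/JET (DDL synonym for MEMO)
      CREATE TABLE Notes (ID INTEGER NOT NULL, Body TEXT);           -- MySQL, PostgreSQL, Sybase and others

    At the ODBC level, SQLGetTypeInfo can report which long-character type each driver supports, which keeps the branching data-driven instead of hard-coded.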

    Read the article

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible. To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table, and then randomly pairing FirstNames with LastNames, Address, City, State, etc. So if this is my real Customer data (simplified):

      FirstName LastName State
      ==========================
      Sally     Simpson  SD
      Will      Warren   WI
      Mike      Malone   MN
      Kelly     Kline    KS

    Then I'd generate several records that look like this:

      FirstName LastName State
      ==========================
      Sally     Warren   MN
      Kelly     Malone   SD
      Etc.

    My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)

      SELECT FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
             LastName  = (SELECT TOP 1 LastName FROM Customer ORDER BY newid()),
             State     = (SELECT TOP 1 State FROM Customer ORDER BY newid())

    Thanks!
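
    One way to avoid the repeated per-column subqueries is to shuffle every column independently with ROW_NUMBER() OVER (ORDER BY NEWID()) and stitch the shuffles back together on the row number. A sketch in SQL Server syntax using the Customer columns from the question; it reads the table once per column rather than once per output cell:

      WITH f AS (SELECT FirstName, ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
           l AS (SELECT LastName,  ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
           s AS (SELECT State,     ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer)
      SELECT TOP (10) f.FirstName, l.LastName, s.State   -- 10 fake customers; adjust as needed
      FROM f
      JOIN l ON l.rn = f.rn
      JOIN s ON s.rn = f.rn
      ORDER BY f.rn;

    Adding a column only means adding one more CTE and one more join, which keeps the repetition down, though it still is not fully generic over an arbitrary column list.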

    Read the article

  • TechEd 2010 Schedule and Twitter Tool

    - by Scott Dorman
    If you’re going to TechEd 2010* (North America), be sure to check out the TechEd 2010 Schedule and Twitter Tool. If you have a mobile device (Windows Mobile 6.5, Android, iPhone), a tablet (Windows 7, iPad), or even a laptop (Windows Vista or Windows 7 gadgets) then this tool is essential. It allows you to view all of the session details and build your own customized schedule. You can also keep up with all of the TechEd related Twitter traffic from the same application. By default, the #TechEd hashtag is tracked, but you can add your own favorite hash tags as well. You can also send tweets using SMS. Technorati Tags: TechEd

    Read the article

  • Can you add identity to an existing column in SQL Server 2008?

    - by bmutch
    In all my searching I see that you essentially have to copy the existing table to a new table to change to an identity column for pre-2008. Does this apply to 2008 also? Thanks. The most concise solution I have found so far:

      CREATE TABLE Test ( id int identity(1,1), somecolumn varchar(10) );
      INSERT INTO Test VALUES ('Hello');
      INSERT INTO Test VALUES ('World');

      -- copy the table: use same schema, but no identity
      CREATE TABLE Test2 ( id int NOT NULL, somecolumn varchar(10) );
      ALTER TABLE Test SWITCH TO Test2;

      -- drop the original (now empty) table
      DROP TABLE Test;

      -- rename new table to old table's name
      EXEC sp_rename 'Test2','Test';

      -- see same records
      SELECT * FROM Test;

    Read the article

  • SQL query for getting data in two fields from one column.

    - by AmiT
    I have a table [Tbl1] containing two fields: ID as int and TextValue as nvarchar(max). Suppose there are 7 records. I need a resultset that has two columns, Text1 and Text2. Text1 should have the first 4 records and Text2 should have the remaining 3 records.

      [Tbl1]
      ID | TextValue
      1. | Apple
      2. | Mango
      3. | Orange
      4. | Pineapple
      5. | Banana
      6. | Grapes
      7. | Sapota

    Now, the result-set should have:

      Text1     | Text2
      Apple     | Banana
      Mango     | Grapes
      Orange    | Sapota
      Pineapple |
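
    A hedged sketch of one way to get that shape in SQL Server syntax, using the Tbl1 definition above: number the rows, split at the midpoint (CEILING of half the row count), and pair each row in the first half with the row half a table further down:

      WITH Numbered AS (
          SELECT TextValue,
                 ROW_NUMBER() OVER (ORDER BY ID) AS rn,
                 COUNT(*)     OVER ()            AS total
          FROM Tbl1
      )
      SELECT a.TextValue AS Text1,
             b.TextValue AS Text2            -- NULL when the second half runs out, as in the last row above
      FROM Numbered a
      LEFT JOIN Numbered b ON b.rn = a.rn + CEILING(a.total / 2.0)
      WHERE a.rn <= CEILING(a.total / 2.0)
      ORDER BY a.rn;

    With the 7 sample rows, rows 1-4 land in Text1 and rows 5-7 pair up beside them, leaving Text2 empty on the last line.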

    Read the article

  • Is there a way to back up a SQL 2005 database structure fully, but only the data in a certain schema?

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time, and is merely there for extremely fast read purposes. Since the data is redundant, and VERY large - I'd like to exclude it from being backed up. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours - for use in the event of a failure. I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never have that data file backed up, yet still in the event of a failure its structure could be restored (and other DDL stuff like procs, views, etc)? Somewhat related, can I also have these tables not do transaction logging, if I go to "Full" backup mode for the rest of the database?
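
    SQL Server backs up by file and filegroup rather than by schema, so the usual approach is exactly the split described above: move the web schema's tables onto their own filegroup and then back up only the filegroups that matter. A hedged sketch (database, filegroup and path names are illustrative):

      ALTER DATABASE MyDb ADD FILEGROUP WebData;
      ALTER DATABASE MyDb ADD FILE
          (NAME = N'MyDb_WebData', FILENAME = N'E:\Data\MyDb_WebData.ndf')
          TO FILEGROUP WebData;

      -- Rebuild each large web.* table (or its clustered index) ON WebData, then back up the rest:
      BACKUP DATABASE MyDb FILEGROUP = 'PRIMARY'
          TO DISK = N'E:\Backup\MyDb_primary.bak';

    The DDL (procs, views, table definitions) is stored in the system metadata on the primary filegroup, so a primary-filegroup backup should still capture it. Logging cannot be switched off per table, but the regeneration procedures can be made cheaper to log by running under the BULK_LOGGED or SIMPLE recovery model, if that is acceptable for the rest of the database.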

    Read the article

  • schema.org 'reviewRating' tag not recognized by Google snippet testing tool

    - by saravanak
    I'm trying to add more structural information to my webpages by using the microdata format suggested in www.schema.org. The procedure seems straightforward, but I'm having issues validating my results in the Google Rich Snippets Testing Tool. Check out this review page; here I'm using the 'reviewRating' property item to specify rating values for that particular review. I followed the same format as defined in schema.org/Rating, but this markup fails validation in Google's rich snippet testing tool with the following error info:

      Item Type: http://schema.org/rating
      reviewrating = 5
      ratingvalue = 5
      Warning: Property "reviewrating" was not found.

    Read the article

  • Is it safe to convert varchar and char into nvarchar and nchar in SQL Server?

    - by Svish
    We currently have a number of columns in the database which are of type varchar. The application that uses them is in C# and it uses Linq2Sql for the communication (or what to call it). We would like to support unicode characters, which means we would have to convert the varchar columns into nvarchar. Is this a safe operation? Is it just a matter of changing the column type and updating the dbml file, or is there more that needs to be done? Any changes in the C# code? Do I need to somehow convert the text that already exists in the database manually, or is it handled for me?
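
    The column change itself is one ALTER per column, and SQL Server converts the existing varchar data as part of the statement (it rewrites the rows, so expect time and log usage on large tables). A minimal sketch with illustrative names:

      ALTER TABLE dbo.Customers
          ALTER COLUMN Notes NVARCHAR(500) NULL;   -- previously VARCHAR(500) NULL

    On the Linq2Sql side the .dbml column mapping (the DbType in the Column attribute) should be refreshed so parameters are sent as nvarchar; the generated property stays a .NET string, so the C# code itself typically does not need to change.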

    Read the article

  • SQL Source Control with Vault Support Officially Released

    - by Ajarn Mark Caldwell
    HOORAY!  It is officially here!  Today, Red-Gate officially released SQL Source Control version 2.1 with support for Vault. While we have been happily and successfully running the beta version (a.k.a. the Early Access release) of Red-Gate SQL Source Control with support for Vault for quite a while, it is good to have the official RTM (or GOLD, or PROD, or whatever you call your “no-longer-in-beta”) release of the product. As a courtesy to those who have not already read the series, allow me to provide you with these links to my previous posts about this fantastic tool. Using SQL Source Control with Fortress or Vault – Part 1 – Introduction and initial thoughts about the tool and source controlling SQL code in general. Using SQL Source Control with Fortress or Vault – Part 2 – Additional details about included features and a few warnings. Source Control and SQL Development – Part 3 – How we did it in the good ol’ days before this product came along. Using SQL Source Control and Vault Professional Part 4 – A few closing thoughts.

    Read the article

  • Tool to compare the tables in two different databases

    - by user191124
    I am using Toad. Frequently I need to compare tables in two different test environments. The tables present in them are the same but the data differs. I just need to know what the differences are in the same tables which are in two different databases. Are there any tools which can be installed on Windows and used to compare them? Much appreciate your help :)
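
    Apart from dedicated schema/data compare tools, a quick script-only check is a set difference run in both directions. A sketch, assuming both environments are reachable from one session (for example through a database link or linked server; the names here are placeholders), using EXCEPT in SQL Server syntax (MINUS on Oracle):

      -- Rows present in environment 1 but missing or different in environment 2:
      SELECT * FROM env1.SomeTable
      EXCEPT
      SELECT * FROM env2.SomeTable;

      -- ...and the reverse direction:
      SELECT * FROM env2.SomeTable
      EXCEPT
      SELECT * FROM env1.SomeTable;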

    Read the article

  • Linq to SQL performance on highly loaded web applications

    - by Alex
    I started working with linq to SQL several weeks ago. I got really tired of working with SQL server directly through the SQL queries (sqldatareader, sqlcommand and all this good stuff). After hearing about linq to SQL and mvc I quickly moved all my projects to these technologies. I expected linq to SQL to work slower but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using datareaders. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this: UPDATE table SET value=value+1 WHERE ID=@Id It worked with no problems obviously. But with linq to SQL the data is taken in the beginning, moved to the class, changed and then saved: Stats.registeredusers++; Db.submitchanges(); Let's say there were 100 000 users. Linq will say "let it be 100 001" instead of "let it be increased by 1". But if the value of users has already been increased (that happens in my site all the time) then linq will be like "oops, this value is already 100 001. Whatever, I'll throw an exception". You can change this behavior so that it won't throw an exception but it still will not set the value to 100 002. Like I said, it happened with me all the time. The stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with linq?
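
    For counters like this, the usual fix is to keep the increment as a single atomic statement on the server instead of a read-modify-write round trip through the entity; with Linq to SQL one common approach is to issue that statement through DataContext.ExecuteCommand rather than loading, changing and submitting the row. A sketch of the statement itself, with illustrative table and column names:

      UPDATE Stats
         SET RegisteredUsers = RegisteredUsers + 1   -- computed under the row lock, so concurrent
       WHERE ID = @Id;                               -- requests cannot lose or double-count updates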

    Read the article

  • Red Samurai Performance Audit Tool – OOW 2013 release (v 1.1)

    - by JuergenKress
    We are running our Red Samurai Performance Audit tool and monitoring ADF performance in various projects already for about a year and a half. It helps us a lot to understand ADF performance bottlenecks and tune slow ADF BC View Objects or optimise large ADF BC fetches from DB. There is a special update implemented for OOW'13 - advanced ADF BC statistics are collected directly from your application ADF BC runtime and later displayed as graphical information in the dashboard. I will be attending OOW'13 in San Francisco; feel free to stop me and ask about this tool - I will be happy to give it away and explain how to use it in your project. Original audit screen with ADF BC performance issues, this is part of our Audit console application. Audit console v1.1 is improved with one more tab - Statistics. This tab displays all SQL SELECT statements produced by ADF BC over time, logged users, AM access load distribution and number of AM activations along with user sessions. Available graphs:

    - Daily Queries - total number of SQL selects per day
    - Hourly Queries - last 48 hours
    - Logged Users - total number of user sessions per day
    - SQL Selects per Application Module - workload per Application Module
    - Number of Activations and User sessions - last 48 hours - displays stress load

    Read the complete article here. WebLogic Partner Community: for regular information become a member in the WebLogic Partner Community, please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Red Samurai,ADF performance,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Unattended Install of SQL Server 2005 Express with LOCAL Server InstanceName

    - by Jeff
    I'm creating an install package using InnoSetup and installing SQL Server 2005 Express. Here's the code that appears in my Run section:

      Filename: "{app}\SQL Server 2005 Express\SQLEXPR.exe" ; Parameters: "-q /norebootchk /qn reboot=ReallySuppress addlocal=all INSTANCENAME=(LOCAL) SCCCHECKLEVEL=IncompatibleComponents:1;MDAC25Version:0 ERRORREPORTING=2 SQLAUTOSTART=1 SAPWD=passwordhere SECURITYMODE=SQL"; WorkingDir: {app}\SQL Server 2005 Express; StatusMsg: Installing Microsoft SQL Server 2005 Express... Please Wait...;Check:SQLVerifyInstall

    What I'm trying to accomplish is to have the SQL Server package install but only have the instance name reference the machine name and nothing more. What I'm receiving instead is a named instance, such as MachineName\SQLEXPRESS, which is not what I want. I need a local instance instead of a named instance due to the way my code is written to be able to install and talk with the databases in question. I would change it, trust me, were it not for the fact that this install package is a replacement for a previous package that used the MSDE installer. I have to be able to support both through code. Any suggestions are welcome, but a clear and concise method to get the installer to quietly install using only the machine name is my main goal. Thanks for the help and support!

    Read the article

  • Wiki based requirements engineering tool

    - by Shanon
    Hi, I'm looking to build a wiki-based tool that helps/aids in the requirements engineering process. More specifically, I am hoping to end up with a tool that helps inexperienced users easily create and design requirements documents on a wiki platform. I was wondering if there are any wikis/wiki platforms that either already do this, are easily extendible, or would be worth looking at for this purpose. For instance, some of the features I was hoping to add would be to add structure to a document so that information is filled out in a standardised manner. Another idea I was looking at was to somehow create relationships between different types of documents (for example, a goal diagram evolves into / helps in the development of the class diagram). So far I have come across FOSwiki, which claims to be fully customisable... but I'm not sure what that means and what I can really do with it. Any input on FOSwiki is also highly appreciated.

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein Distance calculation I'm doing. I need to do the following:

    1. Get the record with the minimum distance for the source string as well as a trimmed version of the source string
    2. Pick the record with the minimum distance
    3. If the min distances are equal (original vs trimmed), choose the trimmed one with the lowest distance
    4. If there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

      DECLARE @Results TABLE ( ID int, [Name] nvarchar(200), Distance int, Frequency int, Trimmed bit )

      INSERT INTO @Results
      SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) As Distance, Frequency, 'False' As Trimmed FROM MyTable

      INSERT INTO @Results
      SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) As Distance, Frequency, 'True' As Trimmed FROM MyTable

      SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is to:
    - Not dump the results to a temporary table
    - Do only 1 select from `MyTable`
    - Set the results right in the select from the initial select statement. (Since select will set variables and you can set multiple variables in one select statement)

    I know there has to be a good implementation to this but I can't figure it out... this is as far as I got:

      SELECT top 1 @ResultID = ID, @Result = [Name],
             (dbo.Levenshtein(@Source, [Name])) As distOrig,
             (dbo.Levenshtein(@SourceTrimmed, [Name])) As distTrimmed,
             Frequency
      FROM MyTable
      WHERE /* ... yeah I'm lost */
      ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
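
    One way to collapse this to a single scan (a sketch in SQL Server 2008+ syntax, assuming the same dbo.Levenshtein function and MyTable columns): compute both distances per row with CROSS APPLY (VALUES ...), then let one TOP (1) ... ORDER BY assign all the variables. The tie-breaks below follow the rules stated above, lowest distance first, then the trimmed match, then the highest frequency:

      SELECT TOP (1)
             @ResultID      = t.ID,
             @Result        = t.[Name],
             @ResultDist    = d.Distance,
             @ResultTrimmed = d.Trimmed
      FROM MyTable t
      CROSS APPLY (VALUES
              (dbo.Levenshtein(@Source,        t.[Name]), CAST(0 AS bit)),   -- distance to the original source string
              (dbo.Levenshtein(@SourceTrimmed, t.[Name]), CAST(1 AS bit))    -- distance to the trimmed source string
           ) AS d (Distance, Trimmed)
      ORDER BY d.Distance, d.Trimmed DESC, t.Frequency DESC;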

    Read the article

  • Gladinet Cloud Desktop tool to manage Windows Azure Blob Storage from Windows Explorer

    - by kaleidoscope
    Gladinet Cloud Desktop is designed to make it easier for Windows Azure users to manage Windows Azure Blob storage directly from Windows Explorer. The solution makes it possible for Windows Azure Blob storage to be mapped as a virtual network Drive. “You can map multiple Azure Blob Storage Accounts as side-by-side virtual folders in same network drive. You can do drag & drop between local drive and Azure drive. For more information -  http://www.ditii.com/2010/01/04/gladinet-cloud-desktop-tool-to-manage-windows-azure-blob-storage-from-windows-explorer/ For Downloading the tool – http://www.gladinet.com/p/download_starter_direct.htm   Ritesh, D

    Read the article

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry it's in Dutch, but I think you get my point) I will use 2 words: zieken and ziekenhuis. As you can see, the phrase 'zieken' is in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at minimum the results which I get from 'ziekenhuis'? Here's the query I use:

      SELECT DISTINCT d.DocID 'Id', d.Titel,
             (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') as Pic,
             'belDoc' as DocType
      FROM docs d
      JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID
      JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID
      WHERE hb.InstellingID = @instellingId
        AND ( FREETEXT(d.Doel, @searchstring)
           OR FREETEXT(d.Toepassingsgebied, @searchstring)
           OR FREETEXT(d.HtmlDocument, @searchstring)
           OR FREETEXT(d.extraTabblad, @searchstring) )
        AND d.StatusID NOT IN (1, 5)
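
    FREETEXT matches whole words (plus their inflectional forms), so 'zieken' is not matched inside 'ziekenhuis'; matching on the beginning of words needs a prefix term with CONTAINS instead. A sketch against one of the columns from the query above (the other FREETEXT calls would change the same way):

      SELECT d.DocID, d.Titel
      FROM docs d
      WHERE CONTAINS(d.HtmlDocument, '"zieken*"');   -- prefix term: matches zieken, ziekenhuis, ziekenhuizen, ...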

    Read the article

  • How Do I Escape Apostrophes in Field Values in SQL Server?

    - by Mikecancook
    I asked a question a couple of days ago about creating INSERTs by running a SELECT to move data to another server. That worked great until I ran into a table that has full-on HTML and apostrophes in it. What's the best way to deal with this? Luckily there aren't too many rows, so it is feasible as a last resort to 'copy and paste'. But eventually I will need to do this and the table by that time will probably be way too big to copy and paste these HTML fields. This is what I have now:

      select 'Insert into userwidget ([Type],[UserName],[Title],[Description],[Data],[HtmlOutput],[DisplayOrder],[RealTime],[SubDisplayOrder]) VALUES ('
           + ISNULL('N'''+Convert(varchar(8000),Type)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),Username)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),Title)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),Description)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),Data)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),HTMLOutput)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),DisplayOrder)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),RealTime)+'''','NULL') + ','
           + ISNULL('N'''+Convert(varchar(8000),SubDisplayOrder)+'''','NULL') + ')'
      from userwidget

    which works fine except for those pesky apostrophes in the HTMLOutput field. Can I escape them by having the query double up on the apostrophes, or is there a way of encoding the field result so it won't matter?
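
    Doubling every embedded single quote keeps the generated INSERT literal valid, and the usual trick is to wrap each converted value in REPLACE(..., '''', ''''''). A sketch of the pattern on two of the columns (the remaining columns would be wrapped the same way):

      SELECT 'Insert into userwidget ([Title],[HtmlOutput]) VALUES ('
           + ISNULL('N''' + REPLACE(Convert(varchar(8000), Title),      '''', '''''') + '''', 'NULL') + ','
           + ISNULL('N''' + REPLACE(Convert(varchar(8000), HTMLOutput), '''', '''''') + '''', 'NULL') + ')'
      from userwidget

    Note that Convert(varchar(8000), ...) will still truncate anything longer than 8000 characters; varchar(max) avoids that on SQL 2005 and later.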

    Read the article

  • SQL Server Command Timeouts: Timeout expired. The timeout period elapsed prior to completion of the operation

    - by user104295
    Hi there, we've had an ASP.NET web application deployed for a number of years now... last week we migrated to a slightly slower server to save some money. Now we're frequently getting command timeouts: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding". This is understandable in that a slower server will take longer to produce results; taking longer can produce a timeout. Is there any system-wide way to get SqlClient to set a longer timeout? We cannot change the code, as it's everywhere... we're using multiple data access technologies as well. Maybe there's a default command timeout setting on a connection string? We just need to increase it by 30 seconds; we're happy to wait a bit longer for queries to return. Thanks

    Read the article

  • My SQL query is only returning the children; I need the parent as well

    - by sico87
    My SQL query is only returning the children of the parent; I need it to return the parent as well.

      public function getNav($cat,$subcat){
          //gets all sub categories for a specific category
          if(!$this->checkValue($cat)) return false; //checks data
          $query = false;
          if($cat=='NULL'){
              $sql = "SELECT itemID, title, parent, url, description, image FROM p_cat WHERE deleted = 0 AND parent is NULL ORDER BY position;";
              $query = $this->db->query($sql) or die($this->db->error);
          }else{
              //die($cat);
              $sql = "SET @parent = (SELECT c.itemID FROM p_cat c WHERE url = '".$this->sql($cat)."' AND deleted = 0);
                      SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image,
                             (SELECT c2.url FROM p_cat c2 WHERE c2.itemID = c1.parent LIMIT 1) as parentUrl
                      FROM p_cat c1
                      WHERE c1.deleted = 0 AND c1.parent = @parent
                      ORDER BY c1.position;";
              $query = $this->db->multi_query($sql) or die($this->db->error);
              $this->db->store_result();
              $this->db->next_result();
              $query = $this->db->store_result();
          }
          return $query;
      }
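
    If the goal is the parent row plus its children in one result set, the second branch's WHERE clause can match either the parent's own itemID or rows whose parent column points at it. A sketch of just the SQL (same table and columns as above, with a placeholder URL; the parentUrl subselect is left out for brevity):

      SET @parent = (SELECT c.itemID FROM p_cat c WHERE url = 'some-category-url' AND deleted = 0);

      SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image
      FROM p_cat c1
      WHERE c1.deleted = 0
        AND (c1.itemID = @parent OR c1.parent = @parent)   -- the parent row itself plus its children
      ORDER BY c1.position;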

    Read the article

  • Turning on multiple result sets in an ODBC connection to SQL Server

    - by David Griffiths
    I have an application that originally needed to connect to Sybase (via ODBC), but I've needed to add the ability to connect to SQL Server as well. As ODBC should be able to handle both, I thought I was in a good position. Unfortunately, SQL Server will not let me, by default, nest ODBC commands and ODBCDataReaders - it complains the connection is busy. I know that I had to specify that multiple active result sets (MARS) were allowed in similar circumstances when connecting to SQL Server via a native driver, so I thought it wouldn't be an issue. The DSN wizard has no entry for this when creating a System DSN. Some people have provided registry hacks to get around this, but this did not work (add a MARS_Connection with a value of Yes to HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\system-dsn-name). Another suggestion was to create a file-dsn, and add "MARS_Connection=YES" to that. Didn't work. Finally, a DSN-less connection string. I've tried this one (using MultipleActiveResultSets - same variable as a Sql Server connection would use), "Driver={SQL Native Client};Server=xxx.xxx.xxx.xxx;Database=someDB;Uid=u;Pwd=p;MultipleActiveResultSets=True;" and this one: "Driver={SQL Native Client};Server=192.168.75.33\\ARIA;Database=Aria;Uid=sa;Pwd=service;MARS_Connection=YES;" I have checked the various connection-string sites - they all suggest what I've already tried.

    Read the article
