Search Results

Search found 27468 results on 1099 pages for 'sql quiz'.

  • help with t-sql data aggregation

    - by stackoverflowuser
    Based on the following table:

        Area  S1  S2  S3  S4
        --------------------
        A1     5  10  20   0
        A2    11  19  15  20
        A3     0   0   0  20

    I want to generate an output that will give the number of columns not having "0". So the output would be:

        Area  S1  S2  S3  S4  Count
        ---------------------------
        A1     5  10  20   0      3
        A2    11  19  15  20      4
        A3     0   0   0  20      1
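
    A minimal sketch of one way to get that count, assuming the table is called dbo.AreaScores (a hypothetical name) and the four columns are fixed: add up a CASE expression per column.

        -- Count, per row, how many of the four columns are non-zero.
        -- dbo.AreaScores is a placeholder name for the table above.
        SELECT Area, S1, S2, S3, S4,
               CASE WHEN S1 <> 0 THEN 1 ELSE 0 END
             + CASE WHEN S2 <> 0 THEN 1 ELSE 0 END
             + CASE WHEN S3 <> 0 THEN 1 ELSE 0 END
             + CASE WHEN S4 <> 0 THEN 1 ELSE 0 END AS [Count]
        FROM dbo.AreaScores;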

    Read the article

  • t-sql getting leaf nodes

    - by stackoverflowuser
    Based on the following table (I have kept spaces between the rows for clarity):

        Path
        -----------
        \node1\node2\node3
        \node1\node2\node3\node5
        \node1\node6\node3
        \node1\node4\node3
        \node1\node4\node3\node7
        \node1\node4\node3\node8
        \node1\node4\node3\node9
        \node1\node4\node3\node9\node10

    I want to get all the paths containing a leaf node. So, for instance, the following will be considered leaf nodes for path \node1\node4\node3:

        \node1\node4\node3\node7
        \node1\node4\node3\node8
        \node1\node4\node3\node9\node10

    The following will be the output:

        Output
        ---------------------------
        \node1\node2\node3\node5
        \node1\node6\node3
        \node1\node4\node3\node7
        \node1\node4\node3\node8
        \node1\node4\node3\node9\node10

    Please suggest. Thanks.
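
    A minimal sketch of one approach, assuming the table is called dbo.Paths (a hypothetical name): a path is a leaf path if no other row extends it.

        -- Keep only rows that have no descendant row.
        -- Assumes Path values contain no LIKE wildcard characters (%, _, [).
        SELECT p.Path
        FROM dbo.Paths AS p
        WHERE NOT EXISTS (
            SELECT 1
            FROM dbo.Paths AS child
            WHERE child.Path LIKE p.Path + '\%'
        );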

    Read the article

  • How to realize this in Transact SQL ( SQL Server 2008 )

    - by David
    I just want an update trigger like this PostgreSQL version... It seems to me there is no NEW. and OLD. in MSSQL?

        CREATE OR REPLACE FUNCTION "public"."ts_create" () RETURNS trigger AS
        DECLARE
        BEGIN
          NEW.ctime := now();
          RETURN NEW;
        END;

    Googled already, but nothing to be found unfortunately... Ideas?
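
    A minimal sketch of the SQL Server equivalent, assuming a hypothetical table dbo.MyTable with a key column Id and a datetime column ctime. SQL Server triggers fire per statement rather than per row, and the inserted and deleted pseudo-tables stand in for NEW and OLD.

        -- Stamp ctime on newly inserted rows; for an update trigger, change AFTER INSERT to AFTER UPDATE.
        CREATE TRIGGER trg_MyTable_ts_create
        ON dbo.MyTable
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE t
            SET ctime = GETDATE()
            FROM dbo.MyTable AS t
            INNER JOIN inserted AS i ON i.Id = t.Id;
        END;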

    Read the article

  • Introducing Microsoft SQL Server 2008 R2 - Business Intelligence Samples

    - by smisner
    On April 14, 2010, Microsoft Press (blog | twitter) released my latest book, co-authored with Ross Mistry (twitter), as a free ebook download - Introducing Microsoft SQL Server 2008 R2. As the title implies, this ebook is an introduction to the latest SQL Server release. Although you'll find a comprehensive review of the product's features in this book, you will not find the step-by-step details that are typical in my other books. For those readers who are interested in a more interactive learning experience, I have created two sample files for download: the IntroSQLServer2008R2Samples project and the Sales Analysis workbook. Here's a recap of the business intelligence chapters and the samples I used to generate the screenshots, by chapter:

    Chapter 6: Scalable Data Warehousing covers a new edition of SQL Server, Parallel Data Warehouse. Understandably, Microsoft did not ship me the software and hardware to set up my own Parallel Data Warehouse environment for testing purposes, and consequently you won't see any screenshots in this chapter. I received a lot of information and a lot of help from the product team during the development of this chapter to ensure its technical accuracy.

    Chapter 7: Master Data Services is a new component in SQL Server. After you install Master Data Services (MDS), which is a separate installation from SQL Server although it's found on the same media, you can install sample models to explore (which is what I did to create screenshots for the book). To do this, you deploy packages found at \Program Files\Microsoft SQL Server\Master Data Services\Samples\Packages. You will first need to use the Configuration Manager (in the Microsoft SQL Server 2008 R2\Master Data Services program group) to create a database and a Web application for MDS. Then when you launch the application, you'll see a Getting Started page which has a Deploy Sample Data link that you can use to deploy any of the sample packages.

    Chapter 8: Complex Event Processing is an introduction to another new component, StreamInsight. This topic was far too large to cover in depth in a single chapter, so I focused on information such as architecture, development models, and an overview of the key sections of code you'll need to develop for your own applications. StreamInsight is an engine that operates on data in flight and as such has no user interface that I could include in the book as screenshots. The November CTP version of SQL Server 2008 R2 included code samples as part of the installation, but these are not the official samples that will eventually be available on CodePlex. At the time of this writing, the samples are not yet published.

    Chapter 9: Reporting Services Enhancements provides an overview of all the changes to Reporting Services in SQL Server 2008 R2, and there are many! In previous posts, I shared more details than you'll find in the book about new functions (Lookup, MultiLookup, and LookupSet), properties for page numbering, and the new global variable RenderFormat. I will confess that I didn't use actual data in the book for my discussion of the Lookup functions, but I did create real reports for the blog posts and will upload those separately. For the other screenshots and examples in the book, I have created the IntroSQLServer2008R2Samples project for you to download. To preview these reports in Business Intelligence Development Studio, you must have the AdventureWorksDW2008R2 database installed, and you must download and install SQL Server 2008 R2. For the map report, you must execute the PopulationData.sql script that I included in the samples file to add a table to the AdventureWorksDW2008R2 database. The IntroSQLServer2008R2Samples project includes the following files:

        01_AggregateOfAggregates.rdl to illustrate the use of embedded aggregate functions
        02_RenderFormatAndPaging.rdl to illustrate the use of page break properties (Disabled, ResetPageNumber), the PageName property, and the RenderFormat global variable
        03_DataSynchronization.rdl to illustrate the use of the DomainScope property
        04_TextboxOrientation.rdl to illustrate the use of the WritingMode property
        05_DataBar.rdl
        06_Sparklines.rdl
        07_Indicators.rdl
        08_Map.rdl to illustrate a simple analytical map that uses color to show population counts by state
        PopulationData.sql to provide the data necessary for the map report

    Chapter 10: Self-Service Analysis with PowerPivot introduces two new components to the Microsoft BI stack, PowerPivot for Excel and PowerPivot for SharePoint, which you can learn more about at the PowerPivot site. To produce the screenshots for this chapter, I created the Sales Analysis workbook, which you can download (although you must have Excel 2010 and the PowerPivot for Excel add-in installed to explore it fully). It's a rather simple workbook because space in the book did not permit a complete exploration of all the wonderful things you can do with PowerPivot. I used a tutorial that was available with the CTP version as a basis for the report, so it might look familiar if you've already started learning about PowerPivot.

    In future posts, I'll continue exploring the new features in greater detail. If there are any special requests, please let me know!

    Read the article

  • Support for 15,000 Partitions in SQL Server 2008 SP2 and SQL Server 2008 R2 SP1

    In SQL Server 2008 and SQL Server 2008 R2, the number of partitions on tables and indexes is limited to 1,000. This paper discusses how SQL Server 2008 SP2 and SQL Server 2008 R2 SP1 address this limitation by providing an option to increase the limit to 15,000 partitions. It describes how support for 15,000 partitions can be enabled and disabled on a database. It also talks about performance characteristics, certain limitations associated with this support, known issues, and their workarounds. This support is targeted to enterprise customers and ISVs with large-scale decision support or data warehouse requirements.
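
    As a hedged illustration only: working from memory of the whitepaper, the per-database switch is exposed through a stored procedure along the lines of sp_db_increased_partitions; confirm the exact procedure and parameter names against the paper before relying on this.

        -- Enable support for up to 15,000 partitions on one database (name is a placeholder).
        EXEC sp_db_increased_partitions @dbname = N'MyWarehouse', @increased_partitions = 'ON';

        -- Turn it back off (only allowed while no table or index exceeds 1,000 partitions).
        EXEC sp_db_increased_partitions @dbname = N'MyWarehouse', @increased_partitions = 'OFF';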

    Read the article

  • Data-tier Applications in SQL Server 2008 R2

    - by BuckWoody
    I had the privilege of presenting to the Adelaide SQL Server User Group in Australia last evening, and I covered the Data-tier Application Component (DAC) and the Utility Control Point (UCP) from SQL Server 2008 R2. Here are some links from that presentation:

        Whitepaper: http://msdn.microsoft.com/en-us/library/ff381683.aspx
        Tutorials: http://msdn.microsoft.com/en-us/library/ee210554(SQL.105).aspx
        From Visual Studio: http://msdn.microsoft.com/en-us/library/dd193245(VS.100).aspx
        Restrictions and capabilities by Edition: http://msdn.microsoft.com/en-us/library/cc645993(SQL.105).aspx
        Glenn Berry's blog entry on scripts for UCP/DAC: http://www.sqlservercentral.com/blogs/glennberry/archive/2010/05/19/sql-server-utility-script-from-24-hours-of-pass.aspx
        Objects supported by a DAC: http://msdn.microsoft.com/en-us/library/ee210549(SQL.105).aspx

    Read the article

  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
    We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I've presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you're ready to just go home and not do anything for a few weeks. This time I'll have a chance to walk around and talk with our users and get a good idea of what's working and what's not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team!

    3.4 miles across and back – thanks Ashley for signing me up for the run!

    This year is going to be a bit crazy. Work-wise I'll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy though – running across the Bay Bridge (I don't run), swimming the Bay (I don't swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky though). But back to work – let's talk about EVERYTHING you can expect from the SQL Developer team.

    Booth Hours

    We'll have 2 'demo pods' in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we'll be there under the signs that say 'SQL Developer.' There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say 'Hi!', and let us know what you like and what you think we can do better. Seriously.

        Monday 10AM – 6PM
        Tuesday 9:45AM – 6PM
        Wednesday 9:45AM – 4PM

    Presentations

    Stop by for an hour, pull up a chair, sit back and soak in all the SQL Developer goodness. You'll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We'll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.

        Monday 10:45 – What's New in SQL Developer
        Monday 4:45 – Why Move to Oracle Application Express Listener
        Tuesday 10:15 – Using Subversion in Oracle SQL Developer Data Modeler
        Tuesday 11:45 – Oracle SQL Developer Tips & Tricks
        Tuesday 5:00 – Database Design with Oracle SQL Developer Data Modeler
        Wednesday 11:45 – Migrating Third-Party Databases and Applications to Oracle Exadata
        Wednesday 3:30 – 11g Enterprise Options and Management Packs for Developers

    Hands On Labs (HOLs)

    The Hands On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We'll provide the hardware, software, and training materials. It's self-paced, but we'll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don't have to worry about burning out. And there's no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we'll take care of the rest.

        Wednesday 10:15 – PL/SQL Development and Unit Testing with Oracle SQL Developer
        Wednesday 11:45 – Performance Tuning with Oracle SQL Developer
        Thursday 11:15 – The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler

    Some Parting Advice

    Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don't be shy; walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open and up for meeting and talking with attendees. Just understand that if there's a line, you might only get a minute or two. It's a LONG conference though, so you'll have plenty of time to catch up with everyone. If you're going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That's right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!

    Read the article

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, "Automating Deployments with SQL Compare command line", I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I'm looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change.

    There are many reasons why up-to-date documentation is valuable: for example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risk of making incorrect decisions when making changes. Documentation is also very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most!

    SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word, or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate's SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date.

    The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you're using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don't already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won't cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may also be interested in Red Gate's SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control.

    The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
        $serverName = "server\instance"
        $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list
        $userName = "username"
        $password = "password"

        # Path to SQLDoc.exe
        $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

        $arguments = @(
            "/server:$($serverName)",
            "/database:$($databaseName)",
            "/username:$($userName)",
            "/password:$($password)",
            "/filetype:html",
            "/outputfolder:.",
            # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line
            "/name:$databaseName Report",
            "/copyrightauthor:$([Environment]::UserName)"
        )

        write-host $arguments
        & $SQLDocPath $arguments

    There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line, please see the SQL Doc command line documentation.

    If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.

    Conclusion

    Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be regenerated on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

    Read the article

  • Best practice to query data from MS SQL Server in C Sharp?

    - by Bruno
    What is the best way to query data from a MS SQL Server in C#? I know that it is not good practice to have an SQL query in the code. Is the best way to create a stored procedure and call it from C# with parameters?

        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("StoredProc", conn) { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            command.ExecuteNonQuery();
            conn.Close();
        }
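
    For the server-side half of that pattern, a minimal sketch with hypothetical names (dbo.GetOrdersByCustomer, dbo.Orders, @CustomerId); the SqlCommand above would name this procedure and supply the parameter through command.Parameters.

        -- A parameterized procedure (hypothetical names) that the SqlCommand could call
        -- with CommandType.StoredProcedure; the parameter keeps user input out of the SQL text.
        CREATE PROCEDURE dbo.GetOrdersByCustomer
            @CustomerId int
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderId, OrderDate, TotalDue
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId;
        END;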

    Read the article

  • How can I create The Oatmeal like quizzes (http://theoatmeal.com/quizzes) using Drupal module quiz ?

    - by vr3690
    Hi, I am trying to create quizzes like the ones found here: http://theoatmeal.com/quizzes on my Drupal site, using Drupal's Quiz module (http://drupal.org/project/quiz). Basically, every answer to every question in a quiz will have a particular weight. Say answer 1 is worth 2 marks, answer 2 is worth 3 marks, answer 3 is worth 4 marks, and so on. Eventually all of these get added up and the result is shown according to the final tally of marks. Can anyone show me the steps for how to make such quizzes using the Quiz module or some other module/method?

    Read the article

  • The best, in the West

    - by Fatherjack
    As many of you know, I run the SQL South West user group and we are currently in full flow preparing to stage the UK's second SQL Saturday. The SQL Saturday spotlight is going to fall on Exeter in March 2013. We have full-day sessions on Friday 8th with some truly amazing speakers giving their insights and experience in some vital areas of working with SQL Server:

        Dave Ballantyne and Dave Morrison – TSQL and internals
        Christian Bolton and Gavin Payne – Mission critical data platforms on Windows Server 2012
        Denny Cherry – SQL Server Security
        André Kamman – Powershell 3.0 for SQL Server Administrators and Developers
        Mladen Prajdic – From SQL Traces to Extended Events – The next big switch

    A number of people have claimed that the choice is too good and they'd have trouble selecting just one session to attend. I can see how this is a problem but hope that they make their minds up quickly. The venue is a bespoke conference suite in the centre of Exeter but has limited capacity, so we are working on a first-come, first-served basis. All the session details and booking and travel information can be found on our user group website.

    The Saturday will be a day of free, 50-minute sessions on all aspects of SQL Server from almost 30 different speakers. If you would like to submit a session then get a move on, as submissions close on 8th January 2013 (that's less than a month away). We are really interested in getting new speakers started, so we have a lightning talk session where you can come along and give a small talk (anywhere from 5 to 15 minutes long) about anything connected with SQL Server as a way to introduce you to what it's like to be a speaker at an event. Details on registering to attend and to submit a session (lightning talks need to be submitted too, please) can be found on our SQL Saturday pages. This is going to be the biggest and best bespoke SQL Server conference to ever take place this far south west in the UK, and we aim to give everyone who comes to either day a real experience of the South West, so we have a few surprises for you on the day.

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high-traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out and, although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use rules out SQL dependency caching. We tried SQL replication as a kind of load balancing, but that didn't prove very successful because the replication process is quite demanding on the servers too and it needed to be done frequently, as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised based on the user we can only do so much.

    Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query then it gets recorded, and the result is requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications.

    This seems like a good idea in theory. There's still the same amount of data moving back and forth from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that sort of emulates requests and responses from SQL Server, whether SQL Server's own caching is already doing enough of this that it wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to relevant projects.

    Read the article

  • SQL Server for the Oracle DBA Links

    - by BuckWoody
    I do a presentation (and a class) called "SQL Server for the Oracle DBA". It's a non-marketing overview that gives you the basics of working with SQL Server if you're already familiar with how Oracle works. This class and these links DO NOT help you with "Why should I use Oracle/SQL Server instead of Oracle/SQL Server" - I'll assume you're already there, and if not, there are LOTS of sites to help you make that decision. Although these links might contain slight marketing slants (I don't control them) I've tried to get the best links I can. Feel free to comment here to add more/better links. As such, these aren't links that help you work with Oracle - they are links to help you work with SQL Server. Some of them contain more information than you actually need; others don't have nearly enough. Taken together (and with the class) you're able to get done what you need to do.

        "Practical SQL Server for Oracle Professionals" - a Microsoft whitepaper, probably the best place to get started: http://download.microsoft.com/download/6/9/d/69d1fea7-5b42-437a-b3ba-a4ad13e34ef6/SQLServer2008forOracle.docx
        Free training: http://technet.microsoft.com/en-us/sqlserver/dd548020.aspx
        Classroom training (will cost you): http://www.microsoft.com/learning/en/us/course.aspx?ID=50068A&locale=en-us
        Terminology differences: http://www.associatedcontent.com/article/2383466/oracle_and_sql_server_basic_terminology.html
        Datatype mapping between Oracle and SQL Server: http://msdn.microsoft.com/en-us/library/ms151817.aspx
        The "other" direction - can still be useful for the Oracle professional to see the other side: http://blog.benday.com/archive/2008/10/23/23195.aspx

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you explained the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to start to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM in this scenario, although most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language. But with, let's say, C# and SQL in one page, you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned you can stop it from happening with parameter-based queries. However, I want to highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction as well as losing any level of compile-time error capturing on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individual highlighted issues and more on the bigger picture of whether it is now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism.

    Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
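
    For context, a minimal sketch of what a parameter-based query looks like by the time it reaches SQL Server; micro ORMs such as Dapper issue parameterized batches of this shape rather than concatenated strings (table and parameter names here are hypothetical):

        -- The value travels as a typed parameter and is never spliced into the SQL text,
        -- which is what blocks the classic injection vector.
        EXEC sp_executesql
            N'SELECT Id, Name FROM dbo.Users WHERE Email = @Email',
            N'@Email nvarchar(256)',
            @Email = N'someone@example.com';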

    Read the article

  • WordPress is now nicely supported on SQL Server (and SQL Azure for that matter)

    - by Eric Nelson
    WordPress is enormously popular for blogs and full websites thanks to an awesome ecosystem which has built up around it, the simplicity (relatively) of getting it up and running, plus the flexibility to "bend it" in all sorts of directions. When I say bend, check out the following, which are all WordPress sites:

        My "back up blog" http://iupdateable.wordpress.com/
        My group's "odd site" :) http://ubelly.com
        My favourite "cheap games" site http://www.frugalgaming.co.uk/

    WordPress users typically run their sites on Linux and MySQL, although PHP (the language in which WordPress is written) can happily run on Windows. Both are fine technologies in their own right, but for me (and probably a fair few others) I would love to use WordPress but with the technologies I know best (aka Windows, IIS and SQL Server). However, that has proven to be actually rather tricky in practice to get working – until now.

    Earlier last month OmniTI released a patch for WordPress which provides SQL Server and SQL Azure support. In parallel with that, some fine folks inside Microsoft have also created http://wordpress.visitmix.com which contains information about running WordPress on the Microsoft platform with a particular focus on SQL Server and SQL Azure. Top stuff!

    To run WordPress with SQL Server: download and install the WordPress on SQL Server Distro/Patch. And then you will quite likely need to migrate: check out how to Migrate to Windows and SQL Server by Zach Owens, who is moving his blog to Windows and SQL Server. Enjoy!

    Related Links

        Running PHP on IIS on Windows: http://php.iis.net/
        If PHP is not your thing, then the following blog engines are .NET based:
        BlogEngine http://www.dotnetblogengine.net/
        DasBlog http://www.dasblog.info/
        Subtext http://subtextproject.com/ (which happens to power http://geekswithblogs.net where my main blog is http://geekswithblogs.net/iupdateable)

    Read the article

  • SQL Server: Must numbers all be specified with latin numeral digits?

    - by Ian Boyd
    Does SQL Server expect numbers to be specified with digits from the Latin alphabet, e.g.:

        0123456789

    Is it valid to give SQL Server digits in other alphabets? Rosetta Stone:

        Latin:   0123456789
        Arabic:  ٠١٢٣٤٥٦٧٨٩
        Bengali: ০১২৩৪৫৬৭৮৯

    I know that the client (ADO) will convert 8-bit strings to 16-bit Unicode strings using the current culture. But the client is also converting numbers to strings using their current culture, e.g.:

        SELECT * FROM Inventory WHERE Quantity > ???,??

    which throws SQL Server for fits. I know that the server/database has its defined code page and locale, but that is for strings. Will SQL Server interpret numbers using the active (or per-login specified) locale, or must all numeric values be specified with Latin numeral digits?
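
    A hedged note on the literal side of the question: the T-SQL parser only accepts numeric constants written with the ASCII digits 0-9 and a period as the decimal separator, regardless of server locale or collation, which is why the usual advice is to send the value as a typed parameter rather than formatting it per the client culture. A sketch of the literal form the parser accepts, using the Inventory table from the question:

        -- T-SQL numeric literals always use ASCII digits and '.' as the decimal point,
        -- independent of the database locale; culture-specific formatting belongs on the client.
        SELECT * FROM Inventory WHERE Quantity > 123.45;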

    Read the article

  • How do you think while formulating Sql Queries. Is it an experience or a concept ?

    - by Shantanu Gupta
    I have been working on SQL Server and front-end coding and have usually faced problems formulating queries. I understand most of the concepts of SQL that are needed in formulating queries, but whenever some new functionality comes into the picture that could be done with a SQL query, I usually fail at resolving it. I am very comfortable with SELECT queries using joins and all such things, but when it comes to DML operations I usually fail. For every query that I have never written before, I usually find myself uncomfortable while creating it. Whenever I go for an interview I usually face this problem. Is there some concept behind approaching the formulation of SQL queries?

    For example, I need to create a SQL query such that: a table contains a single column having duplicate records, and I need to remove the duplicate records. I know I could find the solution to this query very easily by Googling, but I want to know how everyone comes to the desired result. Is it something like "practice makes perfect", i.e. once you have done it, next time you will be able to formulate it, or is there some logic or concept behind it? I could have got my answer to the above problem simply by posting it on Stack Overflow and I would have had an answer within 5 to 10 minutes, but I want to know the reasoning.

    How do you work on any new kind of query? Is it mainly a matter of experience or an application of concepts? Whenever I learn some new thing in coding I try to utilize it wherever I can. But here the scenario seems to be different, because maybe I am lagging in some concepts.
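
    For reference, a minimal sketch of one common answer to the example task above (removing duplicates from a single-column table), assuming a hypothetical table dbo.Names with a column Name; the reasoning step is to number the copies of each value and delete everything past the first.

        -- Number duplicate values and delete all but one copy of each (SQL Server 2005+).
        WITH Numbered AS (
            SELECT Name,
                   ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Name) AS rn
            FROM dbo.Names
        )
        DELETE FROM Numbered
        WHERE rn > 1;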

    Read the article

  • which lightweight SQL Server type could I use on my Dev machine for a C# VS2010 project?

    - by Greg
    Hi, which lightweight SQL Server type could I use on my dev machine for a C# VS2010 project (e.g. SQL Server Express, SQL Server CE, the full version, etc.)? I'm running in a VMware Fusion instance on my MacBook and just want something to develop against for a C# VS2010 project. I'm planning on having a simple database (not many tables) but will use Entity Framework. I haven't used SQL Server before, so a quick pointer on the best database admin interface/app to use for the version you recommend (e.g. to create databases, tables, etc.) would also help.

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Stephen Jennings
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases automatically. I create a new maintenance plan and add a "Back Up Database Task", select all databases, and choose a path to back up to. When I save and try to execute this plan, I get the following error message:

        ===================================
        Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
        ===================================
        Job 'Backup.Subplan_1' failed. (SqlManagerUI)
        ------------------------------
        Program Location:
        at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions()

    I've checked the maintenance plan log, the agent log, and just about every log file I can find and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed.

    While searching the web for clues, the only solution I've run across so far suggests running sp_configure 'allow_update', 0. I tried this but allow_update was already set to 0 and it did not fix the problem. The Windows server and SQL Server have all updates applied to them. Thanks for any suggestions!
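
    Not an answer to the maintenance plan failure itself, but as a hedged sanity check: the Back Up Database Task ultimately issues a plain BACKUP DATABASE statement, so running the equivalent T-SQL as a SQL Server Agent job step (the name and path below are placeholders) can help confirm whether the Agent account can actually write to the backup folder.

        -- Manual equivalent of the task for a single database; replace the name and path.
        BACKUP DATABASE [MyDatabase]
        TO DISK = N'D:\Backups\MyDatabase.bak'
        WITH INIT, CHECKSUM, STATS = 10;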

    Read the article

  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. After reading the log file, the error message is "The log scan number passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted. I followed the steps below:

    Copy the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates to D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA.

    Note: the same error occurs when I copy all the DB files (Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog).

    When I run my MSSQLSERVER instance a different problem occurs. My server has only C and D drives; I don't have an E drive. How can I override the error paths below?

    Error log:

        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.

    Read the article

  • Why would the SQL 2008 "Generate scripts..." utility generate an invalid SQL script?

    - by Deane
    I have a SQL2008 database that needs to be restored to a SQL2005 instance. I have gone through the "Generate scripts..." wizard, set it for SQL2005 compatibility, and generated a 62MB SQL script. When I run it on the SQL2005 instance, it throws all kinds of errors, and some of them are really strange in that they describe an invalid database. FK constraints are wrong. It's trying to create FKs on columns that don't exist. It's trying to insert records that produce duplicate key errors. It's trying to create the same objects twice. Any idea how this could happen? This SQL script was generated by SQL Server Management Studio just minutes before I tried to restore it, and was not modified. Why would this generate an invalid SQL file? Doesn't it just describe the SQL2008 database, which is presumably valid since we're using it? In particular, the duplicate key insertion errors mystify me. If there's a key constraint in the SQL script, then there must be the same thing in the SQL2008 table. So how could we get rows in there that violate that key constraint?

    Read the article
