Search Results

Search found 9396 results on 376 pages for 'stored procedures'.


  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Its database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer: 1) ORM (if so, which one?) 2) Linq2Sql 3) Stored Procedures 4) Parameterized Queries I really need a solution that will be dynamic enough (both fast and easy) where I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORM (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • Casting an odd smallint time to datetime format

    - by c6400sc
    Hello everyone, I'm working with a db (SQL Server 2008), and have an interesting issue with times stored in the db. The DBA who originally set it up was crafty and stored scheduled times as smallints in 12-hour form -- 6:00 AM would be represented as 600. I've figured out how to split them into hours and minutes like this: select floor(time/100) as hr, right(time, 2) as min from table; What I want to do is compare these scheduled times to actual times, which are stored in the proper datetime format. Ideally, I would do this with two datetime fields and use datediff() between them, but this would require converting the smallint time into a datetime, which I can't figure out. Does anyone have suggestions on how to do this? Thanks in advance.
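    One hedged way to do the comparison (a sketch, not tested against the poster's schema; @sched and @actual are hypothetical variables, and it assumes each smallint value is unambiguous within the day):

    ```sql
    -- Rebuild the scheduled time as a datetime on the same day as the actual
    -- time, then compare. 600 -> (600 / 100) * 60 + (600 % 100) = 360 minutes.
    DECLARE @sched smallint
    DECLARE @actual datetime
    SELECT @sched = 600, @actual = '2010-05-01 06:15:00'

    DECLARE @schedTime datetime
    SET @schedTime = DATEADD(minute,
                             (@sched / 100) * 60 + (@sched % 100),
                             DATEADD(day, DATEDIFF(day, 0, @actual), 0)) -- midnight of @actual's day

    SELECT DATEDIFF(minute, @schedTime, @actual) AS minutes_late
    -- Caveat: the post says times are stored in 12-hour form, so an AM/PM flag
    -- (if one exists elsewhere) would need to add 720 minutes for PM values.
    ```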

    Read the article

  • Why does my CLR function keep disappearing?

    - by user208080
    Hi there. I am a rookie to SQL and here is my question. I have some CLR SQL functions and procedures. When I deploy the first one, everything is fine. But after the second one is deployed, the first one disappears. Can anyone help me out? Thanks a lot. Actually, I simply create a new SQL project in VS, add a new function or stored procedure, click Deploy, and I can see the new function in my SQL instance. Then I close that project, open a new one, and repeat the above steps; the second function is there in my instance, but the first one has disappeared (or been replaced) and is no longer available for use. Thank you for your reply. All these CLR functions and procedures are in the same instance of the database.

    Read the article

  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't a problem since they can be overridden, but the tables are. I want to sync and alter their structures on the production server using mysqldump (or any other way you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove any existing ones. Also, I am using SQLyog. Is there any way to do this?
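    For the dump side of this, mysqldump can emit structure only (a sketch; the database and credential names are placeholders, and since the output still contains CREATE statements it is better fed into a schema-diff step than applied blindly to production):

    ```
    mysqldump --no-data --routines --skip-triggers -u user -p sandbox_db > schema.sql
    ```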

    Read the article

  • What's the best way to return something like a collection of `std::auto_ptr`s in C++03?

    - by Billy ONeal
    std::auto_ptr is not allowed to be stored in an STL container, such as std::vector. However, occasionally there are cases where I need to return a collection of polymorphic objects, and therefore I can't return a vector of objects (due to the slicing problem). I can use std::tr1::shared_ptr and stick those in the vector, but then I have to pay the high price of maintaining separate reference counts, and the object that owns the actual memory (the container) no longer logically "owns" the objects, because they can be copied out of it without regard to ownership. C++0x offers a perfect solution to this problem in the form of std::vector<std::unique_ptr<T>>, but I don't have access to C++0x. Some other notes:
    - I don't have access to C++0x, but I do have TR1 available.
    - I would like to avoid use of Boost (though it is available if there is no other option).
    - I am aware of the boost::ptr_container containers (e.g. boost::ptr_vector), but I would like to avoid them because they break the debugger (innards are stored in void*s, which means it's difficult to view the objects actually stored inside the container).

    Read the article

  • How can I ensure that nested transactions are committed independently of each other?

    - by Caldera
    If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others? In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)? I have a stored procedure defined something like this in SQL Server 2000:

    ```sql
    CREATE PROCEDURE toplevel_proc ..
    AS
    BEGIN
        ...
        while @row_count <= @max_rows
        begin
            select @parameter ... where rownum = @row_count
            exec nested_proc @parameter
            select @row_count = @row_count + 1
        end
    END
    ```

    Read the article

  • Is there a tool that will show diagrams of my SQL database in real-time?

    - by Rising Star
    I've created some diagrams of SQL tables using the "Reverse Engineer" feature of Microsoft Office Visio. I like being able to visualize my relational databases in this manner. However, what I get is just a static document that I can print or e-mail to colleagues. Earlier this year, I saw in a demo that Visual Studio 2010 has a new feature called the "Architecture Explorer", which allows developers to view relationships among .NET classes on the fly. It has many features for filtering down to the data the developer is interested in. It would be really awesome if I could visually browse my tables and stored procedures and see what is related to what by primary key, foreign key, and references in stored procedures. I realize that I'm talking about two entirely different technologies and it's not a perfect analogy, but is there some similar tool that would allow me to visualize tables in my SQL database?

    Read the article

  • Which route should I take for web hosting?

    - by Undermine2k
    Hi, I am setting up a small website, sort of like an online portfolio. I made the mistake of signing up for shared web hosting before asking if they supported stored procedures, which took me half the day to figure out they didn't. Basically I'm looking for a host that offers PHP 5.4+ / MySQL 5.5+ with support for triggers and stored procedures, and if possible phpMyAdmin 3.5.1. I also have a domain name I already registered and would like to use. What is my best option: to look for a hosting provider which offers this functionality, or to set up a VPS?

    Read the article

  • More than 100,000 articles!

    - by developerit
    In one month, we already got more than 100,000 articles, and we continue to crawl! We plan on hitting 250,000 total articles next month. Due to the large amount of data we are gathering, we are planning on updating our SQL stored procedures to improve performance. We may be migrating to SQL Server 2008 Enterprise, as we are currently running on SQL Server 2005 Express Edition… We are at 400 MB of data, getting closer and closer to the 2 GB limit. Stay tuned for more info and browse daily fresh articles about web development.

    Read the article

  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
    When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples here are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items. The build server is often heavily occupied as it is, so you don't want to have it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build, there was no easy way to do this. But in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

    I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed into their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

    I won't go into details on these parameters; the issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queuing a new release build, I got this error:

    TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

    <Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings. The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml, ...). In TFS 2008, many of these settings were moved into the database; still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary that is the underlying object where these settings are stored.
    Here is an example:

    ```xml
    <Dictionary x:TypeArguments="x:String, x:Object"
                xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib"
                xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
      <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
        <mtbwa:BuildSettings.PlatformConfigurations>
          <mtbwa:PlatformConfigurationList Capacity="4">
            <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
          </mtbwa:PlatformConfigurationList>
        </mtbwa:BuildSettings.PlatformConfigurations>
      </mtbwa:BuildSettings>
      <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
      <x:Boolean x:Key="DisableTests">True</x:Boolean>
      <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
      <x:Int32 x:Key="Major">2</x:Int32>
      <x:Int32 x:Key="Minor">3</x:Int32>
    </Dictionary>
    ```

    Here we can see that only the non-default values are persisted into the database. So, the problem in my case was that I removed one of the parameters from the build process template, but the parameter and its value still existed in the build definition database. The solution to the problem is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them. After refreshing the build definition and saving it, the build was running successfully again.
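    If you want to see this for yourself, a query along these lines shows the persisted dictionary (a sketch against the project collection database; ProcessParameters is the column named in the post, while DefinitionName is a guessed column name — and querying the TFS operational store directly is of course unsupported):

    ```sql
    -- Inspect the serialized process parameters for one build definition.
    -- DefinitionName is an assumption; adjust to the actual schema.
    SELECT DefinitionName, ProcessParameters
    FROM tbl_BuildDefinition
    WHERE DefinitionName = 'MyReleaseBuild';
    ```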

    Read the article

  • Static vs Singleton in C# (Difference between Singleton and Static)

    - by Jalpesh P. Vadgama
    Recently I came across a question about the difference between Static and Singleton classes, so I thought it would be a good idea to share a blog post about it.

    Difference between Static and Singleton classes: A Singleton class allows creating only a single instance of a particular class, and that instance can be treated as a normal object. You can pass that object to a method as a parameter, or you can call the class's methods on that Singleton object. A static class, by contrast, can have only static methods, and you cannot pass a static class as a parameter. We can implement interfaces with a Singleton class, while we cannot implement interfaces with static classes. We can clone the object of a Singleton class; we cannot clone a static class. Singleton objects are stored on the heap, while static classes are stored on the stack.

    More at my personal blog: dotnetjalps.com

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia:
    - Command, Command management. This system is usable to build a fully undo/redo-aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can make a class undo/redo aware, then set properties and use its contents without using commands at all. The Commands namespace is the namespace to start with. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
    - Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class and which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.
    - Heaps. Heaps are data-structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one.
    - Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.
    - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it is already stored in.
    - PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5-specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data/settings stored in XML or another format, and want to create an editable form of it without creating many editors.
    - IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo-aware code can use them out of the box.
    - EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.
    - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one (as with the default Dictionary) and is also merge-aware, so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer-based system, and some utility extension methods for the defined data-structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!

    Read the article

  • The penultimate audit trigger framework

    - by Piotr Rodak
    So, it’s time to see what I came up with after some time of playing with COLUMNS_UPDATED() and bitmasks. The first part of this miniseries describes the mechanics of encoding which columns are updated within a DML operation. The task I was faced with was to prepare an audit framework that would be fairly easy to use. The audited tables were to be the ones directly modified by user applications, not the ones heavily used by batch or ETL processes. The framework consists of several tables and procedures...(read more)
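    As a taste of the mechanics the first part covers, here is a minimal sketch of reading COLUMNS_UPDATED() inside a trigger (the table and column ordinals are hypothetical, not taken from the framework itself):

    ```sql
    CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
    AFTER UPDATE
    AS
    BEGIN
        -- COLUMNS_UPDATED() returns a varbinary bitmask: bit (n-1) is set when
        -- the column with ordinal position n was assigned in the UPDATE.
        IF (COLUMNS_UPDATED() & 2) > 0   -- 2 = bit for the column with ordinal 2
            PRINT 'Column 2 was included in the UPDATE statement';
    END
    ```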

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006
    SQL SERVER – Cursor to Kill All Process in Database
    I indeed wrote this cursor, and when I look back, I wonder how naive I was to write it. The reason for writing this cursor was to free up my database from any existing connection so I could do database operations. This worked fine, but there could be a potentially big issue if an important transaction was killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here.

    2007
    Rules of Third Normal Form and Normalization Advantage – 3NF
    The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table.
    Correct Syntax for Stored Procedure SP
    Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry: some write code after the outermost BEGIN…END, and some write code after the GO statement. In this brief blog post, I have attempted to explain the same.

    2008
    Switch Between Result Pane and Query Pane – SQL Shortcut
    Many times when I am writing a query I have to scroll the results displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. For the few developers who are crazy about keyboard shortcuts, F6 is the key which can be used to switch between the query pane and the tabs of the result pane.
    Interesting Observation – Use of Index and Execution Plan
    Query optimization is a complex game and it has its own rules. From the example in the article, we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data, as it provides optimal performance.

    2009
    Interesting Observation – TOP 100 PERCENT and ORDER BY
    If you pull up any application or system where more than 100 SQL Server views are created, I am very confident that at one or two places you will notice a view where the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge its presence either. In this article we have taken three perfect examples and demonstrated which clause we should use when.
    Comma Separated Values (CSV) from Table Column
    A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject.
    Azure Start Guide – Step by Step Installation Guide
    Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not old. They are still valid, and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to using Azure!
    Size of Index Table for Each Index – Solution
    Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on any particular table — a query that returns an additional column, beyond the ones in the previously listed query, containing the size of the index. This article presents two of the best solutions to the puzzle.

    2010
    Well, this week in 2010 was the week of puzzles, as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky, but they surely bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will surely help you enhance your experience with T-SQL.
    SQL SERVER – Challenge – Puzzle – Usage of FAST Hint
    SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal
    SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists

    2011
    DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012
    Have you ever faced a situation where something does not work, and when you try to fix it, you enjoy fixing it and start to appreciate the breaking changes? Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over, I suggest you read the original story – you will love it!
    Get Directory Structure using Extended Stored Procedure xp_dirtree
    Here is a question for you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example, where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
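    For reference, the CSV trick mentioned under 2009 typically looks something like this (a sketch with hypothetical table/column names, not the article's exact code):

    ```sql
    -- Concatenate one column into a comma separated list via FOR XML PATH.
    SELECT STUFF((SELECT ',' + Name
                  FROM dbo.Employees
                  ORDER BY Name
                  FOR XML PATH('')), 1, 1, '') AS csv;
    ```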

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment

    - by pinaldave
    Filestream is a very interesting feature, and FileTable, an enhancement built on Filestream, is equally exciting. Today in this post, we will learn how to set up the FileTable environment in SQL Server. The major advantage of FileTable is Windows API compatibility for file data stored within a SQL Server database. In simpler words, FileTables remove a barrier so that SQL Server can be used for the storage and management of unstructured data that currently reside as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see these data as files in the file system. This way, you can use SQL Server to access the data using T-SQL enhancements, and Windows can access the files using its applications. So for the first step, you will need to enable the Filestream feature at the instance level and create a Filestream-enabled database in order to use FileTables.

    ```sql
    -- Enable Filestream
    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE
    GO
    -- Create Database
    CREATE DATABASE FileTableDB
    ON PRIMARY
        (Name = FileTableDB, FILENAME = 'D:\FileTable\FTDB.mdf'),
    FILEGROUP FTFG CONTAINS FILESTREAM
        (NAME = FileTableFS, FILENAME = 'D:\FileTable\FS')
    LOG ON
        (Name = FileTableDBLog, FILENAME = 'D:\FileTable\FTDBLog.ldf')
    WITH FILESTREAM
        (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableDB');
    GO
    ```

    Now, you can run the following code to figure out if the Filestream options are enabled at the database level.

    ```sql
    -- Check the Filestream Options
    SELECT DB_NAME(database_id),
           non_transacted_access,
           non_transacted_access_desc
    FROM sys.database_filestream_options;
    GO
    ```

    As the resultset of the above query shows, the file-level access for FileTableDB is set to 2 (filestream enabled). Now let us create the FileTable in the newly created database.

    ```sql
    -- Create FileTable Table
    USE FileTableDB
    GO
    CREATE TABLE FileTableTb AS FileTable
    WITH
        (FileTable_Directory = 'FileTableTb_Dir');
    GO
    ```

    Now you can select data using a regular SELECT statement:

    ```sql
    SELECT * FROM FileTableTb
    GO
    ```

    It will return all the important columns which are related to the file, providing details like file size, archived, file type etc. You can also see the FileTable in SQL Server Management Studio: go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here, you will see a new folder which says "FileTables". When expanded, it shows the name of the newly created FileTableTb. You can right click on the newly created table and click on "Explore FileTable Directory". This will open up the folder where the FileTable data is stored. On my local machine, it opens the following folder: \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir

    In tomorrow's blog post, as Part 2, we will go over two methods of inserting data into this FileTable. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
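    Until Part 2 arrives, here is a minimal sketch of one T-SQL way to add a file row (name and file_stream are part of the fixed FileTable schema; the file name and content are placeholders):

    ```sql
    USE FileTableDB
    GO
    -- Insert a small text file directly through T-SQL (a sketch; the original
    -- post's Part 2 covers the supported insert methods in detail).
    INSERT INTO FileTableTb (name, file_stream)
    VALUES (N'Hello.txt', CAST('Hello FileTable' AS varbinary(max)));
    GO
    ```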

    Read the article

  • Storing large data in HTTP Session (Java Application)

    - by Umesh Awasthi
    I am asking this question in continuation of http-session-or-database-approach. I am planning to follow this approach:
    - When the user adds a product to the cart, create a Cart model, add items to the cart and save it to the DB.
    - Convert the Cart model to cart data and save it to the HTTP session.
    - On any update/edit, update the underlying cart in the DB and update the data snapshot in the session.
    - When the user clicks on the view cart page, just pick the cart data from the session and display it to the customer.
    I have the following queries regarding the HTTP session: How good is it to store large data (a shopping cart) in the session? How scalable can this approach be (with respect to the session)? Won't my application end up eating and demanding a lot of memory? Is my approach fine, or do I need to consider other points while designing this? Though we can control what cart data is stored in the session, we still need to keep certain information from the cart data in the session.

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I asked three questions on my blog, and I got a very good response to them. I am planning to write a summary post for each of the three questions next week. Before I write the summary posts and give credit to all the valid answers, I wanted to bring the questions to the notice of all of you this week.

    Why SELECT * throws an error but SELECT COUNT(*) does not
    This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run both pieces of code and read the explanation, it totally makes sense why SQL Server behaves the way it does. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares very interesting detail.

    Statistics are not Updated but are Created Once
    This puzzle has multiple right answers, and I am glad to see many of the correct answers in the comments on the blog post. Statistics are very important, and they really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they do not need to be updated since they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question we have a scenario where, even though auto create and auto update statistics are ON, statistics are not updated. There are multiple solutions, but what would be your solution in this case?

    When to use Function and When to use Stored Procedure
    This is a rather open-ended question – there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use stored procedures and when to use functions. I want to acknowledge that they can be used interchangeably, but there are a few reasons when one should not do that, and a few reasons when one is better than the other. Let us discuss this here. Your opinion matters.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
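    For reference, the first puzzle boils down to this pair of statements (a sketch; run them in SSMS to see the behavior described):

    ```sql
    SELECT *;        -- fails: Msg 263, Must specify table to select from.
    SELECT COUNT(*); -- succeeds and returns 1
    ```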

    Read the article

  • First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco

    - by stephen.schleifer
    First-Global-Teach for Oracle Imaging and Process Management 11g: Administration: San Francisco | June 23-25. This course enables participants to use Oracle Imaging and Process Management (I/PM) 11g to access, track, and annotate documents. In addition, they get an overview of the product architecture of Oracle I/PM running on Oracle WebLogic Server. The course also delves into administration tasks such as security permissions, configuration tasks such as creating BPEL connections, and procedures for creating applications, searches, and input mappings. Customers and partners can register by looking up the course (#D61575GC10) on http://education.oracle.com

    Read the article

  • Dependency analysis from C# code through to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive and does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
    - determining the effects of database changes throughout the system
    - seeing hot spots in the database schema
    - finding dead stored procedures/tables/etc.
    - understanding the existing code base
    Does anyone know of any such tools?
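    Not a whole-system answer, but the database half of "determining the effects of database changes" can be scripted on SQL Server 2008+ (a sketch; the table name is a placeholder):

    ```sql
    -- List the objects (procedures, views, functions, ...) that reference a table.
    SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
           OBJECT_NAME(referencing_id)        AS referencing_object
    FROM sys.sql_expression_dependencies
    WHERE referenced_entity_name = 'MyTable';
    ```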

    Read the article

  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find ourselves working either for a firm which already has good procedures (TDD, automated testing, proper agile development, etc.) or for a firm which doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like, and what the differences would be between a smaller project and a long-term project with many team members.

    Read the article

  • Where can I find the application executables in the filesystem?

    - by richzilla
    Where are executables for programs stored in Ubuntu? An application (Komodo Edit) is asking me to identify an application to be used as a web browser. I've become used to just entering the application name as a command in situations such as these, but this scenario got me thinking. I know in Windows it would just be the relevant application folder in the 'Program Files' folder, but I'm assuming things are a bit different on Linux? I thought somewhere like /bin would be logical, but that appears to hold standard Linux/Unix utilities. Where would I find the binary executables for applications installed on my system?
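    For what it's worth, a shell one-liner can answer this per application (assuming the program is on your $PATH; firefox is just an example name):

    ```
    which firefox    # prints the executable's location, e.g. /usr/bin/firefox
    type -a firefox  # also lists aliases and every match on $PATH
    ```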

    Read the article

  • SQL Server data platform upgrade – why upgrade, and how best can you reduce pre- and post-upgrade problems?

    - by ssqa.net
    A SQL Server upgrade, whether of database(s), instance(s), or both, must follow best-practice processes and procedures in order to reduce any problems that may occur even after the platform is upgraded. The success of any project relies upon simple methods of implementation and a process that reduces the complexity of testing, to ensure a successful outcome. Also, this has been a popular topic that .... read more from here ......(read more)

    Read the article

  • T-SQL in SQL Azure

    - by kaleidoscope
    The following summarizes the Transact-SQL support provided by SQL Azure Database as presented at PDC 2009:

    Supported Transact-SQL features:
    - Constants
    - Constraints
    - Cursors
    - Index management and rebuilding indexes
    - Local temporary tables
    - Reserved keywords
    - Stored procedures
    - Statistics management
    - Transactions
    - Triggers
    - Tables, joins, and table variables
    - Transact-SQL language elements such as Create/drop databases, Create/alter/drop tables, Create/alter/drop users and logins
    - User-defined functions
    - Views, including the sys.synonyms view

    Unsupported Transact-SQL features:
    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    Amit, S

    Read the article

  • Security of keyctl

    - by ftiaronsem
    Hello everyone. Today I set up an ecryptfs directory which is automatically mounted at login via PAM. To do so I followed the guide in the ecryptfs README. To sum up, I now have a key stored in the user session keyring. The first thing I do not understand is why this key only shows up via keyctl show and not in the GNOME GUI "Passwords and Encryption Keys". The second thing I am curious about is the security. I assume that my passphrase is somehow stored on the hard disk. But how exactly, and how secure is this? Thanks in advance.

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes, and there are a few I like and a few I suggest you avoid.
    - Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server, and then you've completely lost any context around it. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.
    - Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is, and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server, but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)
    - Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets; that is harder to enforce unless you use some odd characters in it.
    - Consider naming your linked server based on its function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change, and people still know what it is just by looking at it.
    - Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change, and it also makes it easier to move individual databases to new servers.
    - Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.
    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?
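    To make the "name it for its function, point it at DNS" advice concrete, a linked server definition might look something like this (a sketch; the server and alias names are hypothetical):

    ```sql
    -- A linked server named for its role, pointing at a DNS alias rather than an IP.
    EXEC sp_addlinkedserver
        @server     = N'DW',                   -- role-based name used in code
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = N'dw.example.internal';  -- DNS entry; repoint it without renaming
    ```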

    Read the article
