Search Results

Search found 102772 results on 4111 pages for 'sql server 2008'.


  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring transactional replication using the push model. The scenario is: The SQL Servers: SQL01 (publisher) and SQL02 (subscriber), both running SQL 2000 SP4. Both servers are standalone (i.e. not domain members). Both servers have their FQDN and NETBIOS names in their HOSTS files. I've managed to configure SQL01 to publish my database and configured a push subscription for SQL02 using the Push New Subscription wizard, and set the Distribution Agent to update the subscription continuously. On the Push Subscription wizard "Initialise Subscription" page I selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder I can see my publication (blue book with arrow). The publication shows the push subscription and its status is Pending. If I look in the C:\Program Files\Microsoft SQL Server\Mssql\Repldata folder I see a number of snapshot files for each table, e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscriber server?
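
    For reference, a minimal T-SQL sketch of how you might check the subscription and kick off the Snapshot Agent by hand on a SQL Server 2000 publisher. The publication and job names below are placeholders, not values taken from the question.

        -- Run at the publisher (SQL01), in the published database.
        -- The status column reports 0 = inactive, 1 = subscribed, 2 = active.
        EXEC sp_helpsubscription @publication = N'MyPublication';

        -- If the Snapshot Agent never ran, start its SQL Server Agent job manually.
        -- Copy the real job name from msdb.dbo.sysjobs; this one is illustrative.
        EXEC msdb.dbo.sp_start_job @job_name = N'SQL01-MyDB-MyPublication-1';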

    Read the article

  • Can only connect to sql server express 2012 via named pipes

    - by YetAnotherDeveloper
    I have SQL Server Express 2012 installed on Windows 2008. Locally everything works just fine; I can connect via TCP/IP and named pipes. Remotely I can connect with SSMS only using named pipes. I have tried disabling the firewall on both sides to eliminate blocked traffic. I have toggled the TCP/IP setting off and back on (I read somewhere that someone got it working just by flipping it off and back on). I have double- and triple-checked all the settings that I'm aware of and everything seems to be correct: TCP is enabled; the TCP port is set to 1433 and the UDP port to 1434; the server has a static IP; the start-up log says "Server is listening on [ 'any' 1433]"; the firewall rules are in place. Any suggestions on things that I can look into? I have really just run out of ideas.
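
    One quick check that can help here: ask the remote connection itself which protocol it ended up on. A small sketch (SQL Server 2005 and later), run from the remote SSMS session:

        -- net_transport shows 'TCP' or 'Named pipe' for the current session.
        SELECT session_id, net_transport, local_tcp_port, client_net_address
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;

    If it still reports 'Named pipe' even when you connect as tcp:servername, the TCP endpoint and port 1433 are the places to keep digging.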

    Read the article

  • How do I find a domain server on my client right away?

    - by SantaC
    This is a problem that I noticed. I have a Windows 2008 R2 server and have joined my Windows 7 client to it. Now when I try to reach the share that I created on the Win2008 server, it does not show up under the Network tab in Windows 7. Instead, the only way I found it was to manually type \\myserverlocation into the Run prompt. Is there any way to find my share right away without doing this?

    Read the article

  • How do I solve an unhandled exception error when using Visual C++ 2008?

    - by make
    Hi, could someone please help me solve an unhandled exception error when using Visual C++ 2008? The error is displayed as follows: Unhandled exception at 0x00411690 in time.exe: 0xC0000005: Access violation reading location 0x00000008. When I used Visual C++ 6 in the past there were no errors and the program ran fine, but now that I use Visual C++ 2008 I get this unhandled exception error. Here is the program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #ifdef _WIN32
        // #include <winsock.h>
        #include <windows.h>
        #include "stdint.h"     // typedef __int64 int64_t  - define it from MSVC's internal type
                                // typedef unsigned __int32 uint32_t
        #else
        #include <stdint.h>     // Use the C99 official header
        #include <sys/time.h>
        #include <unistd.h>
        #endif

        #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
        #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64
        #else
        #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL
        #endif

        struct timezone {
            int tz_minuteswest;     /* minutes W of Greenwich */
            int tz_dsttime;         /* type of dst correction */
        };

        #define TEST
        #ifdef TEST
        uint32_t stampstart();
        uint32_t stampstop(uint32_t start);

        int main()
        {
            uint32_t start, stop;

            start = stampstart();

            /* Your code goes here */

            stop = stampstop(start);

            return 0;
        }
        #endif

        int gettimeofday(struct timeval *tv, struct timezone *tz)
        {
            FILETIME ft;
            unsigned __int64 tmpres = 0;
            static int tzflag = 0;

            if (NULL != tv)
            {
                GetSystemTimeAsFileTime(&ft);

                tmpres |= ft.dwHighDateTime;
                tmpres <<= 32;
                tmpres |= ft.dwLowDateTime;

                tmpres /= 10;   /* convert into microseconds */
                /* converting file time to unix epoch */
                tmpres -= DELTA_EPOCH_IN_MICROSECS;
                tv->tv_sec = (long)(tmpres / 1000000UL);
                tv->tv_usec = (long)(tmpres % 1000000UL);
            }

            if (NULL != tz)
            {
                if (!tzflag)
                {
                    _tzset();
                    tzflag++;
                }
                tz->tz_minuteswest = _timezone / 60;
                tz->tz_dsttime = _daylight;
            }

            return 0;
        }

        uint32_t stampstart()
        {
            struct timeval  tv;
            struct timezone tz;
            struct tm      *tm;
            uint32_t        start;

            gettimeofday(&tv, &tz);
            tm = localtime(&tv.tv_sec);

            printf("TIMESTAMP-START\t %d:%02d:%02d:%d (~%d ms)\n",
                   tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec,
                   tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 +
                   tm->tm_sec * 1000 + tv.tv_usec / 1000);

            start = tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 +
                    tm->tm_sec * 1000 + tv.tv_usec / 1000;

            return (start);
        }

        uint32_t stampstop(uint32_t start)
        {
            struct timeval  tv;
            struct timezone tz;
            struct tm      *tm;
            uint32_t        stop;

            gettimeofday(&tv, &tz);
            tm = localtime(&tv.tv_sec);

            stop = tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 +
                   tm->tm_sec * 1000 + tv.tv_usec / 1000;

            printf("TIMESTAMP-END\t %d:%02d:%02d:%d (~%d ms)\n",
                   tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec,
                   tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 +
                   tm->tm_sec * 1000 + tv.tv_usec / 1000);

            printf("ELAPSED\t %d ms\n", stop - start);

            return (stop);
        }

    Thanks for your replies.

    Read the article

  • The best, in the West

    - by Fatherjack
    As many of you know, I run the SQL South West user group and we are currently in full flow preparing to stage the UK's second SQL Saturday. The SQL Saturday spotlight is going to fall on Exeter in March 2013. We have full-day sessions on Friday 8th with some truly amazing speakers giving their insights and experience in some vital areas of working with SQL Server: Dave Ballantyne and Dave Morrison – TSQL and internals; Christian Bolton and Gavin Payne – Mission critical data platforms on Windows Server 2012; Denny Cherry – SQL Server Security; André Kamman – Powershell 3.0 for SQL Server Administrators and Developers; Mladen Prajdic – From SQL Traces to Extended Events: the next big switch. A number of people have claimed that the choice is too good and they'd have trouble selecting just one session to attend. I can see how this is a problem but hope that they make their minds up quickly. The venue is a bespoke conference suite in the centre of Exeter but has limited capacity, so we are working on a first-come, first-served basis. All the session details and booking and travel information can be found on our user group website. The Saturday will be a day of free, 50-minute sessions on all aspects of SQL Server from almost 30 different speakers. If you would like to submit a session then get a move on, as submissions close on 8th January 2013 (that's less than a month away). We are really interested in getting new speakers started, so we have a lightning talk session where you can come along and give a small talk (anywhere from 5 to 15 minutes long) about anything connected with SQL Server, as a way to introduce you to what it's like to be a speaker at an event. Details on registering to attend and to submit a session (lightning talks need to be submitted too, please) can be found on our SQL Saturday pages. This is going to be the biggest and best bespoke SQL Server conference to ever take place this far South West in the UK, and we aim to give everyone who comes to either day a real experience of the South West, so we have a few surprises for you on the day.

    Read the article

  • Stairway to SQL Server Security: Level 1, Overview of SQL Server Security

    The ubiquity of databases and the potentially valuable information stored in them make them attractive targets for people who want to steal data or harm their owners by tampering with it. Making sure that your data is secure is a critical part of configuring SQL Server and developing applications that use it to store data.
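
    As a flavour of what the Stairway covers, here is a minimal sketch of the basic login/user/permission chain. The names (AppLogin, AppUser, SalesDb, dbo.Orders) are placeholders, not objects from the article.

        -- A login authenticates at the instance level...
        CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd';
        GO
        -- ...and a database user maps that login into a specific database.
        USE SalesDb;
        GO
        CREATE USER AppUser FOR LOGIN AppLogin;
        -- Grant only what the application actually needs.
        GRANT SELECT ON dbo.Orders TO AppUser;
        GO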

    Read the article

  • SQL Server Max TinyInt Value

    - by Derek Dieter
    The maximum value for a tinyint in SQL Server is 255 (the range is 0 through 255) and its storage size is 1 byte. For comparison, the other integer types are: bigint: -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807 (8 bytes); int: -2,147,483,648 through 2,147,483,647 (4 bytes); smallint: -32,768 through 32,767 (2 bytes).
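
    A quick way to see the boundary for yourself (the inline DECLARE initialiser needs SQL Server 2008 or later):

        DECLARE @t tinyint = 255;   -- 255 is the maximum; the minimum is 0
        SELECT @t AS MaxTinyInt;

        -- Pushing past the range raises "Arithmetic overflow error for data type tinyint, value = 256."
        SET @t = @t + 1;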

    Read the article

  • SQL Server Central Webinar #17: Monitoring your SQL Server Instances

    Join SQL Server MVP Brad McGehee to learn why it is so important to monitor your SQL Server instances and what you should consider when starting out. This webinar will also show you how you can use Red Gate's SQL Monitor for SQL Server monitoring, and will include an overview of the new features released in v3.0, including the ability to create your own custom metric definitions and support for different access permissions.
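
    Broadly speaking, a custom metric boils down to a T-SQL query that returns a single numeric value, sampled on a schedule. A hedged example of the kind of query you might register, counting currently blocked requests:

        SELECT COUNT(*) AS BlockedRequests
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;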

    Read the article

  • Stairway to SQL PowerShell Level 7: SQL Server PowerShell and the Basics of SMO

    In this level we begin our journey into the SQL Server SMO space. SMO stands for Shared Management Objects and is a library written in .NET for use with SQL Server. The SMO library is available when you install the SQL Server management tools, or it can be installed separately.

    Read the article

  • TRY CATCH with Linked Server in SQL Server 2005 Not Working

    - by Robert Stanley
    Hello, I am trying to catch a SQL error raised when I execute a stored procedure on a linked server. Both servers are running SQL Server 2005. To demonstrate the issue I have created a stored procedure on the linked server called RaiseError that executes the following code: RAISERROR('An error', 16, 1); If I execute the stored procedure directly on the linked server using the following code, I get a result set containing 'An error', 16 as expected (i.e. the code enters the CATCH block):

        BEGIN TRY
            EXEC [dbo].[RaiseError];
        END TRY
        BEGIN CATCH
            DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int;
            SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY();
            SELECT @ErrMsg, @ErrSeverity;
        END CATCH

    If I run the following code on my local server to execute the stored procedure on the linked server, SSMS instead gives me the message 'Query completed with errors' and 'Msg 50000, Level 16, State 1, Procedure RaiseError, Line 13: An error':

        BEGIN TRY
            EXEC [Server].[Catalog].[dbo].RaiseError;
        END TRY
        BEGIN CATCH
            DECLARE @SPErrMsg nvarchar(4000), @SPErrSeverity int;
            SELECT @SPErrMsg = ERROR_MESSAGE(), @SPErrSeverity = ERROR_SEVERITY();
            SELECT @SPErrMsg, @SPErrSeverity;
        END CATCH

    My question is: can I catch the error generated when the linked-server stored procedure executes? Thanks in advance!
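
    Whether the local CATCH block fires for a remote error depends on how and when the remote error is raised, so one pragmatic workaround (a sketch, not the only answer) is to rely on the procedure's return code rather than on TRY...CATCH alone. This assumes you can change the remote procedure to RETURN a nonzero value on failure:

        DECLARE @rc int;
        EXEC @rc = [Server].[Catalog].[dbo].RaiseError;   -- same four-part name as in the question
        IF @rc <> 0
            RAISERROR('Remote procedure reported failure (return code %d).', 16, 1, @rc);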

    Read the article

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment problem

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our set-up is an ASP.Net/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us) and, whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are also running SQL Server 2000 (build 8.00.2039, or SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection. Primarily, the exceptions being thrown are:

    - System.IndexOutOfRangeException: Cannot find table 0.
    - System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the log-in script, even though there is clearly a password column available]
    - System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one is occurring very regularly]
    - System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
    - System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.
    - System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    - And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate.

    We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level data access layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy them to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications that are implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, and with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch-oriented architecture. I have very good experiences with service-oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals - HTTP calls, file drops, time schedules, Tibco messages and others - to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be decorated with the ServiceContract attribute.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. A detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive method names. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

        /// <summary>
        /// Runs a package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occurred during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occurred during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file; this way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done this, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 in the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19, you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle of dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the method's "rows" parameter. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of the execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the command below into your browser. http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12 When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you see, implementing a service-oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control: we don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel. Extension attributes: the challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the data mart, which is a relational database, and the OLAP cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases (a T-SQL sketch follows after the list)
    - Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

    Step 5: Load extension tables and attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy and process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete
    - Start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
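
    As an illustration of Step 2, here is a minimal, hedged T-SQL sketch of how the build server might create one of the per-order databases with dynamic SQL and switch it to SIMPLE recovery. The database name is purely illustrative; in the real system it would come from the metadata database.

        DECLARE @db sysname, @sql nvarchar(max);
        SET @db = N'DataMart_Client0042';   -- illustrative; the real name comes from the metadata database

        -- Build and run the CREATE DATABASE statement.
        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@db) + N';';
        EXEC sys.sp_executesql @sql;

        -- No point paying for full logging on a throwaway build database.
        SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET RECOVERY SIMPLE;';
        EXEC sys.sp_executesql @sql;

    QUOTENAME keeps an order-supplied name from breaking, or injecting into, the generated statement.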

    Read the article

  • ASP.Net performance counters

    - by nikolaosk
    I was involved in designing and implementing an ASP.Net application some time ago. After we deployed the application we wanted to monitor various aspects of it, and for that we can use the Performance Monitor. On my Windows Server 2008 machine, I go to Start > Run, type "perfmon", and the Performance Monitor window pops up. There are thousands of counters in there and it is impossible for anyone to know them all. Most people I know use the Performance Monitor to add counters to monitor SQL Server...(read more)
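
    The SQL Server side of those counters can also be read without perfmon at all, via a DMV. A small sketch (the object_name prefix differs on named instances):

        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Batch Requests/sec', N'User Connections');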

    Read the article

  • S#arp Architecture 1.5.2 released

    - by AlecWhittington
    It has been a few weeks since S#arp Architecture 1.5 RTM was released. While it was a major success, a few issues were found that needed to be addressed. These mostly involved the Visual Studio templates. What's new in S#arp Architecture 1.5.2? Merged the SharpArch.* assemblies into a single assembly (SharpArch.dll); updated both the VS 2008 and 2010 templates to reflect the use of the merged assembly; updated SharpArch.build with a custom script that allows the merging of the assemblies; copies the new merged...(read more)

    Read the article

  • Migrating Databases from SQL Server 2008 to SqlAzure

    - by kaleidoscope
    - Connect to SQL Azure through SSMS. (It will connect only if you have port 1433 open.)
    - Create the required databases on SQL Azure.
    - Create and execute schema scripts for the databases. (Make sure you have written scripts that are 100% compatible with SQL Azure; the SQL Azure Migration Wizard tool on CodePlex helps with this.)
    - Create and execute insert scripts for the databases. (SSMS can be used to generate insert scripts.)

    For further details refer to the Windows Azure Platform Kit Nov 2009.

    Anish
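
    On the "100% compatible" point: one well-known SQL Azure restriction at the time was that every table needed a clustered index before rows could be inserted, so schema scripts for heaps had to be adjusted along these lines (table and column names are illustrative):

        CREATE TABLE dbo.Customer
        (
            CustomerId int NOT NULL,
            Name nvarchar(100) NOT NULL,
            CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)   -- satisfies the clustered index requirement
        );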

    Read the article

  • Change my MX record on my server to google MX?

    - by Dejan.S
    Hi. I have a Windows Server 2008 machine where I host a site, and I have now decided to have the email handled by Google Apps. I added the MX records I got from them to the DNS settings on the server, but with no luck. I only recently started doing server work, so I did it like this: Server Manager / Roles / DNS Server / DNS / SERVERNAME / MYDOMAIN / Forward Lookup / New MX. Host or child domain: what goes here? FQDN: here is my domain name, I think, because I named the NS after my domain. FQDN MX: here is the Google MX record I got from them. MSP: 10. I have no idea where I am going wrong, but I thought I would ask you guys if any of you can give me some tips on what to look for, or any newbie mistake that you can see from this little info. I really appreciate all the help I can get on this.

    Read the article

  • SQL Server for the Oracle DBA Links

    - by BuckWoody
    I do a presentation (and a class) called "SQL Server for the Oracle DBA". It's a non-marketing overview that gives you the basics of working with SQL Server if you're already familiar with how Oracle works. This class and these links DO NOT help you with "Why should I use Oracle/SQL Server instead of Oracle/SQL Server" - I'll assume you're already there, and if not, there are LOTS of sites to help you make that decision. Although these links might contain slight marketing slants (I don't control them) I've tried to get the best links I can. Feel free to comment here to add more/better links. As such, these aren't links that help you work with Oracle - they are links to help you work with SQL Server. Some of them contain more information than you actually need, others don't have nearly enough. Taken together (and with the class) you're able to get done what you need to do.

    - "Practical SQL Server for Oracle Professionals" - a Microsoft whitepaper, probably the best place to get started: http://download.microsoft.com/download/6/9/d/69d1fea7-5b42-437a-b3ba-a4ad13e34ef6/SQLServer2008forOracle.docx
    - Free training: http://technet.microsoft.com/en-us/sqlserver/dd548020.aspx
    - Classroom training (will cost you): http://www.microsoft.com/learning/en/us/course.aspx?ID=50068A&locale=en-us
    - Terminology differences: http://www.associatedcontent.com/article/2383466/oracle_and_sql_server_basic_terminology.html
    - Datatype mapping between Oracle and SQL Server: http://msdn.microsoft.com/en-us/library/ms151817.aspx
    - The "other" direction - can still be useful for the Oracle professional to see the other side: http://blog.benday.com/archive/2008/10/23/23195.aspx

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you were explaining the query to the ORM and letting it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term as inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular - it's still just one language - but with, let's say, C# and SQL on one page you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned that you can stop it from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capture on string-based queries. These are all issues which we managed to side-step with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individual highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism? Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
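
    For what it's worth, the parameterisation point can be shown purely in T-SQL terms; a micro ORM such as Dapper ultimately sends something shaped like the second form below. The dbo.Users table is hypothetical.

        DECLARE @input nvarchar(50);
        SET @input = N'x'' OR 1=1 --';   -- a classic injection payload

        -- Fragile: the value is concatenated into the SQL text, so the injected
        -- predicate becomes part of the statement and every row comes back.
        EXEC (N'SELECT * FROM dbo.Users WHERE LastName = N''' + @input + N'''');

        -- Safe: the value travels as a parameter and is never parsed as SQL.
        EXEC sys.sp_executesql
             N'SELECT * FROM dbo.Users WHERE LastName = @LastName',
             N'@LastName nvarchar(50)',
             @LastName = @input;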

    Read the article

  • Membership in ASP.Net applications - part 4

    - by nikolaosk
    This is the fourth post in a series of posts regarding ASP.Net built-in membership functionality, providers and controls. You can read the first one here. You can read the second post here. You can read the third post here. In this post I will show you how to add users programmatically to a role. In the third post we saw how to get the users in a specific role. I will also show you how to delete a user and a role programmatically. 1) Launch Visual Studio 2005, 2008 or 2010. Express editions will work fine....(read more)

    Read the article
