Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.

Page 582/4078 | < Previous Page | 578 579 580 581 582 583 584 585 586 587 588 589  | Next Page >

  • Windows Server 2008 R2 File Permissions

    - by Fly_Trap
    I'm having some problems understanding some particular file permissions behaviour. Here are the steps to reproduce:
    1. Log into the server using the default Administrator account.
    2. Create a text file (testfile.txt) in C:\ProgramData containing some arbitrary text.
    3. Create a new user account and make it a member of the Administrators group.
    4. Log in using the new account and open C:\ProgramData\testfile.txt.
    5. Edit the text and try to save.
    Upon clicking save I'm presented with the Save As dialog, which indicates that I do not have the necessary permissions to edit the file. This seems odd considering that the new user account is a member of Administrators. When I view the permissions of the file I can see that there are three groups listed: SYSTEM, Administrators and Users. SYSTEM and Administrators have full permissions; however, Users only has the Read & Execute and Read permissions checked. It would appear that when I open testfile.txt from the new user's account, it opens in the context of the Users group, despite the account being a member of Administrators. Is this correct? It would certainly explain the behaviour. The reason this is an issue for me is that if I deploy an application via 'Run as Administrator', will normal users be able to edit the text files I install to ProgramData? Is this behaviour confined to Windows Server, or is it the same in Vista and Win7?
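    If the goal is simply to let standard users edit files the application drops into ProgramData, one common approach is to relax the ACL on the application's own subfolder at install time rather than rely on the inherited defaults. A minimal C# sketch, run elevated; the "MyApp" folder name is just an assumption for illustration:

        using System;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        class ProgramDataAcl
        {
            static void Main()
            {
                // e.g. C:\ProgramData\MyApp -- created by the installer running elevated
                string dir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "MyApp");
                Directory.CreateDirectory(dir);

                // Grant the built-in Users group Modify rights on the folder and everything below it
                DirectorySecurity security = Directory.GetAccessControl(dir);
                SecurityIdentifier usersSid = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
                security.AddAccessRule(new FileSystemAccessRule(
                    usersSid,
                    FileSystemRights.Modify,
                    InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                    PropagationFlags.None,
                    AccessControlType.Allow));
                Directory.SetAccessControl(dir, security);
            }
        }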

    Read the article

  • WPF C# Client/Server announcement system

    - by manemawanna
    I'm currently in the process of creating an announcement system at my place of work. The role of this system will be to replace all users' email due to people misusing it and generally abusing the facility. The system will consist of:
    Web Portal: Will allow staff to enter any important announcements (this will be restricted via AD).
    SQL Server 2k5 DB: Will hold the announcements along with records of staff members and whether they've read the announcements etc.
    Front End: Created in WPF & C#, which is nearly complete; it will display the announcements to the users.
    Web Page: Client will contact every so often, which will return an xml file for the client to read.
    However my boss has now shifted the goal posts and would like the announcements to appear to the user once they are written to the database, rather than waiting on the client to contact the webpage. So now I'm a bit unsure as to how to go about this. I have one idea where I would create a small server application to monitor for new announcements and then contact the clients to inform them to approach the website for the information they need. But I'm just looking to see if there's a better or more efficient way to do this, or if someone else has a more appropriate idea or suggestion.
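    One option that avoids writing a separate monitoring service is SQL Server query notifications via SqlDependency, which the WPF client can subscribe to directly. A minimal sketch, assuming Service Broker is enabled on the announcements database; the table and column names here are placeholders, not the real schema:

        using System.Data.SqlClient;

        class AnnouncementWatcher
        {
            const string ConnString = "Data Source=.;Initial Catalog=Announcements;Integrated Security=True";

            public void Start()
            {
                SqlDependency.Start(ConnString);   // requires Service Broker on the database
                Subscribe();
            }

            void Subscribe()
            {
                using (SqlConnection conn = new SqlConnection(ConnString))
                // Query notifications need two-part table names and an explicit column list (no SELECT *)
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT AnnouncementId, Title, Body FROM dbo.Announcement", conn))
                {
                    SqlDependency dependency = new SqlDependency(cmd);
                    dependency.OnChange += (s, e) =>
                    {
                        // Notifications fire once, so re-subscribe, then refresh the client UI
                        Subscribe();
                        // TODO: marshal to the WPF dispatcher and reload the announcements list
                    };
                    conn.Open();
                    cmd.ExecuteReader().Close();   // executing the command registers the subscription
                }
            }
        }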

    Read the article

  • Server side aggregation using Spring MVC Controllers

    - by Venu
    Hi! We have a web site with multiple pages, each with a lot of AJAX calls. All the Ajax calls go to a data proxy layer written as a Spring 3 MVC application. In order to reduce the load on the server we are planning to aggregate calls on the client and send them to the proxy layer. For example: we have two calls, /controller1/action1 and /controller2/action2, in different controllers. I want to write an aggregate controller and call it from the client instead of making two calls to the individual controllers. Something like /aggregatecontroller/aggregate (and we will post the information regarding which calls are being aggregated and the required parameters). Is there any way we can call another controller/action from a controller/action and get its output? The flow will be something like this:
    a. Call /aggregatecontroller/aggregate with info regarding call1 and call2
    b. The aggregate action understands that it needs to aggregate call1 and call2
    c. It calls /controller1/action1 and gets the response
    d. It calls /controller2/action2 and gets the response
    e. It merges both into a json response and sends it back to the browser.
    Please let me know if you have any suggestion regarding how to do this, or if you think there is a better approach to server side aggregation. Thanks for your time.

    Read the article

  • Thumbnail image saved with worse quality on Windows Server 2003

    - by Angelo
    Hello, in an asp.net 2.0 application I am trying to create thumbnails from uploaded images. However, when I test the application on my PC under Windows 7 it works fine, but on the real Windows 2003 Server the resized image has worse quality. Where could this difference come from? A different JPEG codec, or something else? If so, how can it be updated on Win 2003 Server? Thanks! Here is the code:
    Resize of the image:
        Bitmap newBmp = new Bitmap(imgWidth, imgHeight, PixelFormat.Format24bppRgb);
        newBmp.SetResolution(inputBmp.HorizontalResolution, inputBmp.VerticalResolution);
        //Create a graphics object attached to the new bitmap
        Graphics newBmpGraphics = Graphics.FromImage(newBmp);
        newBmpGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        newBmpGraphics.SmoothingMode = SmoothingMode.HighQuality;
        newBmpGraphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
        newBmpGraphics.DrawImage(inputBmp,
            new Rectangle(0, 0, imgWidth, imgHeight),
            new Rectangle(0, 0, inputBmp.Width, inputBmp.Height),
            GraphicsUnit.Pixel);
    Save of the image:
        System.IO.Stream imgStream = new System.IO.MemoryStream();
        //Get the ImageCodecInfo for the desired target format
        ImageCodecInfo destCodec = FindCodecForType(ImageMimeTypes.JPEG);
        if (destCodec == null)
        {
            //No codec available for that format
            throw new ArgumentException("The requested format image/jpeg does not have an available codec installed", "destFormat");
        }
        //Create an EncoderParameters collection to contain the
        //parameters that control the dest format's encoder
        EncoderParameters destEncParams = new EncoderParameters(1);
        EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)quality);
        destEncParams.Param[0] = qualityParam;
        //Save w/ the selected codec and encoder parameters
        inputBmp.Save(imgStream, destCodec, destEncParams);
        Bitmap destBitmap = new Bitmap(imgStream);
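    For comparison, a self-contained resize-and-save helper that pins the JPEG quality to a fixed number and looks the encoder up by MIME type, so nothing depends on per-machine defaults. This is only a sketch, not a confirmed fix; the 85L quality value is an assumption:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static class ThumbnailHelper
        {
            public static void SaveThumbnail(Image input, int width, int height, string path)
            {
                using (Bitmap thumb = new Bitmap(width, height, PixelFormat.Format24bppRgb))
                using (Graphics g = Graphics.FromImage(thumb))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.SmoothingMode = SmoothingMode.HighQuality;
                    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    g.DrawImage(input, new Rectangle(0, 0, width, height),
                        new Rectangle(0, 0, input.Width, input.Height), GraphicsUnit.Pixel);

                    // Look up the JPEG encoder by MIME type so the choice doesn't depend on defaults
                    ImageCodecInfo jpegCodec = null;
                    foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
                        if (codec.MimeType == "image/jpeg") jpegCodec = codec;

                    using (EncoderParameters encParams = new EncoderParameters(1))
                    {
                        encParams.Param[0] = new EncoderParameter(Encoder.Quality, 85L);
                        thumb.Save(path, jpegCodec, encParams);   // save the resized bitmap, not the input
                    }
                }
            }
        }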

    Read the article

  • Compiling a C++ application on Windows 7, but execute it on Win2003 Server

    - by dabs
    I have a C++ application (quite complex, multiple projects) in Visual Studio 2008, that produces a single dll. Recently I switched to Windows 7, but had previously been compiling under Windows XP. Suddenly the dll in question cannot be loaded by another application, i.e. on a machine running Windows 2003 Server. I've been trying various things: I've installed the VC9.0 redistributable package on the server Also copied various .dll's from that package to the application folder The project is of course compiled in release mode When I run depends.exe on the client machine, I do get the following error: "Error: The Side-by-Side configuration information for "my_dll.dll" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001). Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." and the icon for shlwapi.dll has a red overlay icon. This didn't happen when I was compiling under WinXP, so I'm guessing that there really is no problem with the .dll's on the client machine, but somewhere there is a reference to that particular version of some dll. Does anyone know what would be the best way to resolve this? Regards, Daníel

    Read the article

  • jquery dialog calls a server side method with button parameters

    - by Chiefy
    Hello all, I have a gridview control with a delete asp:ImageButton for each row of the grid. What I would like is for a jQuery dialog to pop up when a user clicks the delete button to ask if they are sure they want to delete it. So far I have the dialog coming up just fine, I've got buttons on that dialog and I can make the buttons call server side methods, but the problem is getting the dialog to know the ID of the row that the user has selected and then passing that to the server side code. The button in the page row is currently just an 'a' tag with the id 'dialog_link'. The jQuery on the page looks like this:
        $("button").button();
        $("#DeleteButton").click(function () {
            $.ajax({
                type: "POST",
                url: "ManageUsers.aspx/DeleteUser",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    // Replace the div's content with the page method's return.
                    $("#DeleteButton").text(msg.d);
                }
            });
        });
        // Dialog
        $('#dialog').dialog({
            autoOpen: false,
            width: 400,
            modal: true,
            bgiframe: true
        });
        // Dialog Link
        $('#dialog_link').click(function () {
            $('#dialog').dialog('open');
            return false;
        });
    The dialog itself is just a set of 'div' tags. I've thought of lots of different ways of doing this (parameter passing, session variable etc...) but can't figure out how to get any of them working. Any ideas are most welcome. As always, thanks in advance to those who contribute.
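    For illustration, the server side of this can be a page method that takes the selected row's ID as a parameter, so the missing piece is mainly getting that ID into the AJAX call's data payload instead of the empty "{}". A hedged sketch of the code-behind (the DeleteUser name matches the URL above; the repository call is a placeholder):

        using System.Web.Services;

        public partial class ManageUsers : System.Web.UI.Page
        {
            // Callable from jQuery via POST to ManageUsers.aspx/DeleteUser
            [WebMethod]
            public static string DeleteUser(int userId)
            {
                // Placeholder for the real delete logic, e.g. UserRepository.Delete(userId);
                return "Deleted user " + userId;
            }
        }

    On the client, one way to wire it up is to stash the row's ID on the dialog when 'dialog_link' is clicked (for example $('#dialog').data('userId', id)) and then send it as data: JSON.stringify({ userId: $('#dialog').data('userId') }) in the $.ajax call; that is just one option among the approaches mentioned above.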

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up them over lunch in the Red Gate canteen. I was so impressed with what I found there, that, three days later, I'd applied for a role as a developer. And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over the development in the 90s. What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in big contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails. Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI. What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long and short term aims for the product and working out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly. 
Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user? It seems to me that DBAs are quite independent minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look for what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their set-up, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • FREE! SQL Scripts Manager – a Christmas gift from Red Gate

    Red Gate has released SQL Scripts Manager, a free tool containing over 25 scripts written by SQL Server experts, to help you automate common troubleshooting, diagnostic, and maintenance tasks.

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job. I once took over as the system administrator (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two-week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of the feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know?" The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems:
    1. Where is everything/everybody?
    The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the U number in the rack, the SAN LUNs, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.
    2. What do they do?
    For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of the most important things a Data Professional needs to know. In the case of the other administrators or co-owners, document each person's responsibilities.
    3. What are the credentials?
    Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!
    4. What is out of the ordinary?
    This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X?
Is the person who co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here. This is my short list - anything you care to add?

    Read the article

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (Advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component which is "Installed on 1st Use" (described in Plan to deploy Office 2010 in a Remote Desktop Services environment): "Microsoft Office cannot run this add-in. An error occurred and this feature is no longer functioning correctly. Please contact your system administrator." I have ~50 Citrix servers where I need to change the installation state of all Advertised components to Local, so I created an XML file like this:
        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
          <Option Id="AccessWizards" State="Local" />
          <Option Id="DeveloperWizards" State="Local" />
          <Setting Id="Reboot" Value="NEVER" />
        </Configuration>
    I run it with a command like this (using the appropriate paths):
        "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml"
    According to the log file, the syntax appears to be accepted and the config file parsed:
        Parsing command line.
        Config XML file specified: [..]\Install1stUse-to-Forced.xml
        Modify requested for product: PROPLUS
        Parsing config.xml at: [..]\Install1stUse-to-Forced.xml
        Preferred product specified in config.xml to be: PROPLUS
    But the "Final Option Tree" still reads:
        Final Option Tree:
        AlwaysInstalled:local
        Gimme_OnDemandData:local
        ProductFiles:local
        VSCommonPIAHidden:local
        dummy_MSCOMCTL_PIA:local
        dummy_Office_PIA:local
        ACCESSFiles:local
        ...
        AccessWizards:advertised
        DeveloperWizards:advertised
        ...
    And the components remain "Advertised". Just to see if the installation state is overridden in another XML file, I ran:
        findstr /l /s /i "AccessWizards" *.xml
    against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but just found DefaultState to be "Local". What am I doing wrong? Thanks!

    Read the article

  • Give Access to a Subdirectory Without Giving Access to Parent Directories

    - by allquixotic
    I have a scenario involving a Windows file server where the "owner" wants to dole out permissions to a group of users of the following sort:
    \\server\dir1\dir2\dir3: Read & Execute and Write
    \\server\dir1\dir2: No permissions.
    \\server\dir1: No permissions.
    \\server: Read & Execute
    To my understanding, it is not possible to do this because Read & Execute permission must be granted on all the parent directories in a directory chain in order for the operating system to be able to "see" the child directories and get to them. Without this permission, you can't even obtain the security context token when trying to access the nested directory, even if you have full access to the subdirectory. We are looking for ways to get around this, without moving the data from \\server\dir1\dir2\dir3 to \\server\dir4. One workaround I thought of, but which I am not sure will work, is creating some sort of link or junction \\server\dir4 which is a reference to \\server\dir1\dir2\dir3. I am not sure which of the available options (if any) would work for this purpose if the user does not have Read & Execute permission on \\server\dir1\dir2 or \\server\dir1, but as far as I know, the options are these: NTFS Symbolic Link, Junction, Hard Link. So the questions:
    1. Are any of these methods suitable to accomplish my goal?
    2. Are there any other methods of linking or indirectly referencing a directory, which I haven't listed above, which might be suitable?
    3. Are there any direct solutions that don't involve granting Read & Execute to \\server\dir1 or \\server\dir2 but still allow access to \\server\dir1\dir2\dir3?

    Read the article

  • Synchronizing the SamAccountName Property using Windows Azure Active Directory Sync Tool

    - by pk.
    Using this official documentation as a guide, I would expect the SamAccountName property to sync from my on-premise AD to Office 365. I think that it used to do exactly that, but now it seems that it doesn't so much sync the attribute as it does create an entirely new, unlinked value and store it in Office 365. This has caused some minor issues for me (broken scripts, annoying permissions management, etc.) and may be part of a more major issue regarding ADFS authentication.
    On-Premise:
        PS C:\Windows\system32> Get-ADUser jdoe -Properties SamAccountName | fl SamAccountName

        SamAccountName : jdoe
    Office 365 Sync'ed Objects:
        PS C:\Windows\system32> Get-Mailbox jdoe | fl SamAccountName

        SamAccountName : $1A7H20-K1LCOJFFBHGS
    I understand how to work around this issue in my scripts -- there exists the ImmutableId property which can be mapped back to the on-premise GUID. As far as the issue I'm having with ADFS, I'm less certain how to proceed and if this is causing my issues. At this point I really would just like some verification that I'm not crazy and that this used to be sync'ed at some point in the past and that Office 365 broke it relatively recently. I also think that MS documentation should perhaps be updated to exclude SamAccountName from the list of synchronized properties on the page I linked.
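    For the scripting workaround mentioned above: the ImmutableId that Office 365 exposes is, by the sync tool's default convention, the Base64-encoded objectGUID of the on-premise account, so mapping it back is a one-liner. A C# sketch of the conversion, assuming that default encoding is in use:

        using System;

        class ImmutableIdMapper
        {
            // Convert an Office 365 ImmutableId back to the on-premise AD objectGUID
            static Guid ToObjectGuid(string immutableId)
            {
                return new Guid(Convert.FromBase64String(immutableId));
            }

            // ...and the other direction, from objectGUID to ImmutableId
            static string ToImmutableId(Guid objectGuid)
            {
                return Convert.ToBase64String(objectGuid.ToByteArray());
            }

            static void Main()
            {
                Console.WriteLine(ToImmutableId(Guid.NewGuid()));
            }
        }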

    Read the article

  • Cannot connect Linux XAMPP PHP to SQL Server database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.1. I have used the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php; they all fail. I am able to "connect" with a linked database through MS Access; however, that is not an acceptable solution as not all users will have Access. The first method (at webcheatsheet) uses mssql_connect, et al., but I get this error from the mssql_connect() call:
        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code]
    [my server] is the server address; I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, both on my machine and SQL Server? We do not have a bona-fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks

    Read the article

  • pass username and password to get-credential or run sql query without using invoke-sqlcmd in Powershell

    - by Emo
    I am trying to connect to a remote sql database and simply run the "select @@servername" query in Powershell. I'm trying to do this without using integrated security. I've been struggling with "get-credential" and "invoke-sqlcmd", only to find (I think) that you can't pass the password from "get-credential" to other Powershell cmdlets. Here's the code I'm using:
        add-pssnapin sqlserverprovidersnapin100
        add-pssnapin sqlservercmdletsnapin100
        # load assemblies
        [Reflection.Assembly]::Load("Microsoft.SqlServer.Smo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.SqlEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.SmoEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        # connect to SQL Server
        $serverName = "HLSQLSRV03"
        $server = New-Object -typeName Microsoft.SqlServer.Management.Smo.Server -argumentList $serverName
        # login using SQL authentication
        $server.ConnectionContext.LoginSecure=$false;
        $credential = Get-Credential
        $userName = $credential.UserName -replace("\","")
        $pass = $credential.Password
        $server.ConnectionContext.set_Login($userName)
        $server.ConnectionContext.set_SecurePassword($credential.Password)
        $DB = "Master"
        invoke-sqlcmd -query "select @@Servername" -database $DB -serverinstance $servername -username $username -password $pass
    If I just hardcode the password in at the end of the "invoke-sqlcmd" line, it works. Is this because you can't use "get-credential" with "invoke-sqlcmd"? If so... what are my alternatives? Thanks so much for your help, Emo
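    Worth noting for any workaround: Get-Credential hands back the password as a SecureString, while invoke-sqlcmd's -Password parameter expects plain text, so something has to convert it. In the script itself the commonly used shortcut is $credential.GetNetworkCredential().Password; because PowerShell sits on .NET, the conversion it performs is the same one you would write in C#, sketched here for illustration:

        using System;
        using System.Runtime.InteropServices;
        using System.Security;

        static class SecureStringHelper
        {
            // Recover the plain text from a SecureString (what -Password ultimately needs)
            public static string ToPlainText(SecureString secure)
            {
                IntPtr bstr = Marshal.SecureStringToBSTR(secure);
                try
                {
                    return Marshal.PtrToStringBSTR(bstr);
                }
                finally
                {
                    Marshal.ZeroFreeBSTR(bstr);   // wipe the unmanaged copy
                }
            }
        }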

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications that are implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch oriented architecture. I have very good experiences with service oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals, HTTP calls, file drops, time schedules, Tibco messages and others to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:
        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }
    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be ornamented with the ServiceContract attribute.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }
    Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. The detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive names for methods. Then, you have to provide the implementation of the interface:
        public class PackRunner : IPackRunner
        {
            ...
    There are two methods defined in this class. I think that since the full code is attached to the post, I will show only the more interesting method, RunPackage2.
        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occured during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occured during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }
    The method accepts the package name and the number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 in the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle with dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the parameter "rows" of the method. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the below command into your browser:
        http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12
    When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in xml. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you see, implementing a service oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked on we also have resource monitoring and execution control. We don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner. It has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!
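    The remark above that connection strings, logging and events can all be changed before execution deserves a concrete illustration. This is only a sketch of what such overrides could look like, not code from the attached solution; the connection manager name "AdventureWorks", the server name and the variable value are assumptions:

        using Microsoft.SqlServer.Dts.Runtime;

        static class PackageConfigurator
        {
            // Load a package, override a few settings at run time and execute it
            public static DTSExecResult RunWithOverrides(string pkgLocation)
            {
                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                // Point the package at a different database at run time
                pkg.Connections["AdventureWorks"].ConnectionString =
                    "Data Source=PRODSQL01;Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

                // Switch on the logging that is configured inside the package
                pkg.LoggingMode = DTSLoggingMode.Enabled;

                pkg.Variables["User::ExportRows"].Value = 100;
                return pkg.Execute();
            }
        }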

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically create and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel. Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem of the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order. 
The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements. So, in broad terms the steps required to fulfil a client order are as follows:
Step 1: Prepare metadata
- Create a set of database names unique to the client's order
- Modify all package connection strings to be used by SSIS to point to the new databases and file locations
Step 2: Create relational databases
- Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
- Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases
Step 3: Load staging database
- Use SSIS to load all data files into the staging database in a parallel operation
- Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube
Step 4: Load data mart relational database
- Load the data from staging into the data mart relational database, again in parallel where possible
- Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables
Step 5: Load extension tables & attributes
- Pivot the extension attributes from their native name/value pairs into proper relational tables
- Add the extension attributes to the views used by the OLAP cube
Step 6: Deploy & process OLAP cube
- Deploy the OLAP database directly to the server using a C# script task in SSIS
- Modify the connection string used by the OLAP cube to point to the data mart relational database
- Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
- Remove any standard attributes that are not required
- Process the OLAP cube
Step 7: Backup and drop databases
- Drop the staging database as it is no longer required
- Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
- Drop the data mart relational and OLAP databases from the build server
- Mark the order complete
- Start processing the next order, ad infinitum
So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
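As a rough illustration of the "deploy and process" step above, Analysis Management Objects (AMO) can be called from a C# script task to process the freshly deployed cube database. This is only a sketch under assumed names (instance, database), not the author's actual implementation:

    using Microsoft.AnalysisServices;   // AMO, referenced from the SSIS script task

    class CubeProcessor
    {
        static void Main()
        {
            Server ssas = new Server();
            ssas.Connect("Data Source=BUILDSERVER");   // placeholder SSAS instance

            // Find the freshly deployed OLAP database for this client order and fully process it
            Database olapDb = ssas.Databases.FindByName("ClientOrder_12345");
            if (olapDb != null)
            {
                olapDb.Process(ProcessType.ProcessFull);
            }

            ssas.Disconnect();
        }
    }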

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
SQLU part 1 - What and why of database testing
SQLU part 2 - What and why of database refactoring
SQLU part 3 - Tools of the trade
This is the second part of the series and in it we'll take a look at what database refactoring is and why to do it.
Why refactor a database
To know why to refactor we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. By having direct table access we can kiss fast and sweet refactoring goodbye. One more point in favour of having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.
What to refactor in a database
Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.
Adding or removing database schema objects
Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With the proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).
Changing physical structures
Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy?
But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!
Fixing bad code
We all have some bad code in our systems. We usually refer to that code as a code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Tomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.
Breaking apart huge stored procedures
Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That's the application's part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.
Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
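Since the whole argument rests on having black box tests around the database abstraction layer before touching it, a small illustration of what such a test can look like from client code may help. This is a hedged C# sketch in NUnit style with a hypothetical dbo.GetCustomerOrders procedure, not an example taken from the post or its part 3:

    using System.Data;
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class GetCustomerOrdersTests
    {
        const string ConnString = "Data Source=.;Initial Catalog=SalesDb;Integrated Security=True";

        [Test]
        public void Returns_expected_columns_for_known_customer()
        {
            using (SqlConnection conn = new SqlConnection(ConnString))
            using (SqlCommand cmd = new SqlCommand("dbo.GetCustomerOrders", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@CustomerId", 42);
                conn.Open();

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // Black box check: the contract (column names, some data) must survive refactoring
                    Assert.AreEqual("OrderId", reader.GetName(0));
                    Assert.AreEqual("OrderDate", reader.GetName(1));
                    Assert.IsTrue(reader.Read(), "Expected at least one order for the seeded customer.");
                }
            }
        }
    }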

    Read the article

  • warning BC40056: Namespace or type specified in the Imports 'MS.Internal.Xaml.Builtins' doesn't cont

    - by Heinzi
    I have a WPF VB.NET project in Visual Studio 2008. For some reason, Visual Studio thinks that it needs to add an Imports MS.Internal.Xaml.Builtins to every auto-generated XAML partial class (= the nameOfXamlFile.g.vb files), resulting in the following warning:
        warning BC40056: Namespace or type specified in the Imports 'MS.Internal.Xaml.Builtins' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member. Make sure the imported element name doesn't use any aliases.
    I can remove the Imports line, but, since this is an auto-generated file, it reappears every time the project is rebuilt. This warning message is annoying and clutters my error list. Is there something that can be done about it? Or is it a known bug?

    Read the article

  • SVN on Team Foundation Server - Delphi 2010

    - by BahaiResearch.com
    We're looking to upgrade to Delphi 2010 and have Team Foundation Server as our source Control. Is there a plug in for TFS that allows clients to talk to it via SVN? I noticed that CodePlex, Microsoft's open source web service, supports TFS and SVN so am hoping that there is a SVN plug in for TFS. Ian

    Read the article

  • ruby script/server not reading RAILS_ENV option

    - by iwan
    Hello, I tried to run ruby script/server RAILS_ENV=production but somehow it always tries to read the "development" config. Nothing's wrong with RAKE XXX RAILS_ENV=production (it reads the production config). Any idea how to troubleshoot? I have my other rails app on the same machine and it works fine. The problem above only happens for the redmine rails app. Thanks in advance. -iwan

    Read the article

  • MS Chart Control Scale - Line graph show 12 months

    - by Mike
    Hi, On my X Axis, I have months. The chart shows up to 11 points, i.e. Jan - Nov of the same year, but when I add 12 points (Jan - Dec), it will do an auto label thing and change the interval for every 4 months. How can I change the graph so that it shows 12 months before it does the auto labels? Here is the server control code I am currently using. <asp:CHART ID="Chart1" runat="server" BorderColor="181, 64, 1" BorderDashStyle="Solid" BorderWidth="2" Height="296px" ImageLocation="~/TempImages/ChartPic_#SEQ(300,3)" ImageType="Png" Palette="None" Width="700px" BorderlineColor=""> <legends> <asp:Legend BackColor="Transparent" Font="Trebuchet MS, 8pt, style=Bold" IsTextAutoFit="False" Name="Default" Alignment="Center" DockedToChartArea="ChartArea1" Docking="Top" IsDockedInsideChartArea="False" Title="Legend"> </asp:Legend> </legends> <series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartType="Line" Color="220, 65, 140, 240" MarkerSize="6" Name="Series1" ShadowColor="Black" ShadowOffset="2" XValueType="DateTime" YValueType="Double" LabelFormat="c0" LegendText="Actual" MarkerStyle="Circle"> </asp:Series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartType="Line" Color="220, 224, 64, 10" MarkerSize="6" Name="Series2" ShadowColor="Black" ShadowOffset="2" XValueType="DateTime" YValueType="Double" LabelFormat="c0" LegendText="Projected" MarkerStyle="Circle"> </asp:Series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartArea="ChartArea1" ChartType="Line" Legend="Default" Name="Series3" LabelFormat="c0" XValueType="DateTime" YValueType="Double" Color="0, 192, 192" MarkerSize="6" ShadowColor="Black" ShadowOffset="2" LegendText="Actual Credit Limit" MarkerStyle="Circle"> </asp:Series> </series> <chartareas> <asp:ChartArea BackColor="#DEEDF7" BackGradientStyle="TopBottom" BackSecondaryColor="White" BorderColor="64, 64, 64, 64" BorderDashStyle="Solid" Name="ChartArea1" ShadowColor="Transparent"> <area3dstyle inclination="40" isclustered="False" isrightangleaxes="False" lightstyle="Realistic" perspective="9" rotation="25" wallwidth="3" /> <axisy linecolor="64, 64, 64, 64" islabelautofit="False" isstartedfromzero="False"> <LabelStyle Font="Trebuchet MS, 8.25pt, style=Bold" Format="c0" /> <majorgrid linecolor="64, 64, 64, 64" /> </axisy> <axisx linecolor="64, 64, 64, 64" intervaloffsettype="Months" intervaltype="Months" islabelautofit="False" isstartedfromzero="False"> <LabelStyle Font="Trebuchet MS, 8.25pt, style=Bold" Angle="-60" Format="MMM yy" /> <majorgrid linecolor="64, 64, 64, 64" /> </axisx> </asp:ChartArea> </chartareas> </asp:CHART> Thanks.
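    One thing that could be tried here (a hedged suggestion, not a confirmed fix) is to stop relying on the automatic label interval and pin the X axis to one label per month from code-behind. A short C# sketch, assuming the chart above; the page class name is made up:

        using System.Web.UI.DataVisualization.Charting;

        public partial class CashflowChart : System.Web.UI.Page
        {
            protected void Page_Load(object sender, System.EventArgs e)
            {
                Axis xAxis = Chart1.ChartAreas["ChartArea1"].AxisX;

                // Force one label per month instead of letting the chart pick every 4th
                xAxis.Interval = 1;
                xAxis.IntervalType = DateTimeIntervalType.Months;
                xAxis.LabelStyle.IsEndLabelVisible = true;
            }
        }

    The same thing can probably be expressed declaratively by adding Interval="1" and IntervalType="Months" to the <axisx> element in the markup above.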

    Read the article

  • Crystal Report not working on Windows server 2008

    - by gofor.net
    The Crystal Report is working fine against the local database, but it's not working on Windows Server 2008 when I deploy it on IIS 7. I have also installed the Crystal Reports runtime. I copied CrystalDecisions.CrystalReports.Engine.dll, CrystalDecisions.ReportSource.dll and CrystalDecisions.Shared.dll from C:\Program Files (x86)\Business Objects\Common\3.5\managed\dotnet2 into the bin folder of my application. Still, it's not showing anything and not throwing any error either. It would be very kind if someone could help me. Thank you.

    Read the article

  • SWFUpload multiple files server-side handling

    - by Chau
    I need the user to be able to upload multiple files to my server, thus I am using the SWFUpload utility. SWFUpload sends the files one by one, and I need to store them all in the same temporary directory. My ASP.NET handler recieves the files one by one and I can store the file appropriately. My problem is: How do I know which files belong to the same upload? Rephrased, how do I connect the files in my handler?
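    Since SWFUpload sends the files as separate POSTs, one common pattern is to generate a batch identifier on the page, pass it with every file via SWFUpload's post_params setting, and have the handler use it to pick the temporary directory. A hedged C# sketch; the "batchId" parameter name and folder layout are assumptions, while "Filedata" is SWFUpload's default file field name:

        using System.IO;
        using System.Web;

        public class UploadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // SWFUpload posts each file under "Filedata" by default
                HttpPostedFile file = context.Request.Files["Filedata"];
                string batchId = context.Request.Form["batchId"];   // sent via post_params

                if (file != null && !string.IsNullOrEmpty(batchId))
                {
                    // All files from the same upload session share one temp folder
                    string dir = Path.Combine(context.Server.MapPath("~/App_Data/Uploads"),
                                              Path.GetFileName(batchId));
                    Directory.CreateDirectory(dir);
                    file.SaveAs(Path.Combine(dir, Path.GetFileName(file.FileName)));
                }
            }

            public bool IsReusable { get { return true; } }
        }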

    Read the article

  • Unique ID for MS Word 2007 paragraph

    - by Ganish
    I am writing large MS Word 2007 documents which are often being changed. I have to number paragraphs with stationary unique numbers that will not change while the documents are being changed. The numbers should be unique, and will not change even if previous numbers are deleted. The order of the list is not mandatory, and the addition of a new number before existing numbers is possible (for instance: the sequence 1, 4, 3 means that paragraphs 1-3 were written, then #2 was deleted, then #5 was added. #3 was not affected by the later editing). The mechanism should be internal to the document, as I am working both online and offline. The numbers are allocated to every document individually. Since I don't know how to program under MS Word, I'd appreciate getting a complete solution.

    Read the article
