Search Results

Search found 21664 results on 867 pages for 'process innovation'.


  • What components of your site do you typically "offload" or embed?

    - by Chad
    Here's what I mean. In developing my ASP.NET MVC based site, I've managed to offload a great deal of the static file hosting and even some of the "work". Like so:
    - jQuery for my JavaScript framework - instead of hosting it on my site, I use the Google CDN
    - Google Maps - obviously "offloaded", no real work being performed on my server, Google hosted
    - jQueryUI framework - Google CDN
    - jQueryUI CSS framework - Google CDN
    - jQueryUI CSS framework themes - Google CDN
    So what I'm asking is this: other than what I've got listed, what aspects of your sites have you been able to offload, or embed, from outside services? A couple of others that come to mind: OpenAuth - take much of the authentication process work off your site; Google Wave - when it comes out, take communication work off of your site.

    Read the article

  • ASP.NET FileUpload control postback problems

    - by Spooky2010
    Using ASP.NET, VS2008, C#. I'm using a FileUpload control on a webform. The uploading of a file (i.e. PDF documents) to a server directory works OK. I have on the webform a "preview" button that the user can use to preview the PDF file after they have selected it via the FileUpload browse feature. I do this by:
        if (this.FileUpload1.HasFile)
        {
            localURL = FileUpload1.PostedFile.FileName;
            // use this to preview file. Other methods are restricted by local security requirements
            Process.Start(localURL);
        }
    My problem is that after the button-click postback occurs, the location of the selected file disappears from the textbox part of the FileUpload control. How can I keep this info there, so the user does not have to browse again and can instead just click upload to upload the file? Any help appreciated, thanks. (A sketch of one workaround follows this entry.)

    Read the article
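
    One workaround, not part of the original question: the browser will not let a page repopulate a file input after a postback, so instead of trying to keep the text in the control, the preview step can save the posted file server-side and remember where it went. A minimal sketch, assuming a WebForms code-behind; the control names, folder and session key are illustrative:

        // assumed usings: System, System.Diagnostics, System.IO, System.Web.UI
        protected void PreviewButton_Click(object sender, EventArgs e)
        {
            if (FileUpload1.HasFile)
            {
                // Persist the posted file once and remember the path; the FileUpload
                // textbox itself cannot be repopulated on the next postback.
                string folder = Server.MapPath("~/App_Data/Uploads");
                Directory.CreateDirectory(folder);   // no-op if it already exists
                string tempPath = Path.Combine(folder, Path.GetFileName(FileUpload1.FileName));
                FileUpload1.SaveAs(tempPath);
                Session["PendingUploadPath"] = tempPath;

                Process.Start(tempPath); // preview, as in the original snippet
            }
        }

        protected void UploadButton_Click(object sender, EventArgs e)
        {
            // No second browse needed: reuse the copy saved during the preview step.
            string path = Session["PendingUploadPath"] as string;
            if (!string.IsNullOrEmpty(path) && File.Exists(path))
            {
                // copy "path" to its final destination here
            }
        }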

  • Assignment in python for loop possible?

    - by flyingcrab
    I have a dictionary d (and a separate sorted list of its keys, keys). I wanted the loop to only process entries where the value is False - so I tried the following:
        for key in keys and not d[key]:
            # do foo
    I suppose my understanding of Python syntax is not what I thought it was - because the expression above doesn't do what I expected, and I get an instantiation error. The below works of course, but I'd really like to be able to use something like the code above. Possible?
        for key in keys:
            if d[key]:
                continue
            # foo time!
    Thanks!

    Read the article

  • Eclipse CDT on Snow Leopard cannot find binaries

    - by ejel
    After upgrading to Snow Leopard, I can no longer run my Eclipse CDT project on my computer. While the build process completes without any error, Eclipse does not recognize the binary file it created. When I try to point to the binary file in the Run Configurations dialog, it cannot find any binary in the project, though executing the file from Terminal works fine. According to a post on the Eclipse forum, this might be a problem where the Mach-O parser does not recognize 64-bit binaries. Does anyone know what the solutions or workarounds to the problem are, so that I can run/debug my C++ projects on Snow Leopard? UPDATE: The solution suggested by Shane, though it allows the created binary to be recognized, introduces another problem. Since system libraries in Snow Leopard are all 64-bit, it is no longer possible to link code built with -arch i386 against these libraries, and hence this is not a feasible solution yet.

    Read the article

  • SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

    - by user197529
    An application that I support has recently begun experiencing extended periods of time required to execute a report in SQL Server Reporting Services. The reports that are being executed are not terribly complex. There are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8000 records total. Reports are generally from 2 to 100 pages. One can argue (and I have) about the benefit of a 100-page report, but the client is footing the bill. At any rate, the problem is that even a report with 500 records (11 pages) takes 5 minutes to return to the browser. In the execution log the TimeDataRetrieval is 60 seconds, but the TimeProcessing is 235 seconds. It seems bizarre to me that my query runs so quickly, but it takes Reporting Services so long to process the data. Any suggestions are greatly appreciated. Kind regards, Bernie

    Read the article

  • Which audio library to use?

    - by Jeb
    I want to build a .NET application for processing audio, and distribute it using ClickOnce deployment. I need access to a raw audio pipeline. Which audio library should I be using? I've heard the managed libraries for DirectSound are a dead end. I need as little as possible to be installed on the client's machine; anything outside of the ClickOnce process isn't going to work. NAudio might be a possibility, but isn't there potentially a separate driver install? There's also SlimDX. It's a shame -- the managed DirectX libraries seem to work nicely and, from what I've read, DirectX can be included in the ClickOnce install. (See the NAudio sketch after this entry.)

    Read the article
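
    For context, and not from the original question: NAudio is a purely managed assembly, so a ClickOnce package can carry it as an ordinary reference with no separate driver install, and it exposes the raw sample pipeline via IWaveProvider/WaveStream. A minimal playback sketch, assuming a recent NAudio version and a local WAV file:

        using System.Threading;
        using NAudio.Wave;   // managed audio; ships as a plain .NET assembly

        class PlayDemo
        {
            static void Main()
            {
                using (var reader = new WaveFileReader("test.wav"))  // raw samples are accessible here
                using (var output = new WaveOutEvent())
                {
                    output.Init(reader);
                    output.Play();
                    while (output.PlaybackState == PlaybackState.Playing)
                        Thread.Sleep(100);   // crude wait; real apps use the PlaybackStopped event
                }
            }
        }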

  • Get signal names from numbers in Python

    - by Brian M. Hunt
    Is there a way to map a signal number (e.g. signal.SIGINT) to its respective name (i.e. "SIGINT")? I'd like to be able to print the name of a signal in the log when I receive it; however, I cannot find a map from signal numbers to names in Python, e.g.:
        import signal

        def signal_handler(signum, frame):
            logging.debug("Received signal (%s)" % sig_names[signum])

        signal.signal(signal.SIGINT, signal_handler)
    for some dictionary sig_names, so that when the process receives SIGINT it prints: Received signal (SIGINT). Thank you.

    Read the article

  • Which web containers install themselves well as a Windows service?

    - by Thorbjørn Ravn Andersen
    We have had a web application product for several years, and used Tomcat to deploy it under Windows as it registers itself as a Windows service so it starts and stops automatically. We may now need more JEE facilities than are provided by Tomcat (we are very tempted by the JEE 6 things in the container), so the question is which open source JEE containers work well as Windows services. Since GlassFish is the only JEE 6 implementation right now, it would be nice if it works well, but I'd like to hear experiences and not just what I can read from brochures. If not, what else do people use? EDIT: This goes for web containers too, and not just JEE containers. We will probably keep the necessary stack included until we find the right container and it gets JEE 6 support. EDIT: I want this to work as distributed. I'm not interested in manually hacking wrappers etc., but want the installation process to handle the creation and removal of the service.

    Read the article

  • Suggestions on bug lifecycle and release management

    - by Andrew Edgecombe
    Our group is currently analysing our procedures for managing formal software releases and integrating them with a bug lifecycle. What bug lifecycle model do you use? And why? For example, assume that formal releases are generated for QA once per week. At what point do you mark bugs as resolved? When the developer has committed their changes? When the changes have been reviewed and merged into the release branch? When the formal release candidate has been created? And what process, or features of your bug tracking software, do you use for tracking this? Are there any tips/suggestions/recommendations that you can share?

    Read the article

  • OpenCV's cvKMeans2 - choosing clusters

    - by Goffrey
    Hi, I'm using cvKMeans2 for clustering, but I'm not sure how it works in general - specifically the part about choosing clusters. I thought it set the initial cluster positions from the given samples, which would mean that at the end of the clustering process every cluster would have at least one sample - i.e. the output array of cluster labels would contain the full range of numbers (for 100 clusters, numbers 0 to 99). But as I checked the output labels, I realised that some labels weren't used at all and only some were used. So, does anyone know how it works? Or how should I use the parameters of cvKMeans2 to do what I want (because I'm not sure I'm using them right)? I'm calling cvKMeans2 with the optional parameters as well:
        cvKMeans2(descriptorMat, N_CLUSTERS, clusterLabels,
                  cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0),
                  1, 0, 0, clusterCenters, 0)
    Thanks for any advice!

    Read the article

  • scrapy - python question

    - by tom smith
    Hi. Maybe not the correct place to post, but I'm going to try anyway! I've got a couple of test Python parsing scripts that I created. They work well enough for me to test what I'm working on. However, I recently came across the Python framework Scrapy, which is used for web scraping. My app runs in a distributed process, across a testbed of multiple servers. I'm trying to understand Scrapy, to see if it provides benefits over what I'm doing. So, if possible, I'd really like to talk with a few people who are grounded in, or who use, Scrapy. Thanks -tom [email protected]

    Read the article

  • WCF Windows Service - Long operations/Callback to calling module

    - by A9S6
    I have a Windows Service that takes the names of a bunch of files and does operations on them (zip/unzip, updating the DB, etc.). The operations can take time depending on the size and number of files sent to the service. (1) The module that is sending a request to this service waits until the files are processed. I want to know if there is a way to provide a callback in the service that will notify the calling module when it has finished processing the files. Please note that multiple modules can call the service at a time to process files, so the service will need to provide some kind of TaskId, I guess. (2) If a service method is called and is running and another call is made to the same service, how will that call be processed (I think there is only one thread associated with the service)? I have seen that when the service is taking time processing a method, the threads associated with the service begin to increase. (A duplex-contract sketch follows this entry.)

    Read the article
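
    Not part of the original post: one way to get the notification asked about in (1) is a WCF duplex (callback) contract, where the service captures the caller's callback channel and invokes it when the work is done; the behaviour asked about in (2) is governed by InstanceContextMode and ConcurrencyMode. A rough sketch over a duplex-capable binding (e.g. netTcpBinding), with all contract and member names invented for illustration:

        using System;
        using System.ServiceModel;

        [ServiceContract(CallbackContract = typeof(IFileProcessingCallback))]
        public interface IFileProcessingService
        {
            [OperationContract(IsOneWay = true)]
            void ProcessFiles(Guid taskId, string[] fileNames);
        }

        public interface IFileProcessingCallback
        {
            [OperationContract(IsOneWay = true)]
            void ProcessingCompleted(Guid taskId);
        }

        // PerCall + Multiple lets several callers be served concurrently instead of
        // queueing behind a single service instance.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class FileProcessingService : IFileProcessingService
        {
            public void ProcessFiles(Guid taskId, string[] fileNames)
            {
                // capture the caller's callback channel before starting the long work
                var callback = OperationContext.Current.GetCallbackChannel<IFileProcessingCallback>();

                // ... zip/unzip, update the database, etc. ...

                callback.ProcessingCompleted(taskId);   // notify the calling module
            }
        }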

  • URL on apache server does not default to the .php file after / has been added

    - by jeffkee
    Generally a URL that looks like this: http://www.domain.com/product.php/12/ will open up product.php and pass the /12/ along as request parameters, which my PHP script can then process to pull out the right product info. However, when I migrated this whole site, after developing it, to a new server, I get a 404 error, because that server does not fall back to the parent file when the requested path does not exist as a directory. I vaguely remember learning that this is generally a common Apache feature, but I can't seem to recall how to set it up or how to manipulate it. If there's an .htaccess method to achieve this, that would be great. (See the note after this entry.)

    Read the article
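
    Not from the question itself: the Apache behaviour being described (trailing path info handed on to a script) is usually controlled by the core AcceptPathInfo directive, which some hosts switch off. A one-line .htaccess sketch, assuming overrides are allowed on the new server:

        AcceptPathInfo On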

  • Should I use Mutex OR Critical Section for Windows Mobile RIL

    - by afriza
    Hi, I am using the Radio Interface Layer (RIL) native API in a Windows Mobile application. In this API, the return values / results of most functions are not returned immediately but are passed through a callback function which is passed to the RIL API. Some usage examples are found in the XDA Development Tools and the Google Gears Geolocation API. My question is: in these two examples, a mutex is used to guard the data instead of other synchronization objects. Now, will a critical section do fine here in the use cases described by both examples? Which thread or process will actually call the callback functions?

    Read the article

  • Why isn't my assets folder being installed on emulator?

    - by Brad Hein
    Where are my assets being installed to? I use an assets folder in my new app, with two files in the folder. When I install my app on the emulator, I cannot access my assets, and furthermore I cannot see them on the emulator filesystem. I extracted my APK and confirmed the assets folder exists:
        $ ls -ltr assets/
        total 16
        -rw-rw-r--. 1 brad brad 1050 2010-05-20 00:33 schema-DashDB.sql
        -rw-rw-r--. 1 brad brad 9216 2010-05-20 00:33 dash.db
    On the emulator, there is no assets folder:
        # pwd
        /data/data/com.gtosoft.dash
        # ls -l
        drwxr-xr-x system system 2010-05-20 00:46 lib
        #
    I just want to package a pre-built database with my app and then open it to obtain data when needed. I just tried it on my Moto Droid and was unable to access/open the DB, just like on the emulator: DBFile=/data/data/com.gtosoft.dash/assets/dash.db. Building the DB on the fly from a schema file is out of the question because it's such a slow process (about 5-10 statements per second is all I get for throughput).

    Read the article

  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the PermGen usage stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from PermGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are: Does GC include the code cache? What could cause a code cache memory leak, specifically? Is there a bug in JDK 6 in this area?

    Read the article

  • Scheme implementations - what does it mean?

    - by JDelage
    Hi, I'm a beginning student in CS, and my classes are mostly in Java. I'm currently going through "The Little Schemer" as a self-study, and in the process of finding out how to do that I have found numerous references to "implementations" of Scheme. My question is: what are implementations? Are they sub-dialects of Scheme, or is it something else (DrScheme seems to allow for different "flavors" of the language)? Is it just the name given to any given ecosystem incorporating an IDE, interpreter, interactive tool and the like? Do all other languages (e.g., Java) also have a variety of "implementations", or is this something reserved for "open" languages? Thank you, Joss Delage

    Read the article

  • Calculating determinant by hand

    - by ldigas
    Okay, this is only half programming, but let's see how you are on terms with manual calculations. I believe many of you did this at university while taking "linear systems" ... the problem is it's been so long I can't remember how to do it any more. I know quite a few algorithms for calculating determinants, and they all work fine ... for large systems, where one would never try to do it manually. Unfortunately, I'm soon going into an exam where I do have to calculate it manually, up to systems of order 5. So, I have a K(omega) matrix that looks like this:
        [2-(omega^2)*c   -4              2               0               0            ]
        [-2              5-(omega^2)*c   -4              1               0            ]
        [1               -4              6-(omega^2)*c   -4              1            ]
        [0               1               -4              5-(omega^2)*c   -2           ]
        [0               0               2               -4              2-(omega^2)*c]
    and I need all the omegas which satisfy the det[K(omega)]=0 criterion. What would be a good way to calculate it so it can be repeated in a manual process? (See the expansion formula after this entry.)

    Read the article
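
    Not from the question: for a hand calculation the usual tool is cofactor (Laplace) expansion along the row or column with the most zeros; substituting lambda = (omega^2)*c keeps the algebra readable. In LaTeX, expanding along row i:

        \det K \;=\; \sum_{j=1}^{n} (-1)^{i+j}\, k_{ij}\, M_{ij}

    where M_{ij} is the minor obtained by deleting row i and column j. For the 5x5 above, expanding along the first row leaves only the j = 1, 2, 3 terms (since k_14 = k_15 = 0), and det K(omega) = 0 becomes a degree-5 polynomial in lambda whose roots give the omegas.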

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
    The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary.  IT interests intentionally live in the past.  On the other hand, developers, and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech’s future, and basing its present-day strategy upon it.  Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of “it takes all kinds.”  Different subcultures have different tendencies.  So be it. Microsoft, with its Windows operating system (OS), can’t take such a laissez-faire view of the world though.  Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS.  It must indulge and nourish developers’ fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on-schedule in Redmond and will travel very slowly to end users. With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later.  Meanwhile, there’s a technology that runs on today’s Windows 7, will continue to run in the desktop mode of Windows 8 (the next version’s codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack.  That technology is Silverlight.  And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it. But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology’s last, not everyone is flocking to it; in fact some are fleeing from it.  Is this sensible?  Is it not unprecedented?  What options does it lead to?  What’s the right way to think about the situation? Is v5.0 really the last major version of the technology called Silverlight?  We don’t know.  But Scott Guthrie, the “father” and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him.  John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely.  About a year ago, when initial suspicion of Silverlight’s demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers’ fears; but now they’ve moved on. So read into that what you will and let’s suppose, for the sake of argument, speculation that Silverlight’s days of major revision and iteration are over now is correct.  Let’s assume the shine and glimmer has dimmed.  Let’s assume that any Silverlight application written today, and that therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years, if the application’s platform needs to stay current. Is this really so different from any technology investment we make?  
    Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence. What differs from project to project is how near-term that obsolescence is and how disruptive the change will be. The shift from .NET 1.1 to 2.0 was incremental. Some of the further changes were too. But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental. Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle. The finer points of this subject are covered nicely in Magenic's excellent white paper "Assessing the Windows 8 Development Platform." As the authors of that paper (including Rocky Lhotka) point out, Silverlight code won't just "port" to Windows 8. And, no, Silverlight user interfaces won't either; Metro always supports XAML, but that relationship is not commutative. But the concepts, the syntax, the architecture and developers' skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely. That's not a coincidence. It's not an accident. This is a protected transition. It's not a slap in the face. There are a few things that are unnerving about this transition, which make it seem markedly different from others:
    - The assumed end of the road for Silverlight is something many think they can see. Instead of being ignorant of the technology's expiration date, we believe we know it. If ignorance is bliss, it would seem our situation lacks it.
    - The new technology involving WinRT and Metro involves a name change from Silverlight.
    - .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age. That's equivalent to 80 in human years, or so many fear.
    My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead. I understand the logic behind that. I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus. But for a great many scenarios, I don't agree with it. HTML 5 clients, no matter how impressive their interactivity and the emulation of native application interfaces they present may be, are still second-class clients. They are getting better, especially when hardware acceleration and fast processors are involved. But they still lag. They still feel like they're emulating something, like they're prototypes, like they're not comfortable in their own skins. They are based on compromise, and they feel compromised too. HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight or even more primitive tooling for iOS or Android. HTML's roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity. I do not necessarily think that problem is insurmountable, but it's here today. If you're building line-of-business applications, you need a first-class client and you need productivity. Lack of productivity increases your costs and worsens your backlog. A second-class client will erode user satisfaction, which is never good.
Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client over a native one is hard to identify and, notably, doing so at this juncture in the industry is unpopular.  Why would you fault a technology that everyone believes is revolutionary?  Instead, user disenchantment will remain latent and yet will add to the malaise caused by slower development. If you’re an ISV and you’re coveting the reach of running multi-platform, it’s a different story.  You’ve likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift.  You’re deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable. But no matter who you are, it’s important to take stock of the situation and do it accurately.  Continued, but merely incremental changes in a development model lead to conservatism and general lack of innovation in the underlying platform.  Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility.  Arguably, that’s already happened to Windows.  The change Windows 8 brings is necessary and overdue.  The marked changes in using .NET if we’re to build applications for the new OS are inevitable.  We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable. That path takes us to a place called WinRT, rather than a place called Silverlight.  But considering everything that is changing for the good, the number of disruptive changes is impressively minimal.  The name may be changing, and there may even be some significance to that in terms of Microsoft’s internal management of products and technologies.  But as the consumer, you should care about the ingredients, not the name.  Turkish coffee and Greek coffee are much the same. Although you’ll find plenty of interested parties who will find the names significant, drinkers of the beverage should enjoy either one.  It’s all coffee, it’s all sweet, and you can tell your fortune from the grounds that are left at the end.  Back on the software side, it’s all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end.  Coffee drinkers wouldn’t switch to tea.  Why should XAML developers switch to HTML?

    Read the article

  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed through the process of signing up for various freelance and job seeking or professional network sites that they all want your resume/CV data. And I am really getting tired of copy/pasting this data, especially since I have a website. Is there a standard format or service somewhere that I do not know about for this data? If not, does anyone want to help me build something like this out? I'm thinking a service similar to OpenID that allows you to maintain a central resume to have your data pulled from. No more filling in the same data over and over, and having to maintain the copies on any of the plethora of websites that have that data. Takers?

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch oriented architecture. I have very good experiences with service oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals, HTTP calls, file drops, time schedules, Tibco messages and others to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a RESTful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for the ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be ornamented with the attribute ServiceContract.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. A detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive names for methods. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. I think that since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occured during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occured during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 of the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle of dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the parameter "rows" of the method. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of the execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the below command into your browser: http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12 When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you see, implementing a service oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control. We don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner. It has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

    Read the article

  • Service broker with SqlNotificationRequest

    - by user171523
    I am in the process of evaluating Service Broker with SQL notifications for my project. My requirement is: a user places an order from System A and it updates the Order table; as soon as the order is placed I need to notify System B. I have done a quick POC with a trigger, Service Broker and SQL notification in ADO.NET, and it is working as I expected. What I would like to know from the group: A) What are the best practices I need to follow for this? B) What are the disadvantages of the above approach, if any? C) Are there any disadvantages of using the triggers? If so, what are they for the above approach? The Order table will get around 1000 to 1500 orders from System A every day. I would also like to know about the performance of the above approach. (An ADO.NET notification sketch follows this entry.)

    Read the article
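
    Not from the original post: on the ADO.NET side this pattern is commonly wired up with SqlDependency, which rides on Service Broker queues under the covers. A minimal sketch; the connection string, table and columns are invented for illustration, and the query must follow the notifiable-query rules (two-part table name, explicit column list):

        using System;
        using System.Data.SqlClient;

        class OrderWatcher
        {
            const string ConnStr = "Data Source=.;Initial Catalog=OrdersDb;Integrated Security=SSPI";

            static void Main()
            {
                SqlDependency.Start(ConnStr);   // requires Service Broker enabled on the database

                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
                {
                    var dependency = new SqlDependency(cmd);
                    dependency.OnChange += (s, e) =>
                    {
                        // fires once per subscription; re-run the command to keep listening
                        Console.WriteLine("Orders changed: {0}", e.Info);
                    };

                    conn.Open();
                    cmd.ExecuteReader().Close();   // executing the command registers the subscription
                }

                Console.ReadLine();
                SqlDependency.Stop(ConnStr);
            }
        }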

  • Stop IIS 7 Application Pool from build script

    - by Andrew Hanson
    How can I stop and then restart an IIS 7 application pool from an MSBuild script running inside TeamCity? I want to deploy our nightly builds to an IIS server for our testers to view. I have tried using appcmd like so:
        appcmd stop apppool /apppool.name:MYAPP-POOL
    ... but I have run into elevation issues in Windows 2008 that have so far stopped me from being able to run that command from my TeamCity build process, because Windows 2008 requires elevation in order to run appcmd. If I do not stop the application pool before I copy my files to the web server, my MSBuild script is unable to copy the files to the server. Has anybody else seen and solved this issue when deploying web sites to IIS from TeamCity? (An MSBuild sketch follows this entry.)

    Read the article
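
    Not from the question: for reference, the appcmd calls are typically wrapped in MSBuild Exec tasks like the sketch below; it only works if the TeamCity build agent runs under an account with administrative rights on the IIS box, so the elevation problem still has to be solved at the agent level. The pool name is taken from the question, everything else is assumed:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <AppCmd>$(WINDIR)\System32\inetsrv\appcmd.exe</AppCmd>
          </PropertyGroup>

          <Target Name="StopAppPool">
            <Exec Command="&quot;$(AppCmd)&quot; stop apppool /apppool.name:MYAPP-POOL" />
          </Target>

          <Target Name="StartAppPool">
            <Exec Command="&quot;$(AppCmd)&quot; start apppool /apppool.name:MYAPP-POOL" />
          </Target>
        </Project>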

  • .NET MEF newbie question

    - by steve.macdonald
    I am missing something basic when it comes to using MEF. I got it working using samples and a simple console app where everything is in the same assembly. Then I put some imports and exports in a separate project which contains various entities. I want to use these entities in an MSTest, but the composition is never actually done. When I move the composition stuff into the constructor of the entity in question it works, but that's obviously wrong. Does GetExecutingAssembly only "see" the test process? What am I missing regarding containers? I tried putting the container in a using block in the test, without luck. The MEF docs are still very scant and I can't find a simple example of an application (or MSTest) which uses entities from a different project... (A catalog sketch follows this entry.)

    Read the article
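
    Not part of the original question: inside an MSTest run, Assembly.GetExecutingAssembly() is the test assembly, so a catalog built from it never sees exports that live in the entities project. The usual fix is to point the catalog at that assembly explicitly. A rough sketch; the entity type names are placeholders:

        using System.ComponentModel.Composition;          // [Import]/[Export] attributes
        using System.ComponentModel.Composition.Hosting;  // catalogs and CompositionContainer

        public class EntityConsumer
        {
            [Import]
            public ISomeEntity Entity { get; set; }   // placeholder for an export from the entities project

            public void Compose()
            {
                // Aggregate the assembly that really contains the exports with the current one.
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(typeof(SomeEntity).Assembly),      // entities project
                    new AssemblyCatalog(typeof(EntityConsumer).Assembly)); // this (test) assembly

                using (var container = new CompositionContainer(catalog))
                {
                    container.ComposeParts(this);   // satisfies the [Import]s on this instance
                }
            }
        }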

  • Rails application settings?

    - by Danny McClelland
    Hi everyone, I am working on a Rails application that has user authentication which provides an administrator account. Within the administrator account I have made a page for sitewide settings. I was wondering what the norm is for creating these settings. Say, for example, I would like one of the settings to change the name of the application, or change the colour of the header. What I am looking for is for someone to explain the basic process/method - not necessarily specific code - although that would be great! Thanks, Danny

    Read the article
