Search Results

Search found 23311 results on 933 pages for 'volume shadow service'.

Page 17/933

  • How do I run a non-service program as a service on Windows 2008 Server?

    - by Lasse V. Karlsen
    I found this page that tells me how to set up Windows Live Sync as a background service on Windows 2003 Server; unfortunately, the 2003 resource kit tools it mentions do not work on 2008 Server: http://mswhs.freeforums.org/windows-live-sync-as-a-service-on-whs-t623.html Also, there are apparently no resource kit tools downloadable for Windows 2008 Server that I can find. Perhaps someone has a link to the relevant tools? (INSTSRV.EXE and SRVANY.EXE.) Using just plain SC.EXE doesn't work, as I assume the program is then required to be a proper service, not just any executable. What other options do I have? Can I use the Task Scheduler on 2008 Server to start the Windows Live Sync executable, and will that work? I need the executable to stay running even after I've logged off from the server.
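
    For reference, the classic SRVANY wiring from the 2003 resource kit is usually set up roughly as below once you have SRVANY.EXE on the box. This is only a sketch, not a tested recipe: the service name, the srvany path and the Windows Live Sync path are placeholders.

        REM Create a wrapper service whose binary is srvany.exe
        sc create LiveSyncWrapper binPath= "C:\Tools\srvany.exe" start= auto DisplayName= "Windows Live Sync (wrapped)"

        REM Tell srvany which executable to launch (Parameters\Application)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LiveSyncWrapper\Parameters" /v Application /t REG_SZ /d "C:\Program Files\Windows Live\Sync\WindowsLiveSync.exe"

        REM Start it
        sc start LiveSyncWrapper

    The Redmine write-up further down this page uses the same srvany Parameters registry pattern with the NT resource kit tools.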

    Read the article

  • How to start/stop service with Apache2 on Ubuntu

    - by user142512
    Using Apache, I'd like to be able to start and stop a service on the same server. Essentially, I'm looking for a way to allow Apache (or some script called by Apache) to call sudo service XXXX start. I realize there are severe security implications with this, and I'm looking to minimize the possible effects. There is only a single service that I need to do this for. I've seen some solutions that involve "hacking" the setuid (C/Perl wrapper), others involved editing the /etc/sudoers file. Is there a better way? many thanks, S.
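
    For reference, the sudoers route is usually scoped down to a single NOPASSWD rule for exactly the one command Apache needs and nothing else. A minimal sketch, assuming the default www-data Apache user on Ubuntu and a placeholder service name of myservice:

        # Always edit sudoers via visudo, never directly:
        sudo visudo -f /etc/sudoers.d/apache-myservice

        # Contents of /etc/sudoers.d/apache-myservice:
        #   www-data ALL=(root) NOPASSWD: /usr/sbin/service myservice start, /usr/sbin/service myservice stop

        # The script called by Apache can then run:
        sudo /usr/sbin/service myservice start

    Keeping the rule limited to these two exact command lines narrows the attack surface compared with giving Apache a general sudo right.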

    Read the article

  • Installing GitBlit GO as Service in Ubuntu Server 14.04

    - by Luis Masuelli
    I downloaded it (version 1.6.0), unpacked it into /opt/gitblit (Ubuntu Server 14.04.1), configured HTTP to port 8280 and disabled HTTPS by assigning it port 0 (I expose it over HTTPS using nginx). I created a gitblit user and added it to the 'sudo' group by running sudo adduser gitblit sudo (the gitblit user has a strong password). I installed it as a service by running /opt/gitblit/install-service-ubuntu.sh and tried to start it with sudo service gitblit start. The message "Starting gitblit server" appears, and it's the only message. When I browse, on the same machine, to http://127.0.0.1:8280, the connection cannot be made. When I run sudo netstat -anp | grep 8280, nothing appears. I see no error messages, but the server is not starting. Question: What am I missing?
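
    One way to get an actual error out of a silent init script like this is to launch Gitblit in the foreground as the same user the service runs as, so startup failures print to the console. The jar name, --baseFolder argument and data folder below are assumptions based on a default Gitblit GO unpack, so adjust them to what is really in /opt/gitblit:

        cd /opt/gitblit
        # Run in the foreground as the service user so any startup error is visible
        sudo -u gitblit java -jar gitblit.jar --baseFolder data

        # Also worth comparing: does the installed init script use the same user and folder?
        head -n 40 /etc/init.d/gitblit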

    Read the article

  • Trace Mobile Service Serving 20,000+ Requests Per Month

    - by Gopinath
    We introduced Trace Mobile Service in April 2010 and we are glad to announce that the service is now processing 20,000+ requests per month. After a long time I looked at the statistics today and was overwhelmed to see the number of trace requests processed by the service: 24,282, 23,781 and 18,475 in the months of January '11, December '10 and November '10 respectively. I'm also glad to announce that this service contributes close to 10% of our revenues. Here is a table that provides stats for the past 7 months. For those who don't know about this service: it is a tiny, yet very useful service for tracing information about Indian mobile phones. Usage of this service is very simple: enter any Indian mobile phone number and it will instantaneously tell you the location and the service provider of the phone. Visit Trace Mobile Service or read Introducing "Trace Mobile Information" Service for more details. This article titled, Trace Mobile Service Serving 20,000+ Requests Per Month, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • CloudFormation - How to start a Windows Service with cfn-init

    - by Edwin
    I'm creating a CloudFormation stack that will install and start a service on a Windows instance. I've figured out how to install the service, but how do I start it using cfn-init? The examples all seem to use Linux, as there is a reference to "sysvinit". How do I structure AWS::CloudFormation::Init so that cfn-init will start Windows services after installing them? Do I leave in the sysvinit key, replace it with something else, or take it out? PS: I'm referring to starting services by providing information to AWS::CloudFormation::Init.services. Also, it would be nice to know how "packages" work for Windows. AWS's announcement says that packages are supported on Windows, but there is no Windows-specific documentation.

    Read the article

  • jQuery CSS Property Monitoring Plug-in updated

    - by Rick Strahl
    A few weeks back I had talked about the need to watch properties of an object and be able to take action when certain values changed. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects. One example is a drop shadow - if I add a shadow behavior to an object I want the shadow to be pinned to that object, so when that object moves I also want the shadow to move with it, or when the panel is hidden the shadow should hide with it - automatically, without having to explicitly hook up monitoring code to the panel. For example, in my shadow plug-in I can now do something like this (where el is the element that has the shadow attached and sh is the shadow):

        if (!exists)  // if shadow was created
            el.watch("left,top,width,height,display", function() {
                if (el.is(":visible"))
                    $(this).shadow(opt);   // redraw
                else
                    sh.hide();
            }, 100, "_shadowMove");

    The code now monitors several properties and if any of them change the provided function is called. So when the target object is moved or hidden or resized the watcher function is called and the shadow can be redrawn or hidden in the case of visibility going away. So if you run any of the following code:

        $("#box")
            .shadow()
            .draggable({ handle: ".blockheader" });

        // drag around the box - shadow should follow

        // hide the box - shadow should disappear with box
        setTimeout(function() { $("#box").hide(); }, 4000);

        // show the box - shadow should come back too
        setTimeout(function() { $("#box").show(); }, 8000);

    This can be very handy functionality when you're dealing with objects or operations that you need to track generically and there are no native events for them. For example, with a generic shadow object that attaches itself to any other element there's no way that I know of to track whether the object has been moved or hidden, either via some UI operation (like dragging) or via code. While some UI operations like jQuery.ui.draggable allow events to fire when the mouse is moved, nothing of the sort exists if you modify locations in code. Even tracking the object in drag mode is hardly generic behavior - a generic shadow implementation can't know when dragging is hooked up. So the watcher provides an alternative that basically gives an Observer-like pattern that notifies you when something you're interested in changes. In the watcher hookup code (in the shadow() plugin) above, a check is made whether the object is visible; if it is, the shadow is redrawn, otherwise the shadow is hidden. The first parameter is a list of CSS properties to be monitored, followed by the function that is called. The function called receives this as the element that's been changed and receives two parameters: the array of watched objects with their current values, plus an index to the object that caused the change function to fire.

    How does it work

    When I wrote about this last time I started out with a simple timer that would poll for changes at a fixed interval with setInterval(). A few folks commented that there is a DOM API - DOMAttrModified in Mozilla and propertychange in IE - that allows notification whenever any property changes, which is much more efficient and smooth than the setInterval approach I used previously. On browsers that support these events (Firefox and IE basically - WebKit has the DOMAttrModified event but it doesn't appear to work) the shadow effect is instant - no 'drag behind' of the shadow.
    Running on a browser that doesn't support these events still uses setInterval(), and the shadow movement is slightly delayed, which looks sloppy. There are a few additional changes to this code - it also supports monitoring multiple CSS properties now, so a single object can monitor a host of CSS properties rather than one object per property, which is easier to work with. For display purposes position, bounds and visibility will be common properties that are to be watched. Here's what the new version looks like:

        $.fn.watch = function (props, func, interval, id) {
            /// <summary>
            /// Allows you to monitor changes in a specific
            /// CSS property of an element by polling the value.
            /// when the value changes a function is called.
            /// The function called is called in the context
            /// of the selected element (ie. this)
            /// </summary>
            /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>
            /// <param name="func" type="Function">
            /// Function called when the value has changed.
            /// </param>
            /// <param name="interval" type="Number">
            /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
            /// Determines the interval used for setInterval calls.
            /// </param>
            /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>
            /// <returns type="jQuery" />
            if (!interval)
                interval = 200;
            if (!id)
                id = "_watcher";

            return this.each(function () {
                var _t = this;
                var el$ = $(this);
                var fnc = function () { __watcher.call(_t, id) };
                var itId = null;

                var data = {
                    id: id,
                    props: props.split(","),
                    func: func,
                    vals: [props.split(",").length],
                    fnc: fnc,
                    origProps: props,
                    interval: interval
                };
                $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });

                el$.data(id, data);

                hookChange(el$, id, data.fnc);
            });

            function hookChange(el$, id, fnc) {
                el$.each(function () {
                    var el = $(this);
                    if (typeof (el.get(0).onpropertychange) == "object")
                        el.bind("propertychange." + id, fnc);
                    else if ($.browser.mozilla)
                        el.bind("DOMAttrModified." + id, fnc);
                    else
                        itId = setInterval(fnc, interval);
                });
            }

            function __watcher(id) {
                var el$ = $(this);
                var w = el$.data(id);
                if (!w) return;
                var _t = this;

                if (!w.func)
                    return;

                // must unbind or else unwanted recursion may occur
                el$.unwatch(id);

                var changed = false;
                var i = 0;
                for (i; i < w.props.length; i++) {
                    var newVal = el$.css(w.props[i]);
                    if (w.vals[i] != newVal) {
                        w.vals[i] = newVal;
                        changed = true;
                        break;
                    }
                }
                if (changed)
                    w.func.call(_t, w, i);

                // rebind event
                hookChange(el$, id, w.fnc);
            }
        }
        $.fn.unwatch = function (id) {
            this.each(function () {
                var el = $(this);
                var fnc = el.data(id).fnc;
                try {
                    if (typeof (this.onpropertychange) == "object")
                        el.unbind("propertychange." + id, fnc);
                    else if ($.browser.mozilla)
                        el.unbind("DOMAttrModified." + id, fnc);
                    else
                        clearInterval(id);
                }
                // ignore if element was already unbound
                catch (e) { }
            });
            return this;
        }

    There are basically two jQuery functions - watch and unwatch.

    jQuery.fn.watch(props, func, interval, id)
    Starts watching an element for changes in the properties specified.

    props
    The CSS properties that are to be watched for changes. If any of the specified properties changes the function specified in the second parameter is fired.

    func (watchData, index)
    The function fired in response to a changed property. Receives this as the element changed and an object that represents the watched properties and their respective values. The first parameter is passed in this structure:

        { id: itId, props: [], func: func, vals: [] }

    The second parameter is the index of the changed property, so data.props[i] or data.vals[i] gets the property value that has changed.

    interval
    The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds.

    id
    An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.

    jQuery.fn.unwatch(id)
    Unhooks watching of the element by disconnecting the event handlers.

    id
    Optional watcher id that was specified in the call to watch. This value can be omitted to use the default value of _watcher.

    You can also grab the latest version of the code for this plug-in as well as the shadow in the full library at: http://www.west-wind.com:8080/svn/jquery/trunk/jQueryControls/Resources/ww.jquery.js
    watcher has no other dependencies, although it lives in this larger library. The shadow plug-in depends on watcher.

    © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • SOA, Cloud + Service Technology Symposium Call for papers is OPEN

    - by JuergenKress
    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops – all with an emphasis on realizing modern service technologies and practices in the real world.

    Call for papers
    The 5th International SOA, Cloud + Service Technology Symposium brings together lessons learned and emerging topics from SOA, cloud computing and service technology projects, practitioners and experts. The two-day conference will be organized into the following primary tracks:

        - Cloud Computing Architecture & Patterns
        - New SOA & Service-Orientation Practices & Models
        - Emerging Service Technology Innovation
        - Service Modeling & Analysis Techniques
        - Service Infrastructure & Virtualisation
        - Cloud-based Enterprise Architecture
        - Business Planning for Cloud Computing Projects
        - Real World Case Studies
        - Semantic Web Technologies (with & without the Cloud)
        - Governance Frameworks for SOA and/or Cloud Computing Projects
        - Service Engineering & Service Programming Techniques
        - Interactive Services & the Human Factor
        - New REST & Web Services Tools & Techniques

    Please submit your paper no later than July 15, 2012.

    SOA Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog  Twitter  LinkedIn  Mix  Forum
    Technorati Tags: SOA Symposium, SOA Cloud Symposium, Thomas Erl, Call for papers, SOA Suite, Oracle, OTN, SOA Partner Community, Jürgen Kress, SOA, Cloud + Service Technology Symposium

    Read the article

  • One-Way Backup Service? [closed]

    - by Jon Rodriguez
    Up until a month ago, my girlfriend had used MobileMe to back up all the files on her MacBook. This turned out terribly when a quirk of MobileMe caused it to erase all of her files on MobileMe, and then sync the newly-erased MobileMe down to her computer, erasing everything. A week's worth of college essays and CS homework were gone. Now, I am terrified of any commercial cloud-backup solution because of the possibility of this happening. Going off the list provided in these answers, could you please help me find a good backup service that is completely one-way? I want a service where there is literally not a single line of code that has the possibility of writing to my computer's drive. I want a pure one-way backup service.

    Read the article

  • Change Windows Service Priority

    - by SchlaWiener
    I have a Windows service that needs to run with high priority. At the end of the day I want to use this script to modify the priority after service startup:

        Const HIGH = 256
        strComputer = "."
        strProcess = "BntCapi2.exe"
        Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
        Set colProcesses = objWMIService.ExecQuery _
            ("Select * from Win32_Process Where Name = '" & strProcess & "'")
        For Each objProcess in colProcesses
            objProcess.SetPriority(HIGH)
        Next

    But currently I am not able to change the priority, even with Task Manager. Task Manager throws an "Access Denied" error, but I am logged on as administrator and I changed the user account of the service to administrator, too. I still get the "Access Denied" message when trying to change the priority. Any ideas what permission I need to do that?

    Read the article

  • System File Checker vs Service Pack Reinstall

    - by Nixphoe
    When trying to repair slow workstations, I've found that running sfc /scannow helps quite a lot in a few of my environments running really old computers. I've also seen recommendations to reinstall the last service pack after software installation to help keep the system stable. That makes sense, as it would replace a lot of the DLL files with the ones that come with the service pack. They both seem to do the same thing, but SFC will sometimes ask for a disk, where the service pack will not. What is the main difference between the two?

    Read the article

  • Can't create new Volume on Unallocated Space

    - by natediggs
    I installed Windows Server 2008 R2 on a Dell server that has one volume that is a 6 TB RAID 5 array. I created a 120 GB install volume and I'm now trying to create a 5 TB data volume. For whatever reason Windows will not allow me to create a new volume out of all of the unallocated space. Windows will allow me to create a new volume out of one 2 TB block of unallocated space, but not the remaining 3.5 TB block. (I tried to post a screenshot but I was blocked.) If I right-click on the 1949.85 GB block of space there is the option to create a new volume. If I click on the 3539.5 GB block of space that option is grayed out. If I go into diskpart and try to create a new partition, diskpart says that there is only 1949 GB free on the volume. I know this process works because I did the exact same thing on another server that is the exact same hardware configuration, on which I used the exact same Server 2008 R2 install image. Any help would be greatly appreciated. Nate
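
    A 2 TB ceiling like this is often a symptom of the disk being initialized with an MBR partition table, which tops out at 2 TB per disk. If that turns out to be the case here, and the data disk holds nothing yet, a rough diskpart sketch for switching it to GPT looks like this - the disk number is a placeholder (confirm it with list disk first), and clean wipes the selected disk:

        diskpart
        DISKPART> list disk
        DISKPART> rem select the 6 TB RAID 5 data disk, not the boot disk
        DISKPART> select disk 1
        DISKPART> rem clean destroys everything on the selected disk
        DISKPART> clean
        DISKPART> convert gpt
        DISKPART> create partition primary
        DISKPART> format fs=ntfs quick
        DISKPART> assign letter=E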

    Read the article

  • WiX - Modifying an existing service to be dependent on the service I am installing

    - by Paul Nearney
    Hi all, Using WiX 3, it's trivial to ensure that a Windows service being installed is given a dependency on a service that is already installed on the target machine, but I need to do the opposite - i.e. as part of my install I need to modify the service dependencies of an existing service (one already installed on the target machine), to ensure that that service is dependent on the service I am installing. Is there a simple way to do this using WiX, or will I need to write a custom action? Many thanks, Paul

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were no public APIs nor contract load programs provided. From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram and details the interface table definitions. The Import program takes data from the interface tables listed below and populates the contracts schema tables:

        OKS_USAGE_COUNTERS_INTERFACE
        OKS_SALES_CREDITS_INTERFACE
        OKS_NOTES_INTERFACE
        OKS_LINES_INTERFACE
        OKS_HEADERS_INTERFACE
        OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

        Mode              - Validate only, Import
        Batch Number      - Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
        Number of Workers - Number of workers required (these are spawned as separate sub-requests)
        Commit size       - Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'. The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors, and will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1). After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system. A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program, and it is not a supported program; it is simply a testing tool.

    Read the article

  • Creating block devices for openstack deployment using MAAS and juju (nova-volume deployment)

    - by Tom Van Hoof
    Hi, I'm currently trying to get an OpenStack deployment working by using MAAS with 9 nodes and juju. To do this I found this guide for Ubuntu 12.04 LTS, https://help.ubuntu.com/community/UbuntuCloudInfrastructure, and followed it as closely as I can. After a vigorous amount of trial and error I finally got to the point where I'm supposed to deploy nova-volume using the "custom" config file. However, when my node is started and shows up as running in the "juju status" report, the service reports that the installation failed. I'm trying to install with juju jitsu, by the way. I think it has something to do with the following statement in the openstack.cfg file:

        nova-volume:
          # This must be a free block device that is writable on the nova-volume host.
          block-device: "xvdb"
          overwrite: "true"

    I did some research and found that (at least I think) this refers to a Xen virtual drive/device, and because the device is not present on the node it's being deployed to, the installation fails. What I don't understand is how I am supposed to have such a block device available on a machine which is completely managed by MAAS. Does anyone here have any experience with this and know of a way to solve this, or am I missing something big here? Some kind of missing link between MAAS and a separate Xen host? My MAAS server is Ubuntu 12.04 LTS Server. All help is welcome. Kind regards, Tom
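
    As a sanity check on the charm config: bare-metal nodes provisioned by MAAS usually expose plain SCSI/SATA device names rather than Xen-style xvd* names, so it can help to look at what the deployed node actually has before pointing the charm at a device. The unit name below is a guess; substitute whatever juju status reports:

        # Open a shell on the deployed unit, then list its block devices
        juju ssh nova-volume/0
        lsblk
        cat /proc/partitions

        # If the spare disk shows up as, say, /dev/sdb rather than xvdb,
        # change the charm config accordingly in openstack.cfg:
        #   block-device: "sdb"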

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address this problem.

    Scenario

    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running reports or pre-generated reports which could be accessed real-time on demand. One of the key aims of the web service was to provide a simple generic interface to allow applications to get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.

    The Problem

    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:

        - Users may be affected and the latency of on-demand reports was significantly slower
        - The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load
        - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle a load that only occurs for a couple-of-hour window each night

    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could, however, make some changes to the way it accessed the reports.

    The Approach

    My idea was to introduce a throttling mechanism between the Web Service Façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS 7, where the MSMQ-activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here.

    The flow of how this would work was as follows:

        - The batch applications send a request for a report to the web service
        - The web service uses NServiceBus to send the message to a queue
        - The NServiceBus Generic Host runs as a Windows service with a message handler which subscribes to these messages
        - The message handler gets the message and accesses the report from Cognos
        - The message handler calls back to the original batch application; this is decoupled because the calling application provides a call-back URL
        - The report gets into the batch application and is processed as normal

    This approach looks something like the below diagram. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:

        - Implement our call-back contract
        - Make a call to the service providing a call-back URL
        - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution

    So this scenario is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following:

        - Simplified interaction with MSMQ
        - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos versus the applications' end-to-end processing time
        - Retries and a way to manage failed messages
        - A high-availability setup

    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion

    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.

    Read the article

  • Installed Apache. Bash: 'service httpd status' does nothing

    - by Josh
    I just installed Apache 2 on CentOS 5 from source (httpd-2.2.15.tar.gz) using:

        ./configure --prefix=/usr/local/apache
        make
        make install
        /usr/local/apache/bin/apachectl start

    I have verified that httpd is running in ps, and verified it is serving the default htdocs page. However, Apache is not found in 'service --status-all' and is not found in '/etc/init.d', so I cannot run 'service httpd status' or '/etc/init.d/httpd start', and other commands. Any ideas what I am missing?
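
    A source build doesn't install a SysV init script, which is why the service tooling can't see it. One common fix on CentOS is sketched below: apachectl works as an init script once it carries a chkconfig header (the runlevels and priorities shown are just reasonable defaults):

        # Copy apachectl in as the init script
        cp /usr/local/apache/bin/apachectl /etc/init.d/httpd

        # Add these two lines near the top of /etc/init.d/httpd, right after the shebang:
        #   # chkconfig: 2345 85 15
        #   # description: Apache HTTP Server built from source
        vi /etc/init.d/httpd

        # Register it and verify
        chkconfig --add httpd
        chkconfig --list httpd
        service httpd status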

    Read the article

  • Running a Web Service

    - by Brandon
    I have a web service created in .NET. I thought I had set everything up correctly, based on a set of instructions I followed, but for some reason I'm getting "Not Found" when I try to load http://localhost/myservice/webservice.aspx in my browser. Someone said that I have to configure IIS for .aspx files, but I don't know how to do that. I need to know how to do that, and I need help setting up this web service.
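
    If the problem really is that IIS has no mapping for ASP.NET extensions, the usual check on IIS 6-era machines is to (re)register ASP.NET with IIS. This sketch assumes .NET 2.0 on a 32-bit install, so adjust the framework folder to whatever version the service targets:

        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

        REM Then confirm in IIS Manager that the "myservice" virtual directory exists
        REM and that it is marked as an application using the same .NET version.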

    Read the article

  • Problems creating service using sc.exe

    - by Shoko
    I have this command to create a service:

        sc create svnserve binpath="\"C:\Program Files (x86)\Subversion\bin\svnserve.exe\" --service --root C:\SVNRoot" displayname="Subversion" depend=tcpip start=auto obj="NT AUTHORITY\LocalService"

    Unfortunately, it seems not to work, even though the syntax is correct. When I run it, I get the usage instructions (which I guess is a way of telling me that I've supplied incorrect arguments, although I have no idea what incorrect argument I might have supplied). Can anyone help me out of my difficulty? Thanks!
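
    One thing worth checking, offered as a likely rather than certain cause: sc.exe parses its options as "name= value", so it expects a space after each equals sign; without it, sc prints its usage text exactly as described. The same command written that way would be:

        sc create svnserve binPath= "\"C:\Program Files (x86)\Subversion\bin\svnserve.exe\" --service --root C:\SVNRoot" DisplayName= "Subversion" depend= Tcpip start= auto obj= "NT AUTHORITY\LocalService"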

    Read the article

  • Add Clamd as a service to CentOS?

    - by Josh
    As I understand it, I think I need to add something to init.d, but I am not sure what to add. At the moment, to start ClamAV I have to run clamd start. I would like it set up as a service so I can start it in runlevel 3. I realize I could probably do this through a shell script in the right runlevel, but I would like to be able to use chkconfig to configure it.
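
    For reference, a bare-bones SysV wrapper that chkconfig can manage might look like the sketch below. The clamd binary path is an assumption, and the clamav package normally ships a proper init script, so check for that first:

        #!/bin/bash
        # /etc/init.d/clamd
        # chkconfig: 345 80 20
        # description: Clam AntiVirus daemon

        # Source the CentOS function library (daemon, killproc, status)
        . /etc/init.d/functions

        CLAMD=/usr/sbin/clamd   # adjust to where clamd actually lives

        case "$1" in
          start)
            echo -n "Starting clamd: "
            daemon $CLAMD
            echo
            ;;
          stop)
            echo -n "Stopping clamd: "
            killproc clamd
            echo
            ;;
          status)
            status clamd
            ;;
          restart)
            $0 stop
            $0 start
            ;;
          *)
            echo "Usage: $0 {start|stop|status|restart}"
            exit 1
            ;;
        esac

        # Register it afterwards: chkconfig --add clamd && chkconfig clamd on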

    Read the article

  • Server Vs Service / Physical Vs Virtual

    - by user559142
    When reading definitions for a server service (e.g. IIS) you will often find several cross-references to a virtual server, but none seem to definitively refer to the two as the same... Can somebody help me to understand the differences? I cannot get my head around them. Ideally I would like to know the differences between the following, or indeed whether any of them refer to the same thing:

        1) Logical Server
        2) Virtual Host
        3) Logical Partition
        4) Physical Server vs Virtual Server
        5) Server Service vs Virtual Host

    Read the article

  • Enterprise Service Bus, .NET Service Bus, NServiceBus and the wheels on the bus...

    - by Chris Marisic
    Enterprise Service Bus (ESB), .NET Service Bus, NServiceBus, RhinoServiceBus, MassTransit and so on. I'm trying to understand what each of these technologies has in common or not in common. I attended Juval Löwy's presentation on the .NET Service Bus earlier today and he stated that the .NET Service Bus could be used as a poor man's version of an ESB, so I would take that to mean that the .NET Service Bus is NOT an ESB. Are any of the others a true ESB? And if so, what would make them a true ESB as opposed to the .NET Service Bus?

    Read the article

  • Howto - Running Redmine on Mongrel as a service on Windows

    - by Achilles
    I use Redmine on Mongrel as a project manager, and I use a batch file (start-redmine.bat) to start Redmine in Mongrel. There are 2 issues with my setup:

        1. I have a running IIS on the server that occupies the HTTP port (80).
        2. start-redmine.bat must be periodically checked to see if it has stopped after a restart caused by the Windows Update service.

    For the first issue, I have no choice but to run Mongrel on a port like 3000; for the second issue I have to create a Windows service that runs automatically in the background when Windows starts - and here comes the trouble! There are at least 3 ways to run Redmine as a service that I'm aware of; none of them is satisfactory from a performance point of view. You may read about them at http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows I tried them all. The easiest way to set up such a service is the mongrel_service approach; in 3 lines of command you're done, but the performance is significantly lower than running that batch file... Now, I want to show you my approach:

    First, suppose we have Ruby installed into c:\ruby and we have issued the command gem install mongrel to get the mongrel gem installed into c:\ruby\bin. Also suppose we have installed Redmine into a folder like c:\redmine, and we have Ruby's path (i.e. c:\ruby\bin) in our PATH environment variable. Now:

        1. Download and install the Windows NT Resource Kit Tools from the Microsoft website.
        2. Open the command-line tool that comes with the Resource Kit (from the Start menu).
        3. Use instsrv to install a dummy service called Redmine using the following command:

            "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe"

           In my case (which is the default case) it was something like this:

            "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"

        4. Now create the batch file. Open Notepad, paste these instructions into it and save it as "c:\redmine\start-redmine.bat":

            @echo off
            cd c:\redmine\
            mongrel_rails start -a 0.0.0.0 -p 3000 -e production

        5. Now we need to configure the dummy service we created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure that service, open the Windows registry editor (Start - Run - regedit) and navigate to this node:

            HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine

        6. Right-click on the "Redmine" node and, using the context menu, create a new key called Parameters (New - Key).
        7. Right-click on "Parameters" and create a String Value called Application. Do this again and create another String Value called AppParameters.
        8. Double-click on "Application" and put cmd.exe into the "Value data" section. Then double-click on "AppParameters" and put /C "C:\redmine\start-redmine.bat" into the "Value data" section.

    We're done! Issue this command to run Redmine on Mongrel as a service:

        net start Redmine

    Read the article
