Search Results

Search found 5298 results on 212 pages for 'automated deploy'.

  • Executed PHP files are stale until "touched" (symlinked NFS mount as web root)

    - by mmattax
    We have a PHP application that has 3 web servers (running Nginx and Apache). Each web server's document root is a symlinked directory that points to an NFS mount. For example: web01 has an NFS mount at /data/webapp, which is symlinked to /home/webapp, and Apache serves content from /home/webapp/www. We also use APC as our PHP opcode cache. When we deploy code, we SCP an archive file to the NFS server and extract it. Since upgrading to RedHat 6, when we deploy our code the web servers execute "stale" PHP files until touch is run on them. We thought that APC might be the cause, but the issue persists even after clearing the opcode cache. Any ideas on how to diagnose why the stale PHP code is being executed?
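
    A quick way to narrow this down is to compare what the NFS client sees with what PHP serves. Below is a small diagnostic sketch (the paths are the ones from the question; everything else is an assumption) that checks file metadata through both the mount and the symlinked docroot, and prints two PHP settings that are often involved when symlink-based deploys go stale (APC's apc.stat and the realpath cache):

        #!/usr/bin/env python
        # Diagnostic sketch: run on one of the web servers right after a deploy.
        # Compares metadata for the same file reached via the NFS mount and via the
        # symlinked docroot, then dumps the PHP settings that commonly make freshly
        # deployed files look "stale" (apc.stat and the realpath cache).
        import os
        import subprocess

        CANDIDATE = "index.php"                   # any file the deploy just changed
        PATHS = [
            "/data/webapp/www/" + CANDIDATE,      # directly via the NFS mount
            "/home/webapp/www/" + CANDIDATE,      # via the symlinked docroot
        ]

        for path in PATHS:
            st = os.stat(path)
            print("%s -> %s  inode=%d  size=%d  mtime=%d"
                  % (path, os.path.realpath(path), st.st_ino, st.st_size, st.st_mtime))

        # If apc.stat is 0, or the realpath cache TTL is long, PHP keeps serving the
        # previously resolved/compiled file until something invalidates it.
        probe = ('echo "apc.stat=", ini_get("apc.stat"), "\\n",'
                 ' "realpath_cache_ttl=", ini_get("realpath_cache_ttl"), "\\n",'
                 ' "realpath_cache_size=", ini_get("realpath_cache_size"), "\\n";')
        print(subprocess.check_output(["php", "-r", probe]).decode())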

  • What configuration management solutions exist in a non-networked environment?

    - by Rob Spieldenner
    My servers exist in an environment without outside network connectivity (this is a requirement), so when I deploy updates, all packages, binaries, config files, etc. must be included on the delivered media. And of course I want some sort of configuration management so I can tell what has and hasn't been installed. So I was wondering if people have experience with Chef, Puppet, or another configuration-management tool for dealing with this type of environment. Worst case, I deploy my updates as an RPM. EDIT: My setup has both Linux servers and Windows servers.

  • Deploying an application on a Windows domain

    - by ALOToverflow
    I'm looking for different ways to deploy, execute and uninstall an application on all machines of a Windows domain. I've done some research on Group Policy Objects (GPOs) but I'm still looking for other ideas. As I said, I need to deploy the application and run it without the user having to click anything, while still letting him keep control of the machine. Once it's finished running I need to uninstall it and never run it again. Can such things be done with a GPO? Are there any other possibilities on a Windows domain? Thank you

  • How do I perform an action if the upstart respawn limit is hit?

    - by Daniel Huckstep
    I have an upstart job:

        description "foreman"
        start on runlevel [2345]
        stop on runlevel [06]
        respawn
        respawn limit 3 60
        chdir /home/deploy/app/current
        env RAILS_ENV=production
        exec sudo -u deploy bundle exec foreman start

    We ran into a case where a rogue character in an app file caused one of the background workers to fail while the app itself ran normally (weird). The app worked fine, but the workers were never running. I'd like upstart to do something (send an email) if it can't start this job, since it's not entirely obvious whether everything went all right. Is there something built into upstart to handle this, or do I have to get creative?
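
    One option, if nothing built in turns up, is a small watchdog run from cron that checks the job's state and emails when it is no longer running; a minimal sketch (the job name matches the question, the addresses are placeholders):

        #!/usr/bin/env python
        # Watchdog sketch: run from cron every few minutes. If the upstart job is no
        # longer running (for example because the respawn limit was hit), send an
        # email. The sender/recipient addresses are placeholders.
        import smtplib
        import subprocess
        from email.mime.text import MIMEText

        JOB = "foreman"
        MAIL_FROM = "upstart-watchdog@example.com"
        MAIL_TO = "ops@example.com"

        status = subprocess.check_output(["initctl", "status", JOB]).decode()

        if "start/running" not in status:
            msg = MIMEText("upstart reports:\n\n" + status)
            msg["Subject"] = "[watchdog] upstart job '%s' is not running" % JOB
            msg["From"] = MAIL_FROM
            msg["To"] = MAIL_TO
            smtp = smtplib.SMTP("localhost")
            smtp.sendmail(MAIL_FROM, [MAIL_TO], msg.as_string())
            smtp.quit()

    An alternative worth investigating is a second upstart job triggered by a "start on stopped foreman" stanza, if that event gives you enough information in your upstart version.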

  • How can I get JavaDoc into a JunitReport?

    - by benklaasen
    Hi - I'm a tester, with some Java and plenty of bash coding experience. My team is building an automated functional test harness using JUnit 4 and Ant. Testers write automated tests in Java and use JavaDoc to document these tests. We're using Ant's junitreport task to generate our test result reports. This works superbly for reporting. What we're missing, however, is a way to combine those JavaDoc free-text descriptions of what the test does with the junitreport results. My question is, what's involved in getting the JavaDoc into the junitreport output? I'd like to be able to inject the JavaDoc for a given test method into the report at the level of each method result. Regards, Ben
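
    The junitreport task has no notion of JavaDoc on its own, so one possible (and admittedly rough) approach is to post-process the TEST-*.xml files it consumes: scrape the doc comment for each test method out of the sources and attach it to the matching <testcase> element. A sketch, with the source and report directories as assumptions:

        #!/usr/bin/env python
        # Sketch: copy each test method's JavaDoc comment into the JUnit result XML
        # as a "doc" attribute on the matching <testcase>, before junitreport runs.
        # The directory names are assumptions; adjust them to the project layout.
        import glob
        import re
        import xml.etree.ElementTree as ET

        SRC_DIR = "test/java"                # where the *.java test sources live
        REPORT_DIR = "build/test-results"    # where <junit> wrote the TEST-*.xml files

        # "/** ... */" followed by optional annotations and "public void methodName("
        JAVADOC_RE = re.compile(
            r"/\*\*(.*?)\*/\s*(?:@\w+(?:\([^)]*\))?\s*)*public\s+void\s+(\w+)\s*\(",
            re.S)

        docs = {}
        for src in glob.glob(SRC_DIR + "/**/*.java", recursive=True):
            with open(src) as f:
                text = f.read()
            for comment, method in JAVADOC_RE.findall(text):
                docs[method] = " ".join(
                    line.strip().lstrip("*").strip()
                    for line in comment.splitlines()).strip()

        for report in glob.glob(REPORT_DIR + "/TEST-*.xml"):
            tree = ET.parse(report)
            for case in tree.getroot().iter("testcase"):
                if case.get("name") in docs:
                    case.set("doc", docs[case.get("name")])
            tree.write(report, encoding="utf-8", xml_declaration=True)

    The stock report stylesheets won't display a new attribute by themselves, so the other half of the work is a small tweak to the XSLT that junitreport applies, adding a column for it in the testcase table.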

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy some file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC; however, it does not work well in an environment where servers can auto-deploy and shut down based on load, because it requires server-managed keys to be generated. Because of that, including the agent in the AMI does not give us monitoring as soon as an instance comes up. There are many options out there, and several are listed in other posts on this site, but none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover those portions of PCI DSS that require this software? Has anyone else achieved this on AWS?
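
    For the OSSEC half specifically, the key problem is often worked around with OSSEC's own enrollment tools (ossec-authd listening on the manager, agent-auth on the instance), so the agent can be baked into the AMI and register itself at boot. A minimal boot-time sketch, assuming the stock /var/ossec paths and a manager address you would fill in:

        #!/usr/bin/env python
        # Boot-time sketch for an auto-scaled instance with the OSSEC agent baked
        # into the AMI: request a key from the manager (which must be running
        # ossec-authd), then (re)start the agent. The manager address is a placeholder.
        import os
        import subprocess

        MANAGER = "ossec-manager.internal.example.com"   # placeholder
        CLIENT_KEYS = "/var/ossec/etc/client.keys"

        def run(cmd):
            print("+ " + " ".join(cmd))
            subprocess.check_call(cmd)

        # Only enroll once; re-registering on every boot would pile up agent IDs.
        if not os.path.exists(CLIENT_KEYS) or os.path.getsize(CLIENT_KEYS) == 0:
            run(["/var/ossec/bin/agent-auth", "-m", MANAGER])

        run(["/var/ossec/bin/ossec-control", "restart"])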

  • How to add a shutdown script (not by using gpedit.msc or active directory)?

    - by Francis
    I have created a script I want to deploy to my XP workstations as a shutdown script. I know I can add my script as a shutdown script with the UI (gpedit.msc), but I want to automate the deployment of my script. My workstations are not part of a Windows domain. I will deploy with OCS Inventory. I tried to add entries to the Windows registry, but this doesn't work: I don't see what I added when I run gpedit.msc, and if I add something with gpedit.msc, it seems to overwrite what I added manually in the registry.

  • variables in batch scripts [closed]

    - by richzilla
    I'm trying to set up a batch file to automatically deploy a PHP app to a web server. Basically, what I want is an entirely automated process: I would just give it a revision number from the repository, and it would then export the files, upload them via FTP, and update the deployment info at the repo host (Codebase). However, I'm starting from scratch here. How would I set up a batch file to accept a variable when it is run? For example, the command myfile.bat /revision 42 should deploy revision 42 to my server. If anyone can point me in the right direction I'd appreciate it.
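
    In a batch file the arguments simply arrive as %1, %2, and so on, so myfile.bat /revision 42 gives you /revision in %1 and 42 in %2. If batch starts to feel limiting, the same flow is easy to sketch in a small script; the repository URL, FTP details and the Codebase step below are all placeholders:

        #!/usr/bin/env python
        # Deploy sketch: export a given revision from Subversion and upload it over
        # FTP. Usage: deploy.py --revision 42
        # The repository URL and FTP credentials are placeholders.
        import argparse
        import ftplib
        import os
        import subprocess
        import tempfile

        REPO_URL = "https://svn.example.com/myapp/trunk"              # placeholder
        FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "deploy", "secret"

        parser = argparse.ArgumentParser()
        parser.add_argument("--revision", required=True)
        args = parser.parse_args()

        export_dir = tempfile.mkdtemp(prefix="deploy-r%s-" % args.revision)
        subprocess.check_call(["svn", "export", "--force", "-r", args.revision,
                               REPO_URL, export_dir])

        ftp = ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS)
        for root, _dirs, files in os.walk(export_dir):
            remote_dir = os.path.relpath(root, export_dir).replace(os.sep, "/")
            if remote_dir != ".":
                try:
                    ftp.mkd(remote_dir)
                except ftplib.error_perm:
                    pass                      # directory probably exists already
            for name in files:
                remote = name if remote_dir == "." else remote_dir + "/" + name
                with open(os.path.join(root, name), "rb") as f:
                    ftp.storbinary("STOR " + remote, f)
        ftp.quit()

        # Updating the deployment info at Codebase would be one more HTTP call to
        # their API here; omitted because it depends on the account setup.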

  • How does IBM implement the WebSphere Application Server SDK for the Sun Solaris OS?

    - by Eng Al-Rawabdeh
    I deploy the same application in IBM WAS on different operating systems (Windows, AIX and Sun Solaris), and SDK errors appear only on Solaris. Some sites I've read say that the SDK on Solaris was built on top of the Sun SDK; is that right? I need to know whether IBM builds the Solaris SDK from scratch or bases it on the Sun SDK. More details: I installed the same IBM WAS application server on two servers, Server1 running AIX and Server2 running Solaris. The two servers are on the same network and have the same configuration. I then deployed a Java application (X) on both servers. Application X ran on Server1 (AIX) without any problem, but when I run it on Server2 (Solaris) I hit the SDK issue. So what is the difference between the AIX WAS SDK and the Solaris WAS SDK? Note: I tried Windows and it ran without any problem.

  • SharePoint 2010 deployment problem after added a new server to existing farm

    - by mrt
    I have a SharePoint 2010 farm with one server. I'm developing some features in a SharePoint farm solution (not a sandboxed solution, because of user-rights problems). All feature scopes are set to "Site". I can deploy the solution to SharePoint with no problem. I then added a new web front-end server to my existing farm, and now when I try to deploy my solution, VS2010 shows this error:

        Error occurred in deployment step 'Activate Features': Feature with Id 'xxx' is not installed in this farm, and cannot be added to this scope

    I log in to the development server with the AD administrator account. The administrator account is in the site collection admins on the target web application. The farm account is in the local Administrators group. Is there a solution for this error?

  • Deploying AV via GPO only to workstations

    - by jeremy
    We have a small (100 machines) Windows domain running Server 2008 R2. We use Symantec Endpoint Protection 12.1. I want to have a GPO deploy the AV software to client machines automatically, but only to client workstations, not to servers, which run different software. I've set it up before using a GPO linked to the domain mycompany.local and it works, but it deploys the AV software to ALL machines on the domain, including my servers. I can create an OU in Active Directory for servers, and perhaps create one for client machines too, but I'd rather not have to move new domain members from the default Computers container into a different folder. How can I use a GPO to deploy this AV software only to workstations on our network, and not to servers?

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or using a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol and type in my NT domain credentials (I'm an admin on that server). I put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error (screenshot not included here). Am I doing something wrong, or is this just not possible to do?

  • Active Directory - Using GPO To Update Multiple Versions Of .NET

    - by Joe Wilson
    OK, I have searched everywhere for this one. I have all the MSIs and packages I need to deploy .NET 3.5 SP1, plus 2.0 and 3.0 (which are prerequisites for 3.5). I can't figure out how to install all of them at once via GPO. Basically, the computers on the network do NOT have any version of .NET installed, and I need them to be at 3.5 SP1. I know I can deploy each version via GPO, force-reboot the client, then push the next one, force-reboot, and so on. Is there a way to streamline this and install all three at once via GPO? Thanks

  • Ant database rebuild script, avoiding interactive prompting

    - by fras85
    Hi guys. I'm writing an Ant script to rebuild our database, i.e. dropping everything and rebuilding from scratch. The problem is that our DBA adds a Y/N prompt before executing the rest of the script, and therefore we can't call it from an automated build process. Does anyone have any suggestions to circumvent the Y/N prompt? Obviously we could create separate scripts, one for the DBAs and one for the automated build, but that requires maintaining both. We're running on Windows, so it's not as easy as using sed to strip out the prompt... but I'm thinking something along those lines. Not sure if that's clear enough, but I hope you can help. Cheers.
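
    If the DBA's script reads the confirmation from standard input, the automated build can simply feed the answer in rather than maintaining a second copy of the script. A minimal wrapper sketch (the script name is made up); Ant's <exec> task can also redirect stdin via its input/inputstring attributes, which achieves the same thing without a wrapper:

        #!/usr/bin/env python
        # Wrapper sketch: run the DBA's rebuild script and answer its Y/N prompt
        # automatically. The script name is a placeholder.
        import subprocess
        import sys

        result = subprocess.run(
            ["cmd", "/c", "rebuild_database.bat"],   # placeholder for the DBA's script
            input="Y\n",                             # the answer the prompt expects
            text=True)
        sys.exit(result.returncode)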

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a domain, in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a client who has no need or desire for any Active Directory or domain features. This has become steadily more difficult as Microsoft restricts its technologies further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services is more difficult when not on a domain, but still possible. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: the Add/Remove Roles and Features wizard has a special RDS deployment mode with a rule that says if you aren't on a domain you can't deploy, and it tells you to create or join a domain first. This of course conflicts directly with the fact that an Active Directory domain controller should not be the same machine as a terminal server. So Microsoft's technology is not so much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. If you skip that wizard and just check the boxes in the main Roles/Features wizard, you can deploy the features, but the UI to configure them is not there, and when you go back to the RDS configuration page in the roles wizard, you get a message saying you cannot administer your Remote Desktop Services system while logged in as a local-computer administrator: although you have all the admin privileges you could have in a workgroup-based system, the RDS configuration UI will not accept those credentials and let you continue. My question, in brief, is whether I can still somehow obtain the following end result:

      - I need to allow 10-20 users per system to have an RDS (TS) session.
      - I do not need any of the fancy-pants RDS options, unless Microsoft somehow depends on those features being present.
      - I believe I need the "RDS Session Host", as this is the guts of "Terminal Server"; Microsoft describes it as the full Windows desktop for the Remote Desktop Services client.
      - I need to configure licensing so that the grace period does not expire, leaving my RDS non-functional, so this probably means I need a way to configure TS CALs.

    If all of the above could technically be done with judicious use of PowerShell, I am prepared to consider developing all the PowerShell scripts I would need. I'm not asking someone to write that for me. What I'm asking is: does anyone know of a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a one-word yes-or-no answer isn't that useful to anyone, so the question is really: yes or no, and why? And if the answer is yes, then how?

  • What applications is NTFS preferable for? [closed]

    - by javano
    When building a new server I prefer to deploy Linux as my OS of choice. This gives me the luxury of being able to choose from various file systems (amongst other aspects), and I will choose a different FS for different servers, depending on what they will be used for. With Windows OS variants you can only use NTFS. Have any benchmarks or tests been performed that show NTFS to be a preferable choice for a given scenario or application (apart from just "running Windows" because it has to be on NTFS)? To clarify what I mean: I might use filesystem X for large transactional storage volumes, but filesystem Y for front-end web app servers. If I had a multi-platform application to deploy that (let's pretend) was available on Mac/Win/Lin, is there any type of application or scenario that would benefit from being on NTFS?

  • Can't access my files in ASP.NET web site

    - by jumbojs
    I'm having a very difficult time. I am running Windows Server 2008, and I have an AbleCommerce site using ASP.NET with C#. I'm writing an automated task that FTPs some XML files down into a local directory on our web server; the program then parses the XML files and saves information to our database. The problem: once I save the files to our local directory, my program has no access to them. The NETWORK SERVICE user's permissions aren't being inherited by the XML files, so my program can't do anything with them. I can manually change the permissions, but that wouldn't be automated and won't work. How can I get this to work? Help please, it's very frustrating.
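
    Since the task is already automated, one way out is to have it re-apply the ACL right after the download, for example by shelling out to icacls. A rough sketch of that step (the directory is a placeholder, and granting read access to NETWORK SERVICE assumes that really is the identity the site runs under):

        # Sketch: after the FTP download, explicitly grant the site's identity read
        # access to each file so the parsing step can open them. The directory is a
        # placeholder; NETWORK SERVICE is assumed to be the identity the site runs as.
        import glob
        import subprocess

        DOWNLOAD_DIR = r"C:\inetpub\feeds"           # placeholder

        for xml_file in glob.glob(DOWNLOAD_DIR + r"\*.xml"):
            subprocess.check_call([
                "icacls", xml_file,
                "/grant", "NETWORK SERVICE:(R)"])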

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to this Server Fault blog post, A Few Speed Improvements, which talks about how static content for Stack Exchange is served from a separate cookieless domain: how would someone go about doing this on IIS 7.5 for an ASP.NET MVC site? The plan so far:

      - Register a domain, e.g. static.com, and create a new website in IIS
      - Manually copy the js/css/images folders from MVC as-is so that they have the same paths on the new server
      - Enable IIS gzip settings (js/css = high compression, images = none)
      - Set caching with far-future expiry dates: <clientCache cacheControlCustom="public" /> in the web.config
      - Never set any cookies on the static.com site
      - Combine and minify js/css
      - Auto-deploy changes in static content with Web Deploy

    Is this plan correct? And how can you use Web Deploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache (Creating a cookie-free domain to serve static content), so it doesn't apply here.

  • Can't change a read-only folder in Windows 7

    - by James Drinkard
    I'm trying to run a Spring MVC 2.5 tutorial, and when I run the Ant script for a deploy, I get this error:

        deploy:
           [copy] Copying 2 files to C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp

        BUILD FAILED
        C:\projects\workspace\springapp\build.xml:46: Failed to copy C:\projects\workspace\springapp\war\WEB-INF\web.xml to C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml due to failed to create the parent directory for C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml

    After reviewing the springapp directory, I saw its properties were set to read-only. No problem, I thought, as I'm logged in as administrator. However, changing the UAC settings, going to a command prompt as admin and trying to change the folder's attributes with attrib, making myself the owner of the folder, changing the security settings, etc. did nothing. I can't seem to change this folder at all. So my question is, how do I change the settings on that folder so Ant can make changes to it?

  • Using Selenium-IDE with a rich JavaScript application?

    - by Darien
    Problem: At my workplace, we're trying to find the best way to create automated tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between:

      - Application code in reusable and nest-able GUI components
      - Tests which are easily created by the testing team
      - Tests which can be recorded once and then automated
      - Tests which do not break after small cosmetic changes to the site

    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?

    Limitations:

      - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
      - The test-making folks are mostly trained to use Selenium IDE to directly record things.
      - The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go.
      - We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
      - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
      - It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use.
      - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.

    Current thoughts: I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.

  • Will Windows Update modify anything in Visual Studio?

    - by Martin
    (Note: Yes, the technical side of this question seems to be rather SuperUser territory, but the implications are more relevant for StackOverflow readers.) As the title says, we are wondering whether (fully) enabling automated Windows Updates on our developer machines will have implications for MS Visual Studio. That is, will fixes to any components (be they libraries, the UI/IDE, the compiler, ...) ever be delivered through Windows Update? We want 100% exact and reproducible development environments (w.r.t. C++) on all developer machines, and so we are concerned that automated Windows updates may introduce uncontrolled changes into our development chain.

  • Crontab stopped unexpectedly

    - by naka
    I have the following entries in the crontab:

        0 0 * * * /mnt/voylla-production/releases/20131111011431/script/rubber cron --task util:rotate_logs --directory=/mnt/voylla-production/releases/20131111011431/log
        0 4 * * * /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_daily.sh
        0 2 * * 6 /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_saturday.sh

    It worked fine until today. It didn't run as scheduled after a Capistrano deploy, and I didn't get a mail either. It worked fine earlier, and I am unable to understand what went wrong. The only change that was made was the deploy, but I think it should not affect the cron. I tried using pgrep cron to see if cron is running; it gives 904 as output. Could someone please help? Thanks
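
    Since the only change was the deploy, one thing worth ruling out is that the hard-coded release path (.../releases/20131111011431/...) no longer exists: Capistrano deployments typically keep only a limited number of old releases, so scripts living inside a specific release directory can silently disappear. A small check sketch (the syslog location is an assumption; it differs between distributions):

        #!/usr/bin/env python
        # Diagnostic sketch: verify that every command referenced in the crontab
        # still exists on disk, then show recent cron activity from syslog.
        # The log path is an assumption (it is /var/log/cron on RHEL-style systems).
        import os
        import subprocess

        crontab = subprocess.check_output(["crontab", "-l"]).decode()

        for line in crontab.splitlines():
            fields = line.split()
            if len(fields) > 5 and not line.startswith("#"):
                command = fields[5]
                state = "ok" if os.path.exists(command) else "MISSING"
                print("%-8s %s" % (state, command))

        print(subprocess.check_output(["grep", "CRON", "/var/log/syslog"]).decode())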

  • VMWare Lab Manager: What's the best way to build Library Configurations?

    - by mcohen75
    We're using Lab Manager within our QA group. We use it to quickly deliver environments we need for testing. We have 25 templates, 14 library configurations, and counting. To build up our templates we:

      - Create a base template that is a bare-bones version of Server 2008 plus basic configuration (Windows Update, firewall exceptions)
      - Create a linked clone for each server template we need (SQL Server 08, 05, etc.)
      - Repeat for other OSes, like Windows 7 and Windows XP

    Then we create configurations:

      - Create a workspace configuration with multiple images in it (say Server 08 with SQL Server, and Windows 7)
      - Deploy the configuration and make some minor configuration changes
      - Undeploy and capture to the library

    How do we keep this manageable? When I need to update a configuration, should I:

      - Rebuild it from templates
      - Clone it to a workspace, make changes, and recapture it to the library
      - Keep the configuration in my workspace (don't delete it after capturing it to the library), deploy it to make changes, and then re-capture it to the library

  • Install new version of running app using MSI

    - by Uwe
    We run an x-copy-deployed .NET application on our Windows 2008 R2 terminal server. If I want to deploy a new version, the file handle is locked by all users running the application. I wonder whether, if I deployed using an MSI, I could run the installation even when the .exe file is open and locked by users. My goal is that each user gets the new version the next time he/she opens the app. I don't want all users to have to close the application just for the deployment.
