Search Results

Search found 4708 results on 189 pages for 'hot deploy'.

Page 107/189 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • Remove Applications page on MDT 2012 Deployment Summary

    - by Ben M.
    I am using SCCM 2012 and MDT 2012 for OSD and app deployment. Because of the way some of the apps have to be detected (which changes as the apps get updated), they show up with a warning sign on the Applications tab of the Deployment Summary. There is pretty much no way to get them all green-checked (not realistically in our setup, anyway), so I would like to get rid of that page so that I don't have to hear about it from my teams as they deploy. Any help would be appreciated. MDT Summary screen: Applications Installed (the tab I want to remove)

    Read the article

  • Capistrano deploying to different servers with different authentication methods

    - by marimaf
    I need to deploy to 2 different servers, and these 2 servers use different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) instance). I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this: ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")] and ssh_options[:forward_agent] = true. I have browsed Stack Overflow and found no post about how to deal with different authentication methods (I looked at this and this). I found a post that talks about 2 different keys, but that one refers to a server and a git remote, both using different pem files, which is not my case. I also got to this tutorial, but couldn't find what I need. I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository. Any help is welcome. Thanks a lot
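
    A minimal sketch of one way to handle this with Capistrano 2's multistage extension, assuming two hypothetical stage files (config/deploy/university.rb and config/deploy/aws.rb) so that the .pem-based ssh_options apply only to the AWS stage; host names, users and repository URL below are examples:

        # config/deploy.rb -- shared settings
        require "capistrano/ext/multistage"
        set :stages, %w[university aws]
        set :default_stage, "university"
        set :application, "myapp"
        set :repository, "svn://scm.example.edu/myapp/trunk"

        # config/deploy/university.rb -- agent/password based auth, no extra ssh options
        server "apps.university.example.edu", :app, :web, :db, :primary => true
        set :user, "deployer"

        # config/deploy/aws.rb -- key based auth using the .pem file
        server "ec2-xx-xx-xx-xx.compute-1.amazonaws.com", :app, :web, :db, :primary => true
        set :user, "ubuntu"
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
        ssh_options[:forward_agent] = true

    Deploys are then run per stage, e.g. cap university deploy or cap aws deploy, each with its own authentication settings.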

    Read the article

  • Acer Aspire One (mini) mfgd 9/03 locks up also when plugged in

    - by LAURIE ANN
    My problem is almost the same as the Toshiba user's (Toshiba A205-5804 freezes when plugged in): the screen freezes any time I plug the PC into the external power supply, and unlike most computers with this issue, my computer DOES freeze in safe mode too, and I really can't bear this problem much longer... It's not an overheating problem; the computer is not getting hot or anything related. I've already tried changing the AC adapter, booting only with AC and no battery, and also all of the suggestions there. The only difference in my case is: I can be using the battery, and when it runs down I can just close the lid and the system goes into hibernation mode. I then plug it in and let it charge. When I think it's finally charged, I can UNPLUG it, open the lid, and everything runs fine on the battery again. Note: the system was NOT shut down, and it still runs as long as I remove the power plug before opening the lid. I have ALL the same issues as the other Toshiba user, too. I was a tech for 9 years in my own business and this one has stumped not only me, but everyone I have asked. Every repair center wants to charge me for diagnosis, even if they cannot fix it. I would really like to run this system alongside my new Acer Aspire 17" laptop, as I need it to finish my grad school work. Any ideas would be GREATLY appreciated. Thanks, Laurie Ann

    Read the article

  • BES and BES Express co-existance

    - by ITGuy24
    Question: Can I deploy BES and BES Express together with Exchange 2010? Any tips for doing this, if it is possible? Background: We have a lot of users with personal BlackBerrys and we would like to allow them to start using their devices for receiving corporate email. We don't need full BES features for these users, just the ability to enforce passwords and remotely wipe, and these users have never wanted to upgrade their data plans to the more expensive BES plans, so BES Express is a great fit. We already have an existing BES for corporate-owned BlackBerrys. We want to keep this, as it allows us to enforce several policies that are not available on BES Express.

    Read the article

  • Performance issue when configuring non HA VM in cluster

    - by laiys
    Hi, I saw this article: http://technet.microsoft.com/en-us/library/cc764243.aspx. Quote taken from the link: "Important: It is recommended that you not deploy virtual machines that are not highly available on your host clusters. Although you can do this by using Hyper-V (VMM does not allow it), the non-highly available virtual machines will consume resources that otherwise would be available to the HAVMs." What kind of resources (CPU, memory, NIC, etc.) will a non-HA VM consume? I am curious because not every VM in production needs to be in a failover cluster or use live migration. If I put a VM onto a CSV but do not make it highly available, what impact does that have, given that I allocate the same vCPU, vNIC and memory to the VM (other than losing the failover feature)? Curious to understand more about this. Please advise. Thanks

    Read the article

  • An efficient setup of several VPSs on one box?

    - by Abs
    Hello all, I hope it's OK to ask this question on Server Fault; it's not an actual fault but more of a request for implementation advice. I would like to have a dedicated server that I can deploy my own VPSs on. These VPSs will run various Windows, Mac and Linux operating systems. I was thinking of buying a large Linux-based dedicated server and then running VMware Server or VirtualBox and adding my own images for each OS, but I suspect this isn't going to be cost-effective or easy to maintain. I am hoping someone can help me with a setup that is both cost-effective and efficient, so that I can have 6 VPSs at my disposal that I can easily control. Thanks all for any help.

    Read the article

  • Is it possible to skip .rvmrc confirmation?

    - by Viacheslav Molokov
    We are using RVM for managing Ruby installations and environments. Usually we use this .rvmrc script:

        #!/bin/bash
        if [ ! -e '.version' ]; then
          VERSION=`pwd | sed 's/[a-z/-]//g'`
          echo $VERSION > .version
          rvm gemset create $VERSION
        fi
        VERSION=`cat .version`
        rvm use 1.9.2@$VERSION

    This script forces RVM to create a new gem environment for each of our projects/versions. But each time we deploy a new version, RVM asks us to confirm the new .rvmrc file. When we cd into the directory for the first time, we get something like:

        NOTICE: RVM has encountered a not yet trusted .rvmrc file in the
        current working directory which may contain nasty code.

        Examine the contents of this file to be sure the contents are good
        before trusting it!

        Press 'q' to exit the reader when finished reading the file
        (press enter to continue when ready)

    This is not so bad for development environments, but with automated deploys it requires manually confirming each new version on each server. Is it possible to skip this confirmation?
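
    One way to skip the prompt (a small sketch; the project path below is just an example, and behaviour may differ slightly between RVM versions) is to tell RVM to trust .rvmrc files, either globally or per project as part of the deploy:

        # Trust all .rvmrc files on this machine: add this line to ~/.rvmrc (or /etc/rvmrc)
        rvm_trust_rvmrcs_flag=1

        # Or trust a single project directory non-interactively, e.g. from a deploy hook
        rvm rvmrc trust /var/www/myapp/current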

    Read the article

  • Creating Windows 8.1 system image error

    - by Random
    I'm experiencing "not enough space" error when trying to create system image to a USB hard drive: Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. Blah, blah... There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backup up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup. Minimum requirement: For volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space. Recommended: At least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte. ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. I'd tried both - PowerShell wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet and via Control Panel - File History button Clearly both EFI and Windows Recovery Environment partitions don't meet requirements coming from System Image tool (pic below) On top of that all system partitions are now shown as 100% free in Disk Management, it's disturbing but far from the actual state. My question is - hot to create System Image in Windows 8.1?

    Read the article

  • Vagrant with VMware ESXi Provider

    - by Adam
    I am attempting to use Vagrant with the vagrant-vsphere plugin to deploy machines to my VMware ESXi server. Has anyone had any luck getting this to work? I realize that vagrant-vsphere is still at 0.0.1 and there are bound to be bugs. Specifically, Vagrant and vagrant-vsphere appear to fail during the vSphere connection, even though SSH and CLI access are enabled and the vSphere PowerShell tools are able to connect without an issue:

        INFO warden: Calling action: #
        ERROR warden: Error occurred: VagrantPlugins::VSphere::Errors::VSphereError

    The hostd log file on the ESXi server shows Vagrant performing a SearchIndex query.

    Read the article

  • Oracle EE 11.2g: how to generate fresh new redo logs

    - by Aikanaro
    Hi, in the company I work for we are heavy users of VMware machines. Almost all our projects are developed inside a virtual environment up to the point where we have to deploy them onto a production system. While in development, some colleagues of mine deleted the Oracle redo log files in the hope of gaining some free space. Now they are unable to start the database instance. Is there a way of generating fresh new redo logs so that the instance can be started? This is urgent, and even though I'm currently googling for an answer I have yet to find one. Thanks in advance.
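
    One recovery path sometimes suggested for this situation (a sketch only; it assumes the lost groups are not the current group with unarchived changes, the group numbers are examples, and it should be tried on a copy of the database first) is to mount the instance and clear each missing log group, which makes Oracle recreate the underlying redo log files:

        -- in SQL*Plus, connected AS SYSDBA
        STARTUP MOUNT;
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
        ALTER DATABASE OPEN;

    If the current redo log group was lost with unarchived changes in it, clearing is not enough and an incomplete recovery followed by OPEN RESETLOGS is typically needed.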

    Read the article

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950 with dual quad-core Xeons, 8 GB of RAM, and two 2 TB Seagate SATA drives in what is supposed to be RAID 1 on a Perc 5i RAID card. They are hot-swappable through a backplane. I can build the RAID fine, but after a little while an install of Server 2008 R2 will blue screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config" and the OS will boot up fine, until it blue screens again after a little while. The issue is OS-independent. I have tried swapping the RAID card, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, yet it sees the disks fine when it boots back up. The RAID card uses a SAS cable to connect to the backplane, so I guess the next step is to replace that, but then I might as well replace the backplane with a SAS-to-SATA breakout cable, but then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with Perc RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!

    Read the article

  • What's the best way to share a folder between guest and host machine in VMware over VPN?

    - by melaos
    I have a Windows 7 host machine and I'm running VMware with a Windows Server guest, which is where I do my Windows development work. The source code lives on my Windows 7 machine, which the guest accesses through a shared folder. My only problem is that when the VM connects to the VPN to deploy the code, the shared folder gets disconnected. As I don't really understand the networking or the VMware architecture: what can I do so that I can share the folder from my Windows 7 host to my VM without it getting disconnected when I connect to the VPN from the guest (Windows Server) machine? Please advise. Stuck on VMware, thanks.

    Read the article

  • SSH keys fail for one user

    - by Eli
    I just set up a new Debian server. I disabled root SSH and password authentication, so you have to use a key file. For my primary user, everything works exactly as expected: I used ssh-keygen -t dsa and got myself a public and private key, put one in authorized_keys, and put the other in a pem file locally. I then wanted to create a user that I can deploy things with, so I did basically the same process: I created it with adduser, made a .ssh folder, ran ssh-keygen -t dsa (I also tried RSA), and put the keys in their appropriate locations. No luck. I'm getting a Permission denied (publickey) error. When I use the exact same keys as the account that works, same error. When I enable password authentication, I can log in via SSH with the password. How do I debug this?
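
    A sketch of the usual debugging pass for this symptom (the user name deploy and the paths below are just examples): sshd silently ignores keys when the home directory, ~/.ssh or authorized_keys have permissions that are too open or the wrong owner, and the server-side auth log names the exact complaint.

        # On the server: tighten ownership and permissions for the new user
        chown -R deploy:deploy /home/deploy/.ssh
        chmod 755 /home/deploy                      # home dir must not be group/world writable
        chmod 700 /home/deploy/.ssh
        chmod 600 /home/deploy/.ssh/authorized_keys

        # Watch why sshd rejects the key while attempting a login
        tail -f /var/log/auth.log

        # From the client: verbose output shows which keys are offered and why they fail
        ssh -vvv -i deploy_key.pem deploy@server.example.com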

    Read the article

  • Can't assign software to non-admin account

    - by labyrinth
    I'm trying to deploy software on our domain using group policy, but I am only able to do so if the user is a member of a group with administrative privileges. We do not want to allow users to install programs generally, but do want to be able to assign/publish. The test program I'm using is originally a .msi file, and it installs fine for users in the administrators group. How can we assign/publish to normal users without opening up the ability to install whatever? Also, from what I've read, I believe I have correct permissions on the folder/share where the .msi files are stored. This is on Win2008R2 with Win7Pro clients.

    Read the article

  • Booting off windows image through network

    - by Mr. Sir King Osman
    I have an HP st5742, which is a tower that does not have a hard drive, and I am trying to boot it off the network, preferably from an image. It was designed to work with HP Image Manager, but that program has been discontinued by HP and I cannot seem to find a way to get a copy. If it helps, I am running my network on Windows Server 2008 R2, and I would like the streaming client to be running Windows. I have spent days searching for a way to deploy this machine, but I cannot seem to find a straightforward program, guide, or way to do this. I am new to this sort of thing but willing to read into the subject; all I need is a point in the right direction. Any help would be greatly appreciated.

    Read the article

  • Deploying new code live

    - by nicoX
    What's the best practice for deploying new code to a live (e-commerce) site? For now, I stop Apache for roughly 10 seconds while renaming the directory public_html_new to public_html and the old one to public_html_old, which creates a short downtime before I start Apache again. The same question applies if using Git to pull the new repo into the live directory: can I pull the repo while the site is active? And what about when I need to copy a DB as well? While tar-ing the live site (for backup purposes) I noticed that changes occurred in the media directory, which suggests that files keep changing periodically, and I wonder whether those changes can interfere if Apache is not stopped during deployment.
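
    A common zero-downtime pattern (a sketch with hypothetical paths, assuming Apache's DocumentRoot can be pointed at a symlink) is to build each release in its own directory and cut over with an atomic symlink swap instead of renaming directories while Apache is stopped:

        # Build or sync the new release alongside the old one (paths are examples)
        rsync -a --delete /tmp/build/ /var/www/releases/20140601/

        # Point a temporary symlink at the new release, then atomically rename it
        # over the "current" symlink that Apache serves as DocumentRoot
        ln -s /var/www/releases/20140601 /var/www/current_tmp
        mv -T /var/www/current_tmp /var/www/current

        # The previous release stays on disk, so rollback is another symlink swap

    Uploaded files (such as the media directory) and the database are usually kept outside the release directories and shared between releases, so they are not touched by the swap.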

    Read the article

  • Upgrade PHP to 5.3 in Ubuntu Server 8.04 with Plesk 9.5

    - by alcuadrado
    I have a dedicated server with Ubuntu 8.04 and really need to upgrade PHP to version 5.3 in order to deploy a new version of our system. This version of PHP is the default in Ubuntu 10.04, so I considered upgrading the OS, but after trying that I lost my Plesk installation, which annoyed my client. I then tried adding the dotdeb.org repositories, but for some reason, after running apt-get upgrade, I get this:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          libapache2-mod-php5 php5 php5-cgi php5-cli php5-common php5-curl php5-gd
          php5-imap php5-mysql php5-sqlite php5-xsl
        0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

    Any idea why this is happening? Or do you know of any alternative method (other than compiling my own binaries) to upgrade PHP, or to upgrade Ubuntu without losing Plesk? Thanks!
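
    Packages are "kept back" when the upgrade would need to install or remove additional dependencies, which a plain apt-get upgrade refuses to do. A sketch of what is typically tried next (after a backup, and after checking that the dotdeb PHP 5.3 packages are actually supported by your Plesk version):

        # Simulate first to see which new dependencies would be pulled in
        apt-get -s dist-upgrade

        # Either allow those dependency changes across the board...
        apt-get dist-upgrade

        # ...or upgrade only the PHP packages, letting apt resolve their new dependencies
        apt-get install php5 php5-cli php5-common libapache2-mod-php5 php5-mysql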

    Read the article

  • Private cloud solution [Eucalyptus,OpenStack, Nimbus] for Java deployments [Glassfish, Tomcat]

    - by Tadas D.
    I am interested in a way to have a private cloud that would host a GlassFish (or Tomcat) server. Which of Eucalyptus, OpenStack or Nimbus would be best for deploying Java applications? Or maybe there is something else entirely and I am looking at the problem the wrong way? The way I imagine this is that I should have some shared storage that I can expand by introducing new nodes to the cluster, and easy management of GlassFish instances: something like virtual machine images, shared among the nodes, that I can start and stop on demand. I don't need a concrete step-by-step solution here, but guidelines on how this should be done are very welcome.

    Read the article

  • Cooling Server Closet - No A/C Is Possible

    - by JamesCo
    We're moving into a new office in an old building in London (that's England :) and are walling off a 2m x 1.3m area, where the router and telephone equipment currently terminate, to use as a server closet. The closet will contain: 2 x 24-port switches, 1 router, 1 VDSL modem, 1 Dell desktop, 1 4-bay NAS, 1 HP MicroServer, 1 UPS and miscellaneous minor telephony boxes. There is no central A/C in the office and there never will be. We can install ducting to the outside quite easily; it's only a couple of metres to the windows, which face a courtyard. My question is whether installing an extractor fan with ducting to the window should be sufficient for cooling, or whether an intake fan and intake duct (from the window, too) would also be required. We don't want to leave a gap in the closet door, as that would let noise out into the office. If we don't have to put a portable A/C unit into the closet, that would be perfect. The office has about 12 people; London is temperate, the average maximum in August is 31 Celsius and 25 Celsius is more typical. The same equipment runs fine in our current office (same building as the new office, also no A/C), but it isn't in an enclosed space. I can see us putting, say, one Dell 2950 tower server into the closet, but no more than that. So sustained power consumption in the closet would currently be about 800 W (I'm guessing), possibly 2 kW in the future. The closet will have a ceiling, no windows, and good insulation. We don't care if the equipment runs hot, so long as it runs and we don't hear it.
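
    A rough sizing check, using the common rule of thumb of about 3.16 CFM per watt per degree Fahrenheit of allowed temperature rise (estimates only; and an exhaust fan can only move this much air if there is an equivalent intake path, whether a duct or a grille):

        800 W load, 10 C (18 F) rise above the office:  3.16 x 800  / 18 ~ 140 CFM  (~240 m3/h)
        2 kW load,  10 C (18 F) rise above the office:  3.16 x 2000 / 18 ~ 350 CFM  (~600 m3/h)

    So a ducted extractor rated in the few-hundred-m3/h range, with a matching intake path, is the right order of magnitude for the current load; the 2 kW case is where a dedicated intake duct (or a small A/C unit) starts to matter.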

    Read the article

  • What are the benefits of running chef-server instead of chef-solo?

    - by strife25
    I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up and running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like so: our web app code, dependencies, and Chef cookbooks are stored in SCM; a build is executed and creates a single package for images to acquire and test against; the build engine then deploys new cloud images that run a Chef client to get the packages installed; the images acquire the cookbooks from SCM or the Chef server and install everything needed to get up and running. What are the benefits and/or use cases for running a Chef Server? Are there any major benefits to having a Chef Server hold and serve the cookbooks versus using chef-solo with a script that pulls the cookbooks from SCM?
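
    For scale, the chef-solo half of that comparison is small enough to sketch here (repo URL, paths and recipe name are made up); what a Chef Server mainly adds on top of this is a central endpoint that nodes register with, stored node/attribute data with search across nodes, and not having to ship cookbooks to every machine yourself:

        # On each freshly booted image: pull cookbooks from SCM and converge locally
        git clone https://scm.example.com/ops/chef-repo.git /var/chef-repo

        # Minimal solo config pointing at the checked-out cookbooks
        echo 'cookbook_path "/var/chef-repo/cookbooks"' > /var/chef-repo/solo.rb

        # node.json holds the run list, e.g. {"run_list":["recipe[webapp]"]}
        chef-solo -c /var/chef-repo/solo.rb -j /var/chef-repo/node.json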

    Read the article

  • Securing source code with BitLocker

    - by Daniel Powell
    We need to deploy a web-based application at a client site, where it will sit on their local intranet. Part of our requirement is to provide some basic protection for our IP. I realise that nothing is a 100% guaranteed fix; we are just looking to make it a bit harder for most people. The server will be running Server 2008 and I was considering using BitLocker as a cheap and nasty way to protect it. From what I understand, assuming the motherboard supports it, we can use the transparent BitLocker mode, which means that moving the HDD to another PC will leave the HDD unreadable in that machine, barring some sort of cold boot attack to steal the encryption keys. Is this assumption correct? And if the motherboard or any other component fails in the PC and we need to replace it, do we lose access to our data, or is there a way to decrypt it (obviously accessible only to our company)? EDIT: we do have legal documents that cover this, we will be locking the PC physically, and the client will not have access to the PC (no Windows login) other than via the website we host on it.

    Read the article

  • Route port 3000 to an Apache2 alias

    - by user223470
    I have a Meteor application running on port 3000. I can successfully connect to it at www.myurl.com:3000, but would rather reach it via www.myurl.com/myappname. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                ProxyPass http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    I do not know how to continue from here to get the app served at www.mysite.com/myapp. In other situations I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction in this case. How do I configure Apache to serve the application on port 3000 at www.myurl.com/myapp?
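
    A sketch of the path-based variant (untested here): proxy only the /myapp location instead of the site root, and tell Meteor it lives under that prefix by setting ROOT_URL when starting the app, since generated asset and DDP URLs otherwise assume the site root:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests Off
            ProxyPreserveHost On

            # Trailing slashes matter: /myapp/foo is proxied to localhost:3000/foo
            <Location /myapp/>
                ProxyPass        http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    with the app started as, for example, ROOT_URL=http://myurl.com/myapp node main.js (or the equivalent in your Meteor start script). Depending on the Meteor version, websocket traffic may additionally need mod_proxy_wstunnel rules.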

    Read the article

  • How to migrate a running KVM (with full disk copy) to another node?

    - by klipz
    I'm doing tests with KVM, and I'd like to see if I can do a hot migration, meaning the virtual machine does not stop running during the migration (a few seconds of freeze is OK). I use a small cluster for my test: kvm1, kvm2, and kvmnfs. kvm1 and kvm2 run the virtual machines; kvmnfs is an NFS server whose export is mounted on /KVM on both kvm1 and kvm2. To migrate a VM (only RAM, in fact) from kvm1 to kvm2, I run the same kvm command on kvm2 (with -incoming tcp:0:4444) as on kvm1, then I use "migrate -d tcp:kvm2:4444". This works great, since the VM file is shared by both machines. Now I want to do a full migration (RAM + disk) of a VM whose file is local to kvm1 (no NFS any more) over to kvm2. I tried creating an empty file with touch on kvm2 and using the same kvm command line plus "-incoming ...". Then on kvm1 I ran "migrate -d tcp:kvm2:4444": it copies everything, then... the VM fails (any disk I/O gives an I/O error)! And my VM file on kvm2, the one I created with touch, still has a size of 0 bytes. What am I doing wrong? What is the exact command to use on kvm2, and what is the command to run, in the monitor, on kvm1?
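
    A sketch of what typically has to change (path, size and format below are examples; exact behaviour depends on the qemu/kvm version): the destination needs a real disk image of the same format and size rather than an empty file made with touch, and the monitor command needs the -b flag so block devices are copied along with the RAM:

        # On kvm2: pre-create a destination image matching the source (example values)
        qemu-img create -f qcow2 /var/lib/kvm/vm1.qcow2 20G

        # On kvm2: start the guest with exactly the same kvm command line as on kvm1,
        # plus:  -incoming tcp:0:4444

        # On kvm1, in the QEMU monitor: migrate RAM and block devices
        migrate -d -b tcp:kvm2:4444

        # Check progress until it reports completed
        info migrate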

    Read the article

  • Windows 7 Loading Very Slow

    - by Adnan
    Hi guys, I've had a problem that only started to occur yesterday. When I boot into Windows 7 and log on to my user account, the computer gets very laggy and slow for at least 5 minutes. Icons take ages to load, and everything is rendered unclickable. This happens for about five minutes after which everything goes back to normal. I tried restarting a few times to see if this is a recurring problem, and it is. I ran a full system scan with Microsoft Security Essentials and found nothing wrong, and I also defragmented the disk to increase performance. However, the problem still exists. Edit: For the past day, I've been trying to install Ubuntu on the same laptop. When installing it on a partition didn't work, I decided to use Wubi. Could this somehow be the problem? Also, my hard drive gets hot a lot, so could the heat be affecting the hard drive and maybe making it defective? Any help on this issue would be greatly, greatly appreciated.

    Read the article

  • Deny Access to directories from unauthorized users

    - by JasonS
    I am not being paid for this, so I would like to know the quickest way to do the following. A former client has a page which only members can access. This page links to a number of galleries, which he only wants members to access, but the galleries themselves are not protected by any kind of authentication. What I assume is the quickest way to do this is to create a .htaccess file which only allows people to view a gallery when they come from a certain referrer. Would that work? My current thinking is that I could use a PHP script to deploy a .htaccess file into each of the gallery directories (there are around 100 at the moment). I found this link, which might be what I am after, but to be honest I really don't get it. Is my thinking sound?
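
    A minimal sketch of that deploy-a-.htaccess-everywhere idea (the galleries path and members URL are made up; note that the Referer header is supplied by the browser and is easily spoofed or stripped, so this is obscurity rather than real authentication, which would mean putting the galleries behind the same login as the members page):

        <?php
        // Sketch: write a referrer-checking .htaccess into every gallery directory.
        // Requires mod_rewrite and AllowOverride enabled for these directories.
        $rules = "RewriteEngine On\n"
               . "RewriteCond %{HTTP_REFERER} !^https?://(www\\.)?example\\.com/members [NC]\n"
               . "RewriteRule .* - [F]\n";

        foreach (glob('/var/www/site/galleries/*', GLOB_ONLYDIR) as $dir) {
            file_put_contents($dir . '/.htaccess', $rules);
        }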

    Read the article
