Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • Xcodebuild failing to pick up environment values from project file?

    - by egrunin
    I'm using Xcode 3.2.6 on Mac OS X. I have a globally visible environment setting: ICU_SRC=~/Documents/icu/source. This really is an environment variable; it's set at login time, and when I open Terminal it's there. In my project, under Header Search Paths, I've added: $(ICU_SRC)/i18n $(ICU_SRC)/common These expand correctly when I compile inside the IDE. When I look at the build results, I see this: -I/Users/eric.grunin/Documents/icu/source/i18n -I/Users/eric.grunin/Documents/icu/source/common When I build from the command line, however, it fails. What I see is this: -I/i18n -I/common Here's the command I'm using to compile: /usr/bin/env -i xcodebuild -project my_project.xcodeproj -target "my_program" -configuration Release -sdk macosx10.6 build What am I doing wrong? Edited to add: Apple explains this in "Setting environment variables for user processes".
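    Worth noting when reproducing this: env -i launches xcodebuild with an empty environment, so $(ICU_SRC) has nothing to expand to. A minimal sketch of one workaround, assuming the value can simply be supplied as an explicit build-setting override (xcodebuild accepts trailing SETTING=value arguments):

        # Path is hypothetical; adjust to the actual ICU checkout location.
        xcodebuild -project my_project.xcodeproj -target "my_program" \
            -configuration Release -sdk macosx10.6 \
            ICU_SRC="$HOME/Documents/icu/source" build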

    Read the article

  • Using TFS Team Build 2010 to deploy to Dev site and create packages for Staging and Production sites

    - by Kb
    I am trying to configure TFS Team Build 2010 to deploy automatically to the development environment and to create deployment packages for the staging and production environments. In the MSBuild Arguments field of the build definition I have:

        /p:DeployOnBuild=True
        /p:DeployTarget=MsDeployPublish
        /p:MSDeployPublishMethod=RemoteAgent
        /p:CreatePackageOnPublish=True
        /p:DeployIISAppPath=devwebsitename
        /p:MsDeployServiceUrl=http://deployserver/MsDeployAgentService
        /p:UserName=username
        /p:Password=password

    Automatic deployment of the dev web site works, and I get a package generated for it. How can I get deployment packages for the other environments (Staging and Production) from the same build? Or am I missing some basic concept here?
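    A sketch of one common split (the property names are standard Web Publishing Pipeline arguments, but the staging/production wiring here is illustrative): let the build produce a package without deploying it by invoking the Package target, then parameterize the package at install time from the drop folder using the generated deploy script:

        rem Package-only MSBuild arguments (no deployment performed):
        /p:DeployOnBuild=True /p:DeployTarget=Package

        rem Later, from the drop folder, push the package to a given site
        rem (site name hypothetical):
        mysite.deploy.cmd /Y "-setParam:name='IIS Web Application Name',value='stagingwebsite'"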

    Read the article

  • Drupal: how to upgrade a running production website to a dev version?

    - by FractalizeR
    Can you help me understand how to do Drupal website deployment and development? Suppose I developed version 1.0 of the Berty&Frank website. I copied everything to their production server, and it is alive and kicking now. The site is already full of content and is growing. I am asked to add additional features to the website. I am now experimenting with how I can implement them in a dev version: I am creating/deleting content types, filling the created nodes with demo data just to see how they look, etc. Now I have found the way I want to go, and I want to upgrade the production website to the same structure as my dev version. How do I do that? Is the only way to manually repeat every change I made in the dev version?
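    For the common manual route, a minimal sketch (database name hypothetical); in this era of Drupal, structural changes are typically either replayed by hand from a change log, captured in update hooks, or exported with the Features module:

        # Always snapshot production before replaying structural changes.
        mysqldump -u root -p drupal_prod > prod-backup-$(date +%F).sql
        # Put the site into maintenance mode, re-apply the recorded
        # content-type changes, then verify before going live again.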

    Read the article

  • CodeIgniter Production and Development server on the same domain (no subdomain)

    - by Onigoetz
    Hi, I have googled this many times, but now I have to ask it here. I want to create a development/production workflow for a website. My constraint is that I use Facebook Connect (Facebook Graph now), so I need to have dev and prod on the same domain and server (to be able to log in and test the features). I thought I would edit the CodeIgniter index.php to redirect when I see a specific user agent (I can change the one my Firefox sends). Do you think that's a good idea, or do you have a better one? And now comes the eternal question: how can I deploy this the easy way? Should I use Capistrano or Phing, or simply a script with SVN? Please help me; I'm totally new to this deployment thing. I used to work directly in production for my little websites, or on other domains, but now that's no longer possible.
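    If plain SVN turns out to be enough, a minimal deployment sketch (repository URL and paths hypothetical); svn export produces a clean tree with no .svn folders, which is what you want in a public docroot:

        #!/bin/sh
        # Dev and prod live side by side under the same domain.
        svn checkout http://svn.example.com/mysite/trunk /var/www/dev
        svn export --force http://svn.example.com/mysite/trunk /var/www/prod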

    Read the article

  • Do you leave Windows Automatic Updates enabled on your production IIS server?

    - by Nobody
    If you were running a 24/7 website on Windows Server 2003 (IIS 6), would you leave the Windows Automatic Updates feature enabled, or would you turn it off? When enabled, you always get the latest security patches and bug fixes automatically as soon as they're available, which is the most secure choice. However, the machine will sometimes be rebooted automatically to apply the updates, leading to a couple of minutes of downtime in the middle of the night, and I've seen rare occasions where the machine does not restart correctly, resulting in further downtime. If automatic updates are off, when do you apply the patches? I guess you have to use a load balancer with multiple web servers: rotate each one out of the production site, apply patches manually, and put it back in. This can be logistically inconvenient when the load balancer is managed by a hosting company. You will also have machines in production that don't always have the latest security patches, and you have to routinely spend time deciding which patches to apply and when.

    Read the article

  • I built my Rails app with SQLite and without specifying any DB field sizes. Is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to have skipped the whole specify-and-validate-field-sizes part. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production DB will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validations, and update all my form partial views with input max lengths? Or am I missing a critical step in my development process?
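    One reassuring detail, worth verifying against your own schema: on MySQL, Rails maps :string columns without an explicit :limit to VARCHAR(255), so unsized columns don't migrate as unbounded fields. A quick way to see what the schema actually contains after migrating (database and table names hypothetical):

        mysql -u root -p myapp_production -e "SHOW CREATE TABLE users\G"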

    Read the article

  • Model association changes in production environment, specifically converting a model to polymorphic?

    - by dustmoo
    Hi everyone, I was hoping I could get feedback on making major changes to how a model works in an app that is already in production. In my case I have a model, Record, that has_many PhoneNumbers; currently it is a typical has_many/belongs_to association, with a record having many PhoneNumbers. Of course, I now have a feature for adding temporary, user-generated records, and these records will have PhoneNumbers too. I 'could' just add a user_record_id to the PhoneNumber model, but wouldn't it be better for this to be a polymorphic association? And if so, when you change how a model associates, how in the heck would I update the production database without breaking everything? .< Anyway, just looking for best practices in a situation like this. Thanks!
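    A sketch of what the switch implies at the schema level (the association name "callable" and the column names are hypothetical; Rails' polymorphic convention is a <name>_type string column plus a <name>_id integer), with the backfill run before the old foreign key is retired:

        # Add the polymorphic columns and backfill existing rows.
        mysql -u root -p myapp_production -e "
          ALTER TABLE phone_numbers
            ADD COLUMN callable_type VARCHAR(255),
            ADD COLUMN callable_id INT;
          UPDATE phone_numbers
            SET callable_type = 'Record', callable_id = record_id;"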

    Read the article

  • GlusterFS vs Ceph, which is better for production use for the moment?

    - by Mickey Shine
    I am evaluating GlusterFS and Ceph. It seems Gluster is FUSE-based, which means it may not be as fast as Ceph, but it looks like Gluster has a very friendly control panel and is easy to use. Ceph was merged into the Linux kernel a few days ago, which suggests it has a lot of potential and may be a good choice in the future. I am wondering which (if either of the two) is the better choice for production use. It would be nice if you could share your practical experiences.

    Read the article

  • Redundancy and automated failover using Forefront TMG 2010 Standard between production and DR sites?

    - by Albert Widjaja
    Hi, I'm using MS TMG 2010 Standard as my single firewall to publish my Exchange Server and IIS website to the internet. However, it is just one VM in the DMZ network with a single network card (vNIC). What sort of redundancy method is suitable for making this firewall VM redundant, or for failing it over automatically to my DR site? This is critical because, in a disaster recovery event, all the important email flowing through various mobile devices must keep working, which is impossible if this TMG 2010 VM is offline. Should it be done using:

    1. Multicast NLB
    2. Some other form of clustering
    3. VMware HA/FT (one VM in production, the other VM in the DR site on a different subnet?)

    Any suggestions and ideas will be appreciated. Thanks

    Read the article

  • Client can't reach my production webserver. It's their ISP's fault, but now what?

    - by MikeN
    I have a customer in Michigan who can't access my production SaaS webserver, which is hosted on Slicehost. All other companies across the US/Canada/Europe have no problem reaching the site. The problem occurs intermittently, and Slicehost customer service says it's a problem with the client's ISP. I got the IP address of my client, and pinging that IP address from my PROD server fails, but pinging it from my dev box or our separate blog server (also hosted on Slicehost) works. How do I debug a problem like this? I asked the client to reach out to their local ISP and ask about the problem. A traceroute shows that the packets are getting stopped on a Comcast Michigan node, which is the client's ISP. Is there anything more I can do to fix this problem for my client?
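    When escalating to the ISPs involved, it helps to capture the same probes from a failing and a working vantage point and show where the routes diverge; a sketch (client IP hypothetical, and assuming mtr is installed):

        # Run on both the PROD slice and the working blog slice:
        traceroute 203.0.113.45 > trace-$(hostname).txt
        mtr --report --report-cycles 60 203.0.113.45 > mtr-$(hostname).txt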

    Read the article

  • How to create a stub DNS zone to emulate my customer's production environment?

    - by Albert Widjaja
    Hi All, Is it possible to emulate my customer's production environment inside my AD domain by creating the same domain inside my primary DNS server? Can I create a mycustomer.com DNS zone (stub) just for the sake of listing a few database servers and application servers, while the other DNS records (e.g. MX and NS) still refer to the REAL entries, so that my Exchange Server's mail flow to mycustomer.com is unaffected? I ask because if I just create A records in my current domain for some of the servers, the FQDN is not exactly what I want. Thanks.
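    One caveat worth checking first (a sketch, assuming Windows DNS): a true stub zone holds only the NS/SOA/glue records copied from the real zone, so hand-picking individual A records means creating an authoritative zone instead, and then every lookup for mycustomer.com (including MX) resolves locally rather than against the real zone. Hypothetical names and addresses:

        dnscmd /ZoneAdd mycustomer.com /Primary /file mycustomer.com.dns
        dnscmd /RecordAdd mycustomer.com dbserver1 A 10.0.0.21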

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04, and I would like to install on it a VM with GitLab (using Vagrant and VirtualBox). Let's say the address to access GitLab is gitlab.mydomain.com; the DNS zone has been configured to point to the IP address of the server. I want users to be able to access GitLab from the outside (either to push to a repository or to use the web interface). The VM has been configured with its own IP address. That means that when browsing http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In that case, I guess it would only work for HTTP requests, not for pushing to a Git repository on the VM.
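    Apache can in fact front the VM for everything that travels over HTTP, including Git push/pull over http(s); only SSH-based pushes would need a separate port forward. A minimal vhost sketch (the VM's address is hypothetical; requires mod_proxy and mod_proxy_http):

        <VirtualHost *:80>
            ServerName gitlab.mydomain.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.56.10/
            ProxyPassReverse / http://192.168.56.10/
        </VirtualHost>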

    Read the article

  • What to have in sources.list on an Ubuntu LTS server (production)?

    - by nbr
    I have several Ubuntu 10.04 LTS servers in production, and I'm using apticron to check that my software is up to date, security-wise. However, by default Ubuntu has the lucid-updates repository enabled. This means lots of low-priority updates that I don't need and, thus, extra work for me. Is it okay to just remove the lucid-updates line(s) in sources.list? I would still get security updates via lucid-security, right? So this is what my sources.list would look like:

        deb http://se.archive.ubuntu.com/ubuntu/ lucid main restricted
        deb http://se.archive.ubuntu.com/ubuntu/ lucid universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe

    Read the article

  • What is the best time to schedule regular updates on an in-house production server?

    - by akira
    Given an in-house server running in production, I would like to keep the impact on the users as low as possible when deploying regular updates (to the server itself, not the user machines, though that would be a pretty similar problem). The obvious answer to my question is "at night, when the users are at home." But "night" is a long period of time. Should one start early in the evening, to catch problems with the update early and be ready to roll back? Or is it better to start early in the morning and use the first users as "guinea pigs" to surface problems faster? Or in the middle of the night, when the concentration of the person overseeing the update is pretty low, but it is guaranteed that no late-working users have open file handles? Are there any research papers on the topic?

    Read the article

  • MySQL: how to convert many MyISAM tables to InnoDB in a production database?

    - by Continuation
    We have a production database that is made up entirely of MyISAM tables. We are considering converting them to InnoDB to gain better concurrency and reliability. Can I just alter the MyISAM tables to InnoDB without shutting down MySQL? What are the recommended procedures here? How long will such a conversion take? All the tables together total about 700 MB. There are quite a large number of tables; is there any way to apply ALTER TABLE to all the MyISAM tables at once instead of doing it one by one? Any pitfalls I need to be aware of? Thank you
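    A sketch of the batch approach (database name hypothetical): generate one ALTER statement per MyISAM table from information_schema, review the file, then run it. Note that ALTER TABLE ... ENGINE=InnoDB rebuilds each table and blocks writes to it while copying, so the server stays up but a quiet window is still advisable:

        mysql -u root -p -N -e "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE table_schema = 'mydb' AND engine = 'MyISAM';" > convert.sql
        mysql -u root -p mydb < convert.sql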

    Read the article

  • Migrating a running production server to Xen, unmodified as a second HDD?

    - by DaveCol
    I have a production server that I am looking to virtualize via Xen. For this purpose I have purchased a new SATA HDD, on which I have promptly installed CentOS 5.5 x64 with Xen. Now I have two HDDs: /dev/sda1, running as the host with Xen installed, and /dev/sda2, which is the HDD where the original server is installed. Is it possible to use /dev/sda2 as a guest OS in the Xen server? Would I have to modify its kernel? Thank you for any input
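    A sketch of handing the old disk to a guest unmodified (device path copied from the question; note that a genuinely separate second drive would normally appear as /dev/sdb). Paravirtualized guests need a Xen-aware kernel, so running the install untouched usually means an HVM (fully virtualized) guest, which in turn requires VT-x/AMD-V support:

        # Fragment of a hypothetical /etc/xen/oldserver.cfg:
        disk = [ 'phy:/dev/sda2,hda,w' ]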

    Read the article

  • Data Masking for Oracle E-Business Suite

    - by Troy Kitch
    E-Business Suite customers can now use Oracle Data Masking to obscure sensitive information in non-production environments. Many organizations are inadvertently exposed when copying sensitive or regulated production data into non-production database environments for development, quality assurance or outsourcing purposes. Due to weak security controls and unmonitored access, these non-production environments have increasingly become the target of cyber criminals. Learn more about the announcement here.

    Read the article

  • WCF identity when moving from a dev to a prod environment

    - by Anders Abel
    I have a web service developed with WCF. In the development environment, the endpoint configuration has the following identity section:

        <identity>
          <dns value="myservice.devdomain.local" />
        </identity>

    myservice.devdomain.local is the DNS name used to reach the development version of the service. The binding used is:

        <basicHttpBinding>
          <binding name="myBinding">
            <security mode="TransportCredentialOnly">
              <transport clientCredentialType="Windows"/>
            </security>
          </binding>
        </basicHttpBinding>

    I am about to put this into production. The binding will be the same, but the address will be a new production address, myservice.proddomain.local. I plan to change the dns value in the configuration to myservice.proddomain.local in the production environment. However, this MSDN article on WCF identity makes me worried about the impact on the clients when I change the identity. There are two clients, one .NET and one Java, both developed against the dev instance of the service. The idea is to just reconfigure the endpoint used by the clients, without reloading the WSDL. But if the identity is somehow part of the WSDL, and the identity changes when deploying to prod, that might not work. Will the new identity in the prod version cause issues for clients that were developed using the dev WSDL? Do the Java and .NET clients handle this differently?
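    For what it's worth, a client-side sketch (endpoint address and contract name hypothetical): the identity a .NET client expects can be set in its own configuration alongside the new address, without regenerating anything from the WSDL:

        <client>
          <endpoint address="http://myservice.proddomain.local/MyService.svc"
                    binding="basicHttpBinding"
                    contract="IMyService">
            <identity>
              <dns value="myservice.proddomain.local" />
            </identity>
          </endpoint>
        </client>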

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this Server Fault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus was that it does not perform well enough for a production SQL Server, even under light load. So the new question is: how does LeftHand's SAN perform on the dedicated HP or Dell hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL Server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like how the SAN can perform async replication to the offsite location without my having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: although the individual box is very redundant in hardware, data spanned across multiple boxes is not redundant; if a node goes down, you have lost the only copy of the data sitting on that hardware (one thing is certain, all hardware fails... when? is the only question). I have also used a Xiotech SAN, well worth the money BTW, but I think it is overkill for the size of office I am targeting; the cost of its hardware redundancy puts it a little out of reach for the budget I am working with. Thank you, Keith

    Read the article

  • Can I set up a test server and then transfer everything to a different production server?

    - by Justin
    Hello, I am going to be setting up a "real" server, but it's not being shipped for another week. I was planning on setting up most of the server's functionality using an extra workstation I have: I wanted to set up Windows Server 2003 or 2008, IIS, Terminal Services, the firewall, and antivirus on this regular machine, and also install software like WinZip and VMware that will be used on the server. I can't ghost the machine, as I've done in the past, because the motherboard/CPU/etc. will all be different. Is there any way to export all of the "server settings", or something like that, so I can move everything from test to production? Is there any software out there, open-source or not, that does something similar? Some things will have to wait, such as setting up the file server completely in its RAID configuration, but I'd like to get the simple server stuff and network setup out of the way. Has anyone done this before? Or maybe there's a way to export all the server settings in some way? Thanks in advance! Justin
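    There is no blanket export that moves a whole Windows server's configuration between different hardware, but individual roles can often be exported and re-imported; a hedged example for IIS 6 on Windows Server 2003 (file paths hypothetical):

        cscript %SystemRoot%\system32\iiscnfg.vbs /export /f C:\iisbackup.xml /sp /lm/w3svc
        rem ...then, on the production machine:
        cscript %SystemRoot%\system32\iiscnfg.vbs /import /f C:\iisbackup.xml /sp /lm/w3svc /dp /lm/w3svc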

    Read the article

  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get into WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or by running Startup Repair or CHKDSK from WindowsRE). The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to go into Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk; however, it hung. I have since been unable to access the recovery environment after restarting, using any of these methods:

    1. Via the recovery partition (pressing F10 on boot)
    2. Via the recovery DVD (created on the same computer when it was healthy)
    3. Via a Windows Vista installation DVD

    All three methods produce the same result: the computer acknowledges the boot attempt, gets past the "Windows is loading files" screen and the Windows loading screen, and then stalls at a black screen while showing HDD activity (via the indicator light). After a few minutes the HDD activity ceases, and after a few more minutes the oversized cursor used in WindowsRE appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in that state overnight. What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot and run fine. In running perfectly normally, MemTest was able to produce a plethora of errors in my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so this seems plausible, assuming my knowledge of boot loading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

    Read the article

  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer that has no CD-ROM drive or internet access, and I have installed Ubuntu Server onto it. That is all up and running nicely, but now I'd like to install Xfce, GNOME, or something similar so I can load a desktop environment from the command line if I wish. Obviously, with internet access or a CD-ROM this would be a simple matter of using apt-get and letting it find and retrieve the packages, I assume, but I have neither. I do, however, have a USB drive, and I have used UNetbootin to make it into a bootable drive with the Ubuntu Server disk image files on it. I have mounted the USB drive to /media/usb0 and tried the command "sudo apt-cdrom add -d /media/usb0" to get apt to recognise the USB drive as an "Ubuntu CD" (a source of package files), but apt-get doesn't seem to be finding Xfce. I've tried "sudo apt-get install xfce" and "sudo apt-get install xfce4", but neither finds the package. I would prefer Xfce, but GNOME would be OK too. My question is: am I doing something wrong? I figured that the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop environment packages on it, so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop", but it couldn't find the package, even though it is listed in a MANIFEST file under the /casper/ directory. Anyone see where I'm going wrong? Maybe apt-get is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on them!? Thanks in advance guys, any input would be much appreciated. Cheers - James
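    One likely explanation, worth verifying against the actual disc contents: desktop ISOs are live images, so /casper/ holds a compressed squashfs filesystem rather than an apt package pool, and the MANIFEST lists the live system's contents, not installable .debs. apt-cdrom can only index discs with a pool/dists layout (such as the old alternate CDs). A hedged offline workaround, run on any internet-connected machine with the same Ubuntu release (package set illustrative; apt-get only prints URIs for packages that machine doesn't already have installed):

        apt-get --print-uris --yes install xfce4 | grep -o "'http[^']*'" | tr -d "'" > urls.txt
        wget --input-file urls.txt --directory-prefix debs/
        # Copy debs/ to the server via the USB drive, then install there:
        sudo dpkg -i /media/usb0/debs/*.deb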

    Read the article

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question "What is your oldest hardware that still works?", I'd like to ask: what is the oldest hardware you know of that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)? Most organizations will retire or upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years) but fortunately still running without problems; however, getting spares would have been very difficult, and since the software installation was undocumented, any significant system change or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I have also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants. Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

    Read the article
