Search Results

Search found 3844 results on 154 pages for 'deployment methodologies'.


  • IIS 7.5 401.3 Access Denied

    - by Jeffrey
    I am having a weird issue with IIS 7.5 on Windows 2008 R2 x64. I created a site in IIS and manually created a test file, index.html, and everything worked. When I try to do a deployment, I copy all the files from my local PC to the IIS server, try to access index.html (the properly deployed file), and get a 401.3 access denied error. If I then manually recreate index.html and copy the content into the newly created file, the page is accessible again... I just can't figure this out. So the issue is that IIS 7.5 can't serve files that have been copied from other PCs. I tried to reset/apply permission settings on the copied folders/files but nothing has worked. Please help. Thanks! By the way, the files I copied are just some HTML cut-ups, i.e. generic HTML, CSS and image files, nothing special.
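
    A 401.3 on copied-but-not-recreated files usually points to NTFS ACLs travelling with the files instead of being inherited from the destination folder. As a hedged starting point (the path is an example, not the asker's actual site root), resetting inheritance and granting the IIS worker group read access often clears this:

      icacls "C:\inetpub\wwwroot\mysite" /reset /T /C
      icacls "C:\inetpub\wwwroot\mysite" /grant "IIS_IUSRS:(OI)(CI)RX" /T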

    Read the article

  • OSSEC agent behind NAT

    - by Eric
    I am working on an OSSEC deployment where I will have multiple agents behind 1 public IP. Below is an example of the setup:

    Private network:
    - OSSEC-Agent1 (192.168.1.10)
    - OSSEC-Agent2 (192.168.50.33)
    - OSSEC-Agent3 (10.10.10.1)

    Those IPs NAT to 1 public IP (1.1.1.1), and 1.1.1.1 talks to the public OSSEC server on 2.2.2.2. I've read some OSSEC documentation talking about NAT here, but it doesn't tell me exactly what I need to know. Their example uses an entire /24 subnet, while mine will mainly have multiple agents behind only 1 public IP. With the setup so far, I brought Agent1 online fine and it is communicating with the OSSEC server. However, Agent2 continues to fail trying to connect to 2.2.2.2, even though when I added the key I had the correct name for it, so I know it talked to the portal at least once for that information. I'm assuming it's just getting confused by the multiple keys mapped to 1 public IP. I basically want to know if this is possible and/or if I'm just overlooking something simple. Any help would be greatly appreciated.
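
    For what it's worth, the OSSEC manual's suggested workaround for many agents behind one NAT address is to register them with "any" (or the shared public IP) as the source address, so the server stops trying to tell agents apart by IP. A sketch of what /var/ossec/etc/client.keys might then look like on the server (IDs, names and keys here are made up):

      001 agent1 any 8f2c0d1e...hexkey...
      002 agent2 any 77ab9c3f...hexkey...
      003 agent3 any 5d4e1b2a...hexkey...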

    Read the article

  • Mail-Enabled SharePoint 2010 Loop Detected

    - by vlannoob
    I have set up a small SharePoint 2010 deployment and it is working fine, for now. I have run through one of the more popular step-by-step guides to mail-enable the install. What I have now is internal and external mail addressed to my mail-enabled list hitting my Exchange 2010 server (on another Win2k8R2 box) and sitting in the submission queue with a Loop Detected error; the messages progress no further. Everything appears OK as per the guide. I have set up an SMTP role on the SharePoint box, as per the guide. I have set up a new Send Connector on the Exchange 2010 server, as per the guide. Any ideas on troubleshooting here?
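
    A Loop Detected error at the submission queue commonly means the send connector's address space overlaps a domain Exchange believes it is authoritative for, so the message routes straight back to Exchange. A hedged check, assuming a dedicated subdomain such as sharepoint.contoso.local for the list addresses (all names here are hypothetical):

      # Exchange Management Shell: the address space must NOT appear as an accepted domain
      Get-AcceptedDomain
      New-SendConnector -Name "To SharePoint" -AddressSpaces "sharepoint.contoso.local" `
          -SmartHosts "sp01.contoso.local" -DNSRoutingEnabled $false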

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to let users compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and it fetches information about Parts from the company's inventory system via a web service. Aside from external users it is also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while because most of the operations are done in memory using a stateful bean and only occasionally stored in or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB of RAM and a single 2-core "modern CPU" around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large Instance (64-bit, 7.5GB RAM, 2 cores at 1GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are any real-world experiences with something similar. Thank you very much!
    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

    Read the article

  • PXE in an 802.1X environment

    - by newmanth
    My organization is about to implement 802.1X on our enterprise, but we currently use PXE-based OS deployment sequences in SCCM. I'm looking for a way to continue using PXE in an 802.1X environment. Our infrastructure uses Cisco network gear running IOS 12.2 (or newer). We are an all-Windows network and all clients support 802.1X. All new workstations have Intel AMT available (but not factory-configured). In a worst-case scenario we'll use a guest VLAN for OSD, but I'd rather have the OSD occur in an authenticated session. I've seen white papers that describe using AMT to act as a supplicant for PXE boot, but I can't find any implementation details...
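
    For the guest-VLAN fallback mentioned above, a minimal IOS 12.2-era interface sketch looks roughly like this (the VLAN number and interface are placeholders): a host that never answers EAP, such as a PXE ROM, falls into the guest VLAN, where the DHCP/PXE helpers can live.

      interface FastEthernet0/1
       switchport mode access
       dot1x port-control auto
       dot1x guest-vlan 100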

    Read the article

  • Deploying new code live

    - by nicoX
    What's the best practice for deploying new code on a live (e-commerce) site? For now I stop Apache for +/- 10 seconds while renaming the directory public_html_new to public_html and the old one to public_html_old; this creates a short downtime before I start Apache again. The same question applies when using Git to pull the new repo into the live directory: can I pull the repo while the site is active? And what about if I need to copy a DB as well? During the tar compression of the live site (for backup purposes) I noticed that changes occurred in the media directory. That indicated to me that files keep changing periodically, and I wonder whether those changes can interfere if Apache is not stopped during deployment.
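
    One common zero-downtime variant of this rename dance is to point Apache's DocumentRoot at a symlink and swap the symlink atomically; rename(2) is atomic, so no Apache stop is needed. A sketch, assuming GNU coreutils and hypothetical release paths:

      # stage the new code alongside the old release
      rsync -a public_html_new/ /var/www/releases/r42/
      # build the new symlink, then atomically rename it over the live one
      ln -s /var/www/releases/r42 /var/www/current.tmp
      mv -Tf /var/www/current.tmp /var/www/current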

    Read the article

  • Flexible classroom environments (OS, Office)

    - by HannesFostie
    I work in the IT department of a training center. We still offer XP and Office 2003 courses but also offer Vista, Win7 and Office 2007. Currently we use VMs on VMware Server, but this is obviously not a superb choice. We're thinking of implementing something like VDI (brainstorm phase, we hardly have any details), but I decided to check here whether people have some clever alternatives. Requirements:

    - Flexible when it comes to deployment
    - Centralized management would be a big plus
    - Allow for different software, whether compatible or not (all of Office except Outlook can be installed simultaneously; for Outlook you need to choose between 2003 and 2007)
    - Allow for different OSes

    We have a big enough budget to implement a proper SAN environment to accommodate the virtualization solution, whatever kind it may be. A support contract will probably be necessary as well, because we need to be able to offer quick solutions to problems, and with only 2 sysadmins that is simply impossible to guarantee.

    Read the article

  • What are the benefits of running chef-server instead of chef-solo?

    - by strife25
    I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up and running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like so:

    - Our web app code, dependencies, and Chef cookbooks are stored in SCM
    - A build is executed and creates a single package for images to acquire and test against
    - The build engine then deploys new cloud images that run a chef client to get packages installed; the images acquire the cookbooks from SCM or the Chef server and install everything to get up and running

    What are the benefits and/or use cases for getting a Chef Server running? Are there any major benefits to having a Chef Server hold and acquire the cookbooks from SCM vs. using chef-solo and having a script that will pull the cookbooks from SCM?
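
    For reference, the chef-solo half of that comparison is small; a sketch of the moving parts, assuming a cookbook tarball published by the build (the URL and run list are hypothetical):

      # solo.rb
      file_cache_path "/var/chef/cache"
      cookbook_path   "/var/chef/cookbooks"

      # node.json
      { "run_list": [ "recipe[webapp]" ] }

      # converge, fetching cookbooks from the build artifact
      chef-solo -c solo.rb -j node.json -r http://builds.example.com/cookbooks.tar.gz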

    Read the article

  • IIS 7 - Provisioning portal

    - by Doug
    I want to set up our production IIS environments with a provisioning portal to ensure that deployment staff always set up sites in a uniform configuration, and so that they don't actually have remote access to the servers directly. What is the best 'simple' provisioning tool for such a purpose? Do people write their own using something like PowerShell remoting? I don't want to install a tool like HELM or similar, as it feels like it creates unnecessary bloat on top of a production environment. Features should include:

    - create a new website and app pool combo
    - restart, start and stop application pools
    - change bindings on websites
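
    If the roll-your-own route wins, those three features map onto the WebAdministration module almost one-to-one; a hedged sketch (site name, port and paths are placeholders):

      Import-Module WebAdministration
      # create a new website + app pool combo
      New-WebAppPool -Name "ClientA"
      New-Website -Name "ClientA" -Port 80 -HostHeader "clienta.example.com" `
          -PhysicalPath "D:\sites\clienta" -ApplicationPool "ClientA"
      # restart / start / stop application pools
      Restart-WebAppPool "ClientA"
      # change bindings on websites
      Set-WebBinding -Name "ClientA" -BindingInformation "*:80:clienta.example.com" `
          -PropertyName Port -Value 8080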

    Read the article

  • Upgraded from VS 2008 -> VS 2010. Can't Connect to SQL Server in Staging Environment

    - by Bob Kaufman
    I have a test application written in C#/ASP.NET that I've developed using Visual Studio 2008 Professional/.NET 3.5 which connects to a local SQL Server 2008 Express instance. I upgraded the development machine to Visual Studio 2010 Professional maintaining .NET 3.5 and everything in the development environment continues to work correctly. Upon deployment of the new app to an internal staging machine, that app cannot connect to its local SQL Server 2008 Express database. I get the customary "server not found" error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible... Does something need to be upgraded on the staging machine to be able to host a Visual Studio 2010/.NET 3.5 application?
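
    Nothing in the VS 2008 to 2010 move should change how a .NET 3.5 app resolves SQL Server, so a first hedged check is whether the deployed web.config still carries the named-instance data source the staging box expects; something like this (all names are illustrative):

      <connectionStrings>
        <add name="AppDb"
             connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=TestApp;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>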

    Read the article

  • Is using Capistrano for user maintenance tasks on university lab feasible?

    - by danielkza
    I've been looking around for tools to replace some legacy scripts for creating and maintaining accounts in a university computer lab ecosystem consisting of things like:

    - LDAP and Kerberos for authentication
    - User home storage and web pages
    - Entries in an SQL database
    - Printing quotas
    - Mailing lists, etc.

    I'd also like to automate machine and VM membership for Kerberos and Puppet if possible. I've found Capistrano, and while the basic principle of running tasks on remote hosts through SSH seems to fit, and the DSL in Ruby looks quite nice, I've found that most documentation relates to application deployment, not generic tasks. I'm also not aware of any good way to parameterize tasks so I can pass in the user information for account creation. Is there something about Capistrano I am missing, or is it not the correct tool for this job? Are there any more useful alternatives?
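
    On the parameterization point specifically: the usual Capistrano 2.x idiom is to pass arguments through environment variables, since tasks themselves don't take parameters. A sketch (the role name and helper script are hypothetical):

      desc "Create a lab user; usage: cap create_user USER_NAME=jdoe"
      task :create_user, :roles => :lab do
        username = ENV['USER_NAME'] or abort "USER_NAME is required"
        run "sudo /usr/local/sbin/lab-adduser #{username}"
      end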

    Read the article

  • Advice on VMware hardware requirements and host OS

    - by edwin.nathaniel
    Hi All, I'm a newbie developer wanting to learn a bit about virtualization (from the IT point of view, not theoretical/academic). What I'd like to do:

    - Prepare a machine
    - Install VMware or VirtualBox
    - Prepare 3 guest OSes (one Win2k8 Server, 2 Ubuntu Server)
    - Win2k8 will run SQL Server 2k8 and IIS (for ASP.NET MVC deployment)
    - 1 Ubuntu Server for Drupal, SugarCRM, MediaWiki, typical LAMP stuff
    - 1 Ubuntu Server for Java (Tomcat/Jetty + MySQL/PostgreSQL)

    What I'd like to know:

    - What would be the ideal host OS, such that the host OS does not spend too many resources on itself but boosts the VM instances (e.g. does Win2k8 perform better than Linux?)
    - What would be the ideal machine for this (preferably an AMD-based chip)

    I'm not expecting the best performance out of this setup, just a decent one to host one Drupal instance, one ASP.NET MVC app (future, not now), and one Tomcat/Jetty instance. NB: If you have better suggestions on the setup, feel free to let me know (e.g. maybe Drupal and Tomcat can share one instance and the database can move to another instance, instead of mapping 1 instance to 1 web server and 1 DB server). Thank you.

    Read the article

  • Ruby, Rails & MySQL parity between Mac Client (10.6) & XServe (10.5)

    - by Meltemi
    We're setting up a RoR environment with development on Mac OS X Client (10.6.3), then using a Mac OS X Server (10.5.8) for testing and eventually deployment. I'd like to get as many systems in sync on these machines as possible, and I'm wondering if there are any pitfalls. I seem to understand what's necessary under Client, but Server has some hardwired stuff that I want to make sure doesn't break... or is updated correctly. Currently installed on both machines we have:

    OS X Client (10.6.3):
    - Ruby 1.8.7
    - Rails 2.3.5
    - MySQL (not installed yet)

    OS X Server (10.5.8):
    - Ruby 1.8.6
    - Rails 2.3.5
    - MySQL Ver 14.12 Distrib 5.0.82

    Any suggestions? Ideally from someone who's done this on Leopard Server as well, but I'll listen to general tips & procedures.
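
    A quick, hedged sanity pass for keeping the two boxes honest (RubyGems syntax of the Rails 2.x era); running these on both machines makes any drift visible before it bites:

      ruby -v            # expect 1.8.7 on Client, 1.8.6 on Server today
      rails -v           # expect 2.3.5 on both
      gem list rails mysql
      sudo gem install rails -v 2.3.5   # pin the exact version when updating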

    Read the article

  • SCVMM upgrade scenario

    - by pigeon
    I've read some information on TechNet about upgrading SCVMM 2008 to 2012, but can't quite figure out the best way to approach this. The current setup is that we have SCVMM 2008 R2 installed, but, against best practice, it was actually installed on the Hyper-V host machine: since it's a small-scale deployment, it's just a single-server setup with SCVMM sitting on the same host rather than in a VM. From what I've read, an in-place upgrade should be possible, which will incur a restart, but we don't have the luxury of another server to shift the VMs onto while doing this, and we don't want to risk anything happening to the Hyper-V role. Ideally I would probably prefer to get SCVMM 2012 into a VM of its own and remove the 2008 version from the host machine. Has anyone done an upgrade like this, or do you have any recommendations about how to approach it?

    Read the article

  • What are the "must-have" security tools for small organizations?

    - by Berkay
    One of my friends has started a company; it's a small-scale company with 40 workers, and two people are responsible for security and IT-related issues. He manages the LAN, the company's web page, e-mail configuration, printer hardware modification, application deployment, etc. At this point he needs to provide security measures including access controls, authentication, web server security and so on. Which tools do you use for securing, monitoring and controlling your systems? Are you paying for these tools or are they open source? This question stems from the security administrator's requests to my friend: the admin proposes buying some tools for the company, and my friend hesitates to pay that much for them (or so he tells me).

    Read the article

  • Choosing between cloud (Cloud Foundry) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloud Foundry and Ubuntu, and this got me thinking about choosing our infrastructure: if we want to use our own servers, what's the advantage of a cloud layer on top of virtual servers vs. just using virtual servers and a VPN? If we develop for the cloud now, we can quickly move to a cloud provider later if we need help; but other than that, what are the advantages and disadvantages of a private cloud in these areas?

    - speed of development, testing and deployment
    - server management
    - security
    - performance: will having an extra layer (the cloud) take a hit on server performance, and how big?
    - any other advantages/disadvantages?

    Read the article

  • Web Deploy to IIS7 fails with 401 (Unauthorized)

    - by Trex
    We have IIS7 running on Windows Web Server 2008 R2, set up to support Web Deploy. It worked fine when we used the default Administrator account. We recently disabled this account (for security reasons) and are now trying to deploy using another account which is a member of the Administrators group, but the deploy fails with 401 (Unauthorized). More specifically, it says:

      Connected to '<IP>' using Web Deployment Agent Service, but could not authorize.
      Make sure you are an admin on '<IP>'.
      The remote server returned an error: (401) Unauthorized.

    Does anybody have any idea why this is happening? Thanks. Trex
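
    A frequent culprit here is UAC remote-token filtering: over the network, non-built-in members of Administrators receive a filtered (non-admin) token, and the built-in Administrator account is exempt, which matches the symptom exactly. The commonly cited registry switch to disable the filtering is shown below as a sketch (apply on the server; a reboot may be needed):

      reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
          /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f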

    Read the article

  • Moving from Exchange 2003 to Exchange 2010

    - by pcampbell
    Consider a small-to-medium business' deployment of Exchange 2003. The question is about migrating to Exchange 2010. Here's a bit about the landscape:

    - Current state is 50-100 users/mailboxes, with the majority using Outlook 2007
    - OWA enabled
    - desktop users are NOT running in Cached Exchange Mode
    - laptop users ARE running in Cached Exchange Mode
    - a single Exchange server with modest or reasonable specs for the day (3GHz, multi-core, 4GB, Win 2003 32-bit)

    Questions:

    - Do you have any suggestions for the admin team regarding the upgrade path/steps from Exchange 2003 to 2010? Considering the requirement for a 64-bit OS, assume a new, separate machine ready to go with Win 2008. Have I missed any details?
    - Where might virtualization help in this project?
    - Any lessons learned in previous upgrades (2007 or 2010) would be appreciated!

    Read the article

  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and Web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently, and I'm wondering if there's a good, valid reason for doing so. First, my variables:

      WAS_HOME="/opt/IBM/WebSphere/AppServer"
      WAS_PROFILE_NAME="AppSrv01"
      WAS_APP_SERVER="server1"
      WAS_WEB_SERVER="webserver1"

    How I had the start commands:

      "${WAS_HOME}/bin/startManager.sh"
      "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
      "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
      "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER

    I was told that I should do it like this instead:

      WAS_DMGR="Dmgr01"   # Added variable
      "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
      "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
      "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
      "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER

    How is the second way of starting everything up for WebSphere any better or more correct than the first, original way?

    Read the article

  • Virtualbox PXE Boot Failing with a Windows Server 2008 R2 Server

    - by Vbitz
    Some fast help on this would be good; I have been at this problem for 14 hours. In a VirtualBox test environment I have 2 virtual machines networked together using an internal network (no traffic runs through the host, it is all at a software level). One is a fresh client with 512MB of RAM and a dual-core setup; the other is the server with 1.5GB of RAM, running Server 2008 R2. The server is configured as a DNS server, DHCP server and domain controller, and it also serves PXE booting through WDS (Windows Deployment Services). Both machines can see each other and I am able to start a network boot. The issue comes at the second-to-last stage of the pre-Windows PE install: the TFTP download of boot.sdi starts but stalls during the boot process.
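
    One workaround that comes up repeatedly for TFTP stalls on boot.sdi is tuning the TFTP block size the WDS boot entry requests, since some virtual NICs mishandle large blocks. This is a hedged sketch only, run on the WDS server (the RemoteInstall path is the WDS default and is an assumption here):

      bcdedit /store C:\RemoteInstall\Boot\x64\default.bcd ^
          /set {ramdiskoptions} ramdisktftpblocksize 1024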

    Read the article

  • HP DL180 G6 P410 8x SATA 1TB, what is the optimal configuration?

    - by Oneiroi
    I have an HP DL180 G6 with a P410 RAID controller. Presently this runs 4x 1TB Samsung SpinPoint SATA drives in a RAID 10 configuration using default settings. I am about to add a backplane to increase the drive capacity from 4 to 12 drives, and I plan to install 4 more 1TB SATA drives. The drives are matched and have close serial numbers (they arrived together on the manufacturer's pallet): model HD103UJ, 1000GB, 7200rpm, 32MB cache, rated for 3Gb/s. I will also be installing RHEL 6.1 x86_64. My question is: what would be the optimal RAID settings (stripe size etc.) for this configuration? To recap:

    - 8x HD103UJ 1000GB/7200rpm/32M, rated for 3Gb/s
    - RAID 10 configuration

    Thanks in advance. Update on the server's role: it is to become an iSCSI target for an internal OpenStack deployment currently underway (Glance). It will also provide virtualization through KVM.
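
    Once the new drives are in, the array can be inspected and (re)built from the OS with HP's hpacucli tool; a hedged sketch, assuming the controller sits in slot 0 and a fresh 8-drive RAID 10 logical drive with a 256KB stripe (the values are examples to tune, not recommendations):

      hpacucli ctrl slot=0 pd all show               # list physical drives
      hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0 ss=256
      hpacucli ctrl slot=0 ld all show detail        # verify stripe size and status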

    Read the article

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out whether my NFSv3 deployment is performing safe asynchronous writes. I suspect it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in write calls and to look for 'commit' calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:

      dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512

    However, with smaller block sizes, writes appear to be performed strictly synchronously:

      dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360

    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be performed asynchronously?
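
    A hedged line of investigation: the client can only issue UNSTABLE writes when it batches dirty pages into wsize-sized WRITE calls, so the negotiated wsize is worth checking before anything else. A sketch (the export path below is a placeholder; only /proj/re3 comes from the question):

      # see what wsize the mount actually negotiated
      grep /proj/re3 /proc/mounts
      # remount requesting a larger write size (takes effect on a fresh mount)
      umount /proj/re3 && mount -t nfs -o vers=3,wsize=1048576 server:/export /proj/re3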

    Read the article

  • Amazon RDS Pros/Cons of Multiple DBs per instance

    - by Joe Flowers
    I run two completely independent websites, and I am moving their MySQL databases to Amazon RDS. I'm not going to do Multi-AZ deployment; let's remove that variable from this question. I'm not sure whether to create a single RDS instance with two databases, or two RDS instances with a single database each. Ignore cost for the sake of this question. I will not hit the 1TB data limit, so let's ignore that as well. However, it is extremely important that crashing one of the websites doesn't impact the other. Based on this document - http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/Concepts.DBInstance.html - I'm assuming that if I write terrible code that crashes one of the databases in a given RDS instance, it could possibly take down the entire RDS instance (and thus inadvertently affect the other database). Is that correct? Thanks

    Read the article

  • django wsgi multiple projects different url same apache server

    - by Thomas Schultz
    Hello, I'm trying to get 2 separate Django projects running on the same Apache server with mod_wsgi; they are under the same domain but different URLs, like www.example.com/site1/ and www.example.com/site2/. What I'm trying to do is something like...

      <VirtualHost *:80>
          ServerName www.example.com
          <Location "/site1/">
              DocumentRoot "/var/www/html/site1"
              WSGIScriptAlias / /var/www/html/site1/django.wsgi
          </Location>
          <Location "/site2/">
              DocumentRoot "/var/www/html/site2"
              WSGIScriptAlias / /var/www/html/site2/django.wsgi
          </Location>
      </VirtualHost>

    The closest thing I've seen is this: http://docs.djangoproject.com/en/dev/howto/deployment/modpython/ but "mysite" is different for both of these cases, and they're using mod_python instead of mod_wsgi. Any help with this would be great, thanks!
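
    For reference, neither DocumentRoot nor WSGIScriptAlias is valid inside <Location>; the usual mod_wsgi pattern mounts each project at its own URL prefix directly at virtual-host level. A hedged sketch (the paths match the question; everything else is an assumption):

      <VirtualHost *:80>
          ServerName www.example.com
          WSGIScriptAlias /site1 /var/www/html/site1/django.wsgi
          WSGIScriptAlias /site2 /var/www/html/site2/django.wsgi
          <Directory /var/www/html/site1>
              Order allow,deny
              Allow from all
          </Directory>
          <Directory /var/www/html/site2>
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>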

    Read the article

  • Configure custom SSL certificate for RDP on Windows Server 2012 in Remote Administration mode?

    - by Ryan Bolger
    So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?
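
    One approach that worked on earlier Windows versions and does not depend on the RDS role is setting the certificate thumbprint directly on the Win32_TSGeneralSetting WMI class; a hedged sketch from an elevated prompt (the thumbprint is a placeholder, and the certificate must already sit in the computer's Personal store):

      wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting ^
          Set SSLCertificateSHA1Hash="0123456789ABCDEF0123456789ABCDEF01234567"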

    Read the article
