Search Results

Search found 6952 results on 279 pages for 'sharepoint deployment'.


  • AMIs in Amazon EC2

    - by Jack of Trades
    I really like the Amazon EC2 environment, and thought I'd spend a bit of time playing around with various types of public (Windows!) AMI servers. But testing has been a bit, well, questionable. Some of my findings:
    - It's very difficult to know what exactly a specific public EC2 image is supposed to do. Many images come with little to no information.
    - I can't seem to find the passwords to log onto various Windows images. Why are they public if they can't be used?
    - Lots of images are S3-backed rather than EBS-backed. This is very annoying, as S3 takes a lot longer to do pretty much anything (stop, image, etc.). I am only testing images here, so of course I don't question the value of S3 for other uses.
    - The description of what an image does is almost useless and many times confusing.
    Have others come across these EC2 issues? Again, my interest was just to play around with public images for testing/experimentation, so these issues may not be relevant for more typical EC2 deployment uses.
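    A note for anyone triaging the same way today: the public catalogue can be filtered for EBS-backed Windows images before launching anything. A minimal sketch with the AWS CLI (a newer tool than this question; the filter names come from the describe-images API and the result set can still be large):

        # List only EBS-backed, Windows, Amazon-owned public AMIs
        aws ec2 describe-images \
            --owners amazon \
            --filters "Name=root-device-type,Values=ebs" \
                      "Name=platform,Values=windows" \
            --query "Images[].[ImageId,Name,Description]" \
            --output table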

    Read the article

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint DBs (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team-project sites work, we have no access to our documents, and we can't create new team projects. We can still work with version control, though. We wanted to "play" with the server and try to fix it without affecting the users, so we used VMware P2V to duplicate the server to a virtual one. We then changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing these errors in the event log:
    - TF53002: Unable to obtain registration data for application Build.
    - TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
    - TF53002: Unable to obtain registration data for application VersionControl.
    - TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.
    The solutions suggested in this blog post did not assist, so now we're kind of stuck. Any assistance will be appreciated.

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites). What we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live/public environment. We use this command: rm -r websites. This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we see files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?
    - Our .tar backup does not include these files.
    - We do not want to use the shred command.
    - We do not want to use 3rd-party applications.
    - The solution should be functional via terminal (SSH).
    Many thanks! EDIT: Er... we fixed it. Turns out the files that were reappearing came from a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
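    For anyone hitting the same thing: rm -r removes a symbolic link itself but never follows it, so anything on the far side of the link survives and comes back when the link is restored from the tarball. A quick sketch to spot links inside a tree before deleting it (standard coreutils/findutils, nothing exotic):

        # List any symlinks under the tree and where they point
        find /var/www/websites -type l -exec ls -l {} \;

        # Delete the tree, then clean up the link targets explicitly
        rm -r /var/www/websites
        rm -r /path/to/link/target    # hypothetical path - use the target shown above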

    Read the article

  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using squid 2.6.22 (CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages when they require NTLM or Kerberos auth. I tested with SharePoint 2007 and tried all 3 authentication methods (NTLM, Kerberos, Basic). Accessing the site without squid works in all cases. When I access the same page with squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most of the topics whirl around auth_param for authenticating users to squid rather than authenticating users to a webpage behind squid. Could anyone help? Edit: Sorry, but my first test was totally screwed up. I tested against the wrong webservers (memo to myself: always check assumptions before testing). Now I realize the problem scenario is completely different:
    - Kerberos works for IE
    - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
    - NTLM works for IE
    - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)
    By the way: the feature that provides NTLM passthrough in squid is called "connection pinning", advertised with the HTTP header "Proxy-support: Session-based-authentication".
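    A quick way to take the browsers out of the picture is to drive the NTLM handshake through the proxy with curl: if this succeeds, squid's connection pinning is working and the remaining problem is Firefox-side configuration. A sketch (host names are hypothetical; curl's --ntlm authenticates to the origin server, not to the proxy):

        # Attempt NTLM auth against the SharePoint site through squid
        curl -v --ntlm -u 'DOMAIN\user:password' \
             -x http://squidhost:3128 \
             http://sharepoint.example.com/default.aspx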

    Read the article

  • Installing FIREFOX with extensions/addons manually? (not really auto install)

    - by BrownChiLD
    I've been reading around with regards to creating Firefox installers, bundling them with addons, using scripts, CLI options and a whole bunch of other stuff, but it seems that going this route is just too complicated and time consuming. Since I don't mind a bit of manual copying of files, I was planning to do the following on my test machine:
    1) install Firefox AND configure it the way I want it
    2) install addons AND set the configurations for them
    3) set advanced configurations for Firefox (about:config)
    Then once I'm all set, I simply copy the contents of the Firefox profile folder (for this particular test it's ....\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is:
    1) install the same version (offline installer) of Firefox I used
    2) overwrite the contents of the new profile folder (randomly named by the Firefox installer as usual)
    This should set all my configs and addons, right? Or what other folders do I have to back up and copy manually into the new profile folder? I don't think I need to tinker with any registries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages etc. PS: I do this a lot with other simple (and some complex) software that I use and it seems to work fine for years; I'm just not sure about Firefox and how it's structured.
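    Broadly, yes: extensions and about:config settings live inside the profile folder (prefs.js and the extensions subfolder are the important pieces), so copying its contents into the freshly created profile should carry them over, provided Firefox is closed during the copy. A hedged sketch of the deployment step (the source path and the random profile name are placeholders):

        rem Mirror the prepared profile into the new machine's profile folder
        robocopy "C:\deploy\6m0mef0s.default" ^
                 "%LOCALAPPDATA%\Mozilla\Firefox\Profiles\xxxxxxxx.default" /E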

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7 32/64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy as follows:
    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application
    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc. - we have the Windows firewall disabled already). That is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)
    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict SEPM and the Desktop Mgmt. solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers - 16GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers - 16GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).
    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections - many thanks!
    Rihatum

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is trying to figure out how to get the load balancing to detect whether a particular VM has failed and not route traffic to that VM. As far as I can tell, there are only two load-balancing options:
    1) Have multiple VMs (web01, web02, web03 etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced.
    2) Create multiple 'cloud services', put a single web VM in each and create a traffic manager service across all these services.
    It appears that (1) is extremely simplistic and doesn't attempt any host failure detection. (2) appears to be much more varied, but requires me to put all my webservers in their own individual cloud services. Traffic Manager appears to be aimed much more at geographic failover scenarios, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?
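    For what it's worth, option (1) can do failure detection if the load-balanced endpoint set carries a custom health probe; the balancer then stops routing to any VM whose probe stops answering. A sketch using the classic (service-management era) Azure PowerShell cmdlets - the service, VM and probe names are placeholders and the parameter names are from memory, so verify against your module version:

        # Add a load-balanced endpoint with an HTTP health probe to each web VM
        Get-AzureVM -ServiceName "mycloudsvc" -Name "web01" |
            Add-AzureEndpoint -Name "http" -Protocol tcp `
                -LocalPort 80 -PublicPort 80 -LBSetName "weblb" `
                -ProbeProtocol http -ProbePort 80 -ProbePath "/healthcheck" |
            Update-AzureVM

    The probe path should be a page that exercises the app (not a static file), so a wedged VM actually fails the check.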

    Read the article

  • Issues with ASP.NET via Apache/mod_mono on Ubuntu.

    - by Matthew Scharley
    I run an Ubuntu test server, and my deployment system is also Ubuntu. I've recently been trying to get ASP.NET to work on my test server so that we can take it live. I managed to get it installed, and configured properly, and my application is installed and running, but I can't get anything to work. The error I keep receiving is below, if anyone has any clue what might be going on, it would be greatly appreciated.
        Server Error in '/' Application
        Standard output has not been redirected or process has not been started.
        Description: HTTP 500. Error processing request.
        Stack Trace: System.InvalidOperationException: Standard output has not been redirected or process has not been started.
          at System.Diagnostics.Process.CancelErrorRead () [0x00000]
          at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:CancelErrorRead ()
          at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000]
          at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000]
        Version information: Mono Version: 2.0.50727.42; ASP.NET Version: 2.0.50727.42
        Apache version string: Apache/2.2.11 (Ubuntu) mod_mono/2.0 PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch Server at dev Port 80
    PS: I had to add three DLLs to the /bin directory in my application, copying them from Windows because I couldn't find them in any of Mono's packages. This might or might not be causing problems, I don't know. The list that I had to add is:
    - System.Web.Abstractions
    - System.Web.Routing
    - System.Web.Mvc
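    That particular InvalidOperationException comes from Mono's CodeDom trying to spawn the external C# compiler and failing to start the process, which usually means the compiler binary isn't installed or isn't reachable by the Apache user. A hedged first check (the package name is from Ubuntu of that era - an assumption worth confirming with apt-cache search):

        # Is the C# compiler that System.Web's CodeDom shells out to present?
        which gmcs || sudo apt-get install mono-gmcs

        # Then restart Apache and watch its log while re-requesting the page
        sudo /etc/init.d/apache2 restart
        tail -f /var/log/apache2/error.log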

    Read the article

  • Rails 3 passenger nginx application spawner server error on Synology NAS

    - by peresleguine
    Question updated, please read UPD2. I'm trying to deploy an app through the passenger nginx module on a DS710+ (ruby 1.9.2p0 installed). There is a syntax error relative to the has_and_belongs_to_many_association.rb file. Please look at the screenshot (deleted, question updated). I'm pretty sure the problem isn't in the library file. The app is running fine via WEBrick. Could you please advise what to look for?
    UPD1
        ruby -v
        ruby 1.9.2p0 (2010-08-18 revision 29036) [i686-linux]
        gem list -d passenger
        *** LOCAL GEMS ***
        passenger (3.0.6)
            Author: Phusion - http://www.phusion.nl/
            Rubyforge: http://rubyforge.org/projects/passenger
            Homepage: http://www.modrails.com/
            Installed at: /usr/lib/ruby/gems/1.9.1
            Easy and robust Ruby web application deployment
    UPD2 I've decided to reinstall everything. It solved the previous problem but caused another one. The error is: The application spawner server exited unexpectedly: Unexpected end-of-file detected. Here is the screenshot. New output:
        ruby -v
        ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]
        gem list -d passenger
        *** LOCAL GEMS ***
        passenger (3.0.7)
            Author: Phusion - http://www.phusion.nl/
            Rubyforge: http://rubyforge.org/projects/passenger
            Homepage: http://www.modrails.com/
            Installed at: /usr/lib/ruby/gems/1.9.1
    Nginx error.log:
        [ pid=5653 thr=32771 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.505 ]: waitpid() on Phusion Passenger helper agent return -1 with errno = ECHILD, falling back to kill polling
        [ pid=5654 thr=49156 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.506 ]: waitpid() on Phusion Passenger logging agent return -1 with errno = ECHILD, falling back to kill polling
        2011/04/20 14:12:33 [notice] 7614#0: signal process started

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    We have an SBS 2011 server running Exchange, a database app and a few other things, serving 5 users (3 low use, 1 high). The server was never specced for the database app so it isn't as powerful as I'd like - only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so):
    - Restricted the Exchange message store to 1GB, with (so far) no ill effects.
    - Restricted the SQL databases (including SBSMonitoring and SharePoint/##SSEE, which isn't used).
    Now I am finding that IIS worker processes are using up the available memory, and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I am finding people use because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS 2011 is designed to use up available resources (and concede them when other applications ask), but it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (using Postgres) takes 5+ minutes from client machines and regularly times out or crashes as a result. After a reboot (before the SQL/Exchange/IIS databases are very large/totally cached) we get the performance we need and expect. Previously a reboot once a month was enough... then once a week... now they have taken to rebooting it almost daily!
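    IIS application pools can be given a private-memory recycling threshold, which caps how much a w3wp.exe worker holds before IIS recycles it. A sketch with appcmd - the pool name below is a placeholder, so list the real ones first; the threshold value is in KB:

        rem List the application pools on the box
        %windir%\system32\inetsrv\appcmd.exe list apppool

        rem Recycle a pool once its private memory passes ~500 MB (value in KB)
        %windir%\system32\inetsrv\appcmd.exe set apppool "MSExchangeOWAAppPool" ^
            /recycling.periodicRestart.privateMemory:512000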

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all - wasn't really sure if this should go here or on Stack Overflow; admins, please move if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns, and I was wondering how most people approach them:
    1) A/B testing - how do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page?
    2) Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and then later increase that to 20%.
    3) How do you handle code deployments? Do you purge your entire Varnish cache on every deployment? (We have deployments on a daily basis.) Or do you just let it slowly expire (using TTL)?
    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
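    On (1) and (2), the usual pattern is to assign each visitor a variant cookie (weighted to get your 10%/20% split), copy it into a request header in vcl_recv, and make that header part of the cache key, so Varnish stores one object per variant and the backend renders whichever variant the header names. On (3), most setups invalidate from the deploy script rather than waiting on TTLs - a sketch, assuming Varnish 3-era syntax and a management port on 6082:

        # From the deploy script: invalidate everything cached in one shot
        # (Varnish 3.x CLI; on 2.x the equivalent command is "url.purge .";
        #  varnishadm may also need -S /etc/varnish/secret)
        varnishadm -T localhost:6082 "ban.url ."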

    Read the article

  • can not connect to SQL running on amazon ec2 machine

    - by njj56
    I am using SQL Management Studio 2008 running on an Amazon EC2 machine. I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port. I am also able to remote into the machine as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is used, it returns TCP error 0 - no response; it just times out.
        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>
    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing I can think of that may cause an error is the difference in names between the user ID and the server name - the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator when I log into the SQL Server manager on the EC2 instance. A password is not used. Not sure if I need to leave in a blank string for the password - also not sure if the difference between server name and user ID makes a difference. Any help is appreciated, thank you. Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0. Edit: the SQL Server is using Windows authentication - does this make a difference? Usually I always use SQL Server authentication.
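    It likely does make a difference: a User ID in the connection string means SQL Server authentication, and a Windows account name can't be passed that way from a machine outside the server's domain. Two hedged alternatives - request integrated security explicitly (only works if the web application's process identity can actually authenticate to the EC2 box), or enable mixed mode on the instance and use a SQL login (the login names below are placeholders):

        <!-- Windows authentication: the ASP.NET process identity is used, no User ID -->
        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;Integrated Security=SSPI;"/>

        <!-- SQL authentication: requires mixed mode enabled on the EC2 instance -->
        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=appuser;Password=secret;"/>

    Note also that ip-0A6ED514\sharepoint names a SQL instance called "sharepoint"; when connecting by IP and port, confirm that 1433 is the port that particular instance actually listens on.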

    Read the article

  • How-To Configure Weblogic, Agile PLM and an F5 LTM

    - by Brian Dunbar
    Agile, WebLogic, and an F5 walk into a bar... I've got Agile PLM v9.3 running on WebLogic with two managed servers, and an F5 BIG-IP LTM. We're upgrading from Agile v9.2.1.4 running on OAS. The problem is that while the Windows client works fine, the Java client does not. My setup is identical to the one outlined in F5's doc: http://www.f5.com/pdf/deployment-guides/bea-bigip45-dg.pdf When I launch the Java client it returns this error: "Server is not valid or is unavailable." Oracle claims Agile PLM is set up correctly but won't comment on the specifics of the load balancer. F5 reports the configuration is correct but can't comment on the specifics of the application. I am merely the guy in a vortex of finger-pointing who wants my application to work. It's that or give up on WLS and move back to OAS, which has its own problems, but at least we know how it works. Any ideas?

    Read the article

  • 500 error with deploying rails application via apache2+passenger

    - by user1633983
    I finally completed my own app, so the only work left is deploying it. I'm using Ubuntu 10.04 and apache2 (installed by apt-get), so I'm trying to deploy through Passenger. I installed the passenger gem like this:
        sudo gem install passenger
        rvmsudo passenger-install-apache2-module
    and I configured the Apache settings as the installation message says. I added the lines below in the middle of the /etc/apache2/apache2.conf file:
        LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
        PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
        PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby
    and I appended the lines below in the /etc/apache2/sites-available/default file:
        <VirtualHost *:80>
            ServerName localhost
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /home/admin/homepage/public
            <Directory /home/admin/homepage/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>
    But when I restart the Apache service and hit the address, a 500 error occurs. At first it was the same 500 error but the error page was Apache's; when I reinstalled libapache2-mod-passenger, the 500 page changed to the one from Rails. Because of the Rails 500 error page (which is located at public/500.html), I think the passenger module is properly connected with Apache. What should I do to fix this problem? Do I need to configure something inside my app before deployment?
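    When the Rails-styled 500 page appears, Passenger is reaching your app and the failure is inside it, so the next clue is almost always in the logs. Two common culprits on this kind of setup are gems missing from the environment Apache runs under and a home directory the Apache user can't traverse - a hedged checklist:

        # 1. The real error is usually in one of these:
        tail -n 50 /var/log/apache2/error.log
        tail -n 50 /home/admin/homepage/log/production.log

        # 2. Apache (www-data) must be able to traverse the path to the app:
        ls -ld /home/admin /home/admin/homepage /home/admin/homepage/public
        chmod o+x /home/admin    # if the home dir turns out to be mode 700/750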

    Read the article

  • Migrate Domain from Server 2008 R2 to Small Business Server 2011

    - by josecortesp
    I'm looking for some advice here, rather than a big how-to. I have a home server, quad core with 4GB of RAM (I really can't afford more right now), running Windows Server 2008 R2 with Active Directory, and a Hyper-V virtual machine with SharePoint, TFS and a couple more things. I have at least 10 remote users, all of them joined to a Hamachi VPN (working great, by the way). But I want to migrate that to a Small Business Server 2011 Standard. I tried to make a VM to join the domain and then promote that VM, back it up, format the physical server, boot up the VM, promote the physical one and then erase the VM, but I can't do that because SBS requires at least 4GB of RAM to install (so I can't give all 4GB of physical RAM to a VM). I was thinking of using a laptop (all the clients are laptops) as a temporary server: join the domain, promote it, then format the server, install SBS on the server and do it all again. I really need some advice. Thanks in advance. BTW, I know the software I'm using is kind of expensive and I can't afford more hardware; I have access to MS downloads through a university partnership, so I have all this software for free.

    Read the article

  • Uninstall IIS on Windows 7

    - by CJM
    I've just rebuilt my development machine and installed IIS. I then installed the Web Deployment Tool and used this to restore my previously backed-up websites to the clean machine. Unfortunately the restoration didn't work correctly/fully. I couldn't easily correct the problem, so I decided to uninstall/reinstall IIS and recreate the sites manually. I uninstalled IIS and rebooted, but there was still plenty of stuff left around, such as various files in /windows/system32/inetsrv/, which I tried to delete manually (with limited success!). I rebooted again and tried to reinstall IIS - it reported an error (no meaningful message) and requested another reboot. The event log includes the following errors:
    - The World Wide Web Publishing Service (WWW Service) did not register the URL prefix http://*:80/gallery for site 1. The site has been disabled.
    - Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine.
    I'd like to avoid another rebuild. Can I completely remove IIS, such that I can reinstall it from scratch? Or can I 'fix' the current setup so that IIS will reinstall over what is already there?
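    That second event points at HTTP.SYS rather than IIS itself: the kernel HTTP listener keeps its own IP inclusion list, which survives an IIS uninstall. It's worth inspecting before another rebuild - a sketch (run from an elevated prompt; the address below is a placeholder):

        rem Show the IP Listen-Only list kept by HTTP.SYS
        netsh http show iplisten

        rem Remove a stale address so HTTP.SYS falls back to listening on all IPs
        netsh http delete iplisten ipaddress=192.168.1.50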

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of the more typical behaviour: all viruses with a name starting with W). Is there a common virus list somewhere, or is it as I suspect: every antivirus manufacturer is free to come up with their own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I still would like to know if it would be possible for me to verify this in any other way though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.
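    For the record, the bit-for-bit comparison can be scripted, and hashing the file lets you check it against multi-engine scanning services (VirusTotal, for example) even when no pristine copy exists. A hedged sketch with built-in Windows tools (the paths are placeholders, and certutil's -hashfile verb needs a reasonably modern Windows):

        rem Binary compare against a known-good copy
        fc /b C:\quarantine\vshost32.exe "C:\pristine\vshost32.exe"

        rem Or hash it and look the digest up in a multi-engine scanner
        certutil -hashfile C:\quarantine\vshost32.exe SHA256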

    Read the article

  • Struggling to set-up NLB cluster

    - by Chris W
    I'm trying to set up NLB on a couple of Windows 2008 R2 virtual servers running on top of Hyper-V R2. The servers each have a single vNIC for LAN access (and a second vNIC for SAN access). I'm setting up the cluster to use multicast mode, and the vNICs are each set to allow MAC spoofing. Essentially I'm finding that I can add SERVER1 as a host and it will pick up and respond to the cluster IP from a remote subnet. If I then 'stop' the node in NLB Manager, it still responds when I would expect it to stop answering on that IP. If I recreate the cluster and add SERVER2 as the first host, the wizard completes correctly and an IPCONFIG on the server shows that it now has the cluster IP, but I can't ping the cluster IP from a remote subnet, though I can from another machine on the same subnet. As a final test - with both servers in the cluster, pinging from another machine on the same subnet - I still get a response from the cluster IP when both nodes are stopped according to NLB Manager. The two VMs are sat on the same physical blade and are built identically, as they'll be used as SharePoint web front-end servers. I'm at a loss as to what could be wrong with the second VM that prevents it taking on the address just as the sole node in the cluster, never mind the strange behaviour of the cluster when I stop/start nodes.
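    One thing worth ruling out with multicast NLB: the cluster IP maps to a multicast MAC address, and many routers refuse to learn that mapping from ARP, which produces exactly this "works from the local subnet, dies across the router" split - and a stale ARP entry can likewise explain replies after nodes are stopped. A hedged check from a machine on each side of the router (the cluster IP below is a placeholder):

        rem Does the cluster IP resolve to the cluster's multicast MAC (03-bf-..)?
        ping 192.168.10.50
        arp -a | findstr 192.168.10.50

    If the router side shows no entry (or a unicast one), adding a static ARP entry for the cluster IP on the router is the usual fix; flushing stale entries with "arp -d" makes the stop/start tests trustworthy again.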

    Read the article

  • Apache2: How to split out the SSL configuration?

    - by Klaas van Schelven
    In Apache2, I'd like to define my SSL-related stuff once, separately from the rest of the configuration. This is mostly a matter of taste, but it also allows me to include the rest of the configuration in my automatic deployment process. I.e., the current situation:
        # in file: 0000-ourdomain.com.conf (number needs to be low)
        <VirtualHost xx.xx.xx.xx:443>
            # SSL part
            SSLEngine on
            SSLCertificateFile ....crt
            SSLCACertificateFile ...pem
            SSLCertificateChainFile ...intermediate.pem
            SSLCertificateKeyFile ....wildcard.ourdomain.com.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            ServerName www.ourdomain.com
            ServerAlias ourdomain.com
            # the actual configuration, as found for xx.xx.xx.xx:80, repeated
        </VirtualHost>
    I'd like:
        # in file: 0000-ssl-stuff
        <VirtualHost xx.xx.xx.xx:443>
            # SSL part
            SSLEngine on
            SSLCertificateFile ....crt
            SSLCACertificateFile ...pem
            SSLCertificateChainFile ...intermediate.pem
            SSLCertificateKeyFile ....wildcard.ourdomain.com.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            ServerName www.ourdomain.com
            ServerAlias ourdomain.com
        </VirtualHost>

        # in file: ourdomain.com.conf
        <VirtualHost xx.xx.xx.xx:443>
            # the actual configuration, as found for xx.xx.xx.xx:80, repeated
        </VirtualHost>
    Unfortunately, this does not seem to work: Apache SSL fails, though it gives no error message at reload or syntax check. My best workaround found so far is to use an Include directive from the 0000-ssl file. Many thanks!
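    For anyone else landing here: two <VirtualHost> blocks for the same address:port are treated as separate vhosts, not merged halves of one, which is why the split version silently breaks. The Include workaround mentioned above inverts that - a sketch, with the fragment path/name chosen here as an example (keep it somewhere Apache does not auto-include, so the bare directives aren't pulled into global scope):

        # in file: /etc/apache2/ssl-common.conf - bare directives, no <VirtualHost>
        SSLEngine on
        SSLCertificateFile ....crt
        SSLCACertificateFile ...pem
        SSLCertificateChainFile ...intermediate.pem
        SSLCertificateKeyFile ....wildcard.ourdomain.com.key
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

        # in file: ourdomain.com.conf
        <VirtualHost xx.xx.xx.xx:443>
            Include /etc/apache2/ssl-common.conf
            ServerName www.ourdomain.com
            ServerAlias ourdomain.com
            # the actual configuration, as found for xx.xx.xx.xx:80, repeated
        </VirtualHost>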

    Read the article

  • IIS 6 + ASP.NET web service - DW20 and stackoverflow exception

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine but craters hard when receiving its first hit. Please note that this deployment works in the Test environment but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour we're seeing is:
    - restart the site & app pool (the app pool is configured to run under Network Service)
    - browsing to the .asmx and .wsdl responds normally, as expected
    - send a normal, well-formed SOAP request / normal payload to the web service
    - 100% CPU usage; after 5 seconds, the page request / site returns "Service Unavailable"
    - no entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo)
    - the app pool ends up being stopped
    The processes that hit the CPU hard are dw20.exe (Task Manager confirms this); I am unsure why Dr. Watson is involved here. The event log shows an ASP.NET runtime error:
        EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0, P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL.
    Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".
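    dw20.exe is the Watson error-reporting process collecting a crash report each time w3wp.exe dies, which explains the CPU spike. To see where the runaway recursion actually is, one option is to capture a crash dump on the PreProd box with adplus from the Debugging Tools for Windows and inspect the faulting stack - a sketch, assuming those tools are installed:

        rem Attach to the IIS worker and write a full dump when it crashes
        adplus -crash -pn w3wp.exe -o c:\dumps

        rem Open the dump in windbg afterwards; a StackOverflowException stack
        rem usually shows an obvious repeating frame in your assembly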

    Read the article

  • Why can't I copy .zip files from a server to a server in a different domain?

    - by Kyralessa
    At work, we're using a Windows Server 2008 R2 VM as our build server. At the end of the build process for any of our projects, we copy the packaged deployment files to a folder on the server where they'll be deployed. (This is done in a batch command by a service account.) For most of our projects, which deploy to a Windows Server 2008 R2 VM, this step goes swimmingly. But for one project, which deploys to a Windows Server 2003 R2 VM which resides in a different domain on our network, the .zip files return "Access is denied" and don't copy, though all of the other files copy correctly. Our sysadmins say they haven't prevented this in group policy or by other means. If I'm logged in the build server as myself and run the copy in the command window, I can't copy the .zip files over either, so it's not just a matter of the service account's permissions. If I log into the 2003 server and then copy from the build server to the 2003 server, using the command window, it works, whether I run as myself or as our service account. Only .zip files cause the "Access is denied" problem. Even a (fake) .exe file copies correctly. All of our other projects have .zip files, and they copy to their 2008 R2 server correctly. Is there a way I can get the Windows Server 2003 R2 VM to accept .zip files copied from our build server?
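    One cheap experiment to narrow this down: if the block is extension-based (a file screen or security filter watching for .zip on the receiving side), pushing the same bytes under a different name will succeed. A hedged sketch for the batch step (share and file names are placeholders):

        rem Push the archive under a neutral extension, then rename on arrival
        copy package.zip \\server2003\deploy$\package.bin
        ren \\server2003\deploy$\package.bin package.zip

    If the copy succeeds but the rename is denied, something is filtering on the name; if even the .bin copy fails, the filtering is inspecting content on the wire (an AV gateway, say) rather than the extension.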

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting roles over different machines versus packing everything onto one machine per person? In your experience, what best practices should I follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment - more a tier-2 developer testing environment - and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:
    - creating one all-inclusive machine for each dev team member, giving them the freedom to manage it
    - creating a shared environment managed by one person, with a somewhat formalized change request process
    What golden mean would you recommend, and why?

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires:
    - ASP.NET MVC / C# / .NET
    - Database or NoSQL data store (must be accessible from C#)
    Here's the challenge to you server gods: What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules? In what ways does this solution differ from the "standard" Microsoft deployment scenario? Where does this solution's performance break down once the app begins to scale? I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production, especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level. An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early-stage applications. An example of b): running the Mono framework on Linux. An example of differing from the "standard" stack: running Mono on Linux requires a completely different server OS familiarity; none of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only use 1GB of memory and 4GB of storage per database. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • Group Policy Software Installs Too Silent on Windows 7

    - by jonblock
    I'm trying to migrate a Windows XP deployment process to Windows 7. The process has been surprisingly smooth after figuring out how to bring up a base system. We rely heavily on Group Policy software installation, which in XP can mean long periods on any given morning sitting around watching the machine install new updates. At least the typical Windows Installer message shows the user that something is indeed happening. As far as I can tell, Windows 7 retains the startup installation process (good), but eliminates the on-screen message indicating what's happening (bad). All a user will see, possibly for a half-hour or more if they haven't restarted for a while, is the electric hamster wheel and the words "Please wait...". I foresee a significant increase in support calls... If you're familiar with msiexec.exe parameters, XP behaves like /qb-, and 7 behaves like /qn. I want the /qb- behavior back. Is there a way to re-enable the Windows Installer notices for Group Policy startup installations?
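    One setting worth trying: the verbose status-messages policy makes Windows spell out what it's doing at startup ("Applying software installation settings...") instead of the bare "Please wait...". It isn't the full /qb- progress bar, but it does tell users an install is running. The policy lives under Computer Configuration > Administrative Templates > System ("Display highly detailed status messages"), or directly in the registry - a sketch:

        rem Enable verbose startup/logon status text
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v VerboseStatus /t REG_DWORD /d 1 /f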

    Read the article

  • Macbook Pro - 15" with i7 processor - Any problems with heat?

    - by webworm
    You may have already heard about the review done by the folks at PC Authority in Australia, where an i7 MacBook Pro got up to 100 degrees Celsius during benchmarking. Here is the URL in case you have not read it: http://www.pcauthority.com.au/News/172791,macbook-pro-helps-core-i7-hit-100-degrees.aspx In any case, I was considering purchasing a 15" MacBook Pro with the i7 processor and the NVIDIA GeForce GT 330M with 512MB of video memory. Having read how hot the computer got, I started to become hesitant about purchasing. My main concern is long-term damage to the computer due to excessive heat. I plan to use the MacBook Pro as a development machine where I will be running Windows 7 within VMware Fusion or VirtualBox. Within the VM I will be running IIS, SQL Server, Visual Studio and SharePoint Server - hence why I would like the power of the i7 processor. That is why I wanted to check with actual owners of the MacBooks with the i7 processor and see what their experiences have been. Have you noticed excessive heat? How does your MacBook handle processor-intensive apps over long periods of time? Thank you!

    Read the article
