Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.


  • web based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where the users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/validated app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can go the colo route. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks

    Read the article

  • iSCSI SAN RAID 10 Performance -- Poor Read, Good Write

    - by Litzner
    I have an EqualLogic PS4000 SAN unit with the latest firmware, set up in RAID 10. I have three 2 TB volumes on the SAN, shared out via iSCSI on two Ethernet ports on two different subnets. I have moved a test server over to this newly set up SAN, and my testing is showing a problem: I am getting dismal read performance in everything except a test with a queue depth of 32 (see attached image). Write performance seems to be right about where it should be. I have tried MPIO on and off; on was slightly better, but not much.
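    One way to make that comparison repeatable is a queue-depth sweep with a benchmarking tool; a hedged sketch using fio on a Linux initiator follows (the device path, run time and depth list are assumptions, and --readonly keeps it non-destructive):

        # sweep queue depths against the iSCSI LUN, random 4k reads only
        for depth in 1 4 8 16 32; do
            fio --name=qd$depth --filename=/dev/sdb --readonly --direct=1 \
                --rw=randread --bs=4k --ioengine=libaio --iodepth=$depth \
                --runtime=60 --time_based
        done

    Comparing the IOPS and latency lines across depths shows whether reads only catch up once many requests are outstanding, which would point at per-request latency rather than raw throughput.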

    Read the article

  • Troubleshooting mailserver (Postfix, Dovecot) on Ubuntu Server 9.10?

    - by Christoffer
    I have configured a mail server with Postfix and Dovecot on Ubuntu Server 9.10. I followed the guidelines here (using Maildir): https://help.ubuntu.com/community/Postfix https://help.ubuntu.com/community/Dovecot The tests seemed alright, so I connected it to Gmail, which is able to connect and fetch e-mails. But since there's no e-mail in the Maildir/ directory, I can't tell whether the problem is Postfix or Dovecot, and I am totally new to mail servers so I don't know where to start troubleshooting. So, I want to start by testing Dovecot. How can I create a fake "Hello World" e-mail directly on the server (using a text editor) so that I can try to fetch it with Gmail? If Dovecot is alright, where do I start looking for errors in Postfix? Thank you for your time. Christoffer
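    For that Dovecot test, a message can be written straight into the Maildir by hand; a minimal sketch follows (the user name, addresses and filename are placeholders -- the filename only needs to be unique, and the file must be owned by the mailbox user):

        # drop a hand-made message into the Maildir "new" folder
        MSG=/home/youruser/Maildir/new/$(date +%s).test.$(hostname)
        printf 'From: sender@example.com\nTo: youruser@example.com\nSubject: Hello World\n\nHello World\n' > "$MSG"
        chown youruser: "$MSG"

    If the message then shows up over IMAP/POP3, Dovecot is fine and the delivery side (Postfix) is where to look next.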

    Read the article

  • Is email forwarding to the sender's address usually blocked in Mail servers / MTA ?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send an email from the address to which I am forwarding. This happens for Gmail and Fasthosts mail servers. E.g. I send an email to [email protected] from [email protected], [email protected] is set to forward to [email protected], and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know if this is standard for all MTA software so I can avoid confusing myself.

    Read the article

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions would render the site, but I'd also like to see how the site would perform on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to check out how to simulate a slow internet connection; see the sketch below.)
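    For the slow-connection half, one hedged option on a Mac OS X host of that era is the built-in ipfw/dummynet traffic shaper; the VM's IP address, bandwidth and delay below are arbitrary examples:

        # throttle all traffic to and from the guest (192.168.1.50 is a placeholder)
        sudo ipfw add 100 pipe 1 ip from any to 192.168.1.50
        sudo ipfw add 110 pipe 1 ip from 192.168.1.50 to any
        sudo ipfw pipe 1 config bw 256Kbit/s delay 150ms
        # remove the rules when finished
        sudo ipfw delete 100 110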

    Read the article

  • Behaviour of nginx as proxy

    - by HD
    I'm testing nginx with different configurations to replace an architecture based on squid + Apache. I know that I can use nginx to handle static requests and load balancing, but I'm interested in one particular setup that I don't understand clearly. I'm using two nginx servers (balanced) with the proxy_pass setting to pass all requests to an Apache server. When a client makes a request to the site, one of the nginx servers processes it and sends it to the Apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests are passing through Apache and I don't see any benefit at all. What happens when 100 simultaneous connections pass through nginx? Will all 100 connections go to the Apache server, or is there some kind of internal behaviour that softens the impact on Apache?
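    For context, the usual win is that nginx answers static requests itself and buffers responses for slow clients, so each Apache worker is tied up only for the time it takes to generate the page; a hedged sketch of such a proxy_pass configuration follows (the upstream address, paths and directives shown are illustrative, not a drop-in config):

        upstream apache_backend {
            server 10.0.0.10:8080;            # the Apache box (placeholder address)
        }
        server {
            listen 80;
            location /static/ {
                root /var/www/site;           # served by nginx, never reaches Apache
            }
            location / {
                proxy_pass http://apache_backend;
                proxy_buffering on;           # nginx absorbs slow clients, freeing Apache workers quickly
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    With buffering on, 100 slow simultaneous clients still mean 100 connections into nginx, but Apache sees short, fast requests rather than 100 long-lived ones.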

    Read the article

  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will go into production on a cluster of Apache Tomcat instances, and I'm looking at the best hardware/OS options; VMs have come up as one of them. I have run ESXi/ESX instances before for development and testing, but for a hosting environment I'm curious whether having multiple VMs is actually worse than just configuring one server to host multiple instances of Tomcat. These are my guesses:
    Pros for VMware:
    - Easier maintenance/backup for individual VMs (VMware makes this easy)
    - Can log in remotely to individual VMs without having to give host access (security?)
    - Easier to re-purpose the machine for OS/hardware changes
    Pros for running on one physical machine:
    - Overhead of only one OS (and no VMware footprint)
    - OS/security updates applied once
    - One less administrative layer (no VM expertise required)
    I'm curious if anyone has any other ideas about the benefits of either option.
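    For the comparison's sake, running several Tomcat instances on one box mostly comes down to giving each instance its own CATALINA_BASE; a hedged sketch (paths and ports are examples):

        # one shared Tomcat install (CATALINA_HOME), one CATALINA_BASE per instance
        export CATALINA_HOME=/opt/apache-tomcat
        export CATALINA_BASE=/srv/tomcat/instance1    # holds its own conf/, logs/, temp/, webapps/, work/
        # edit $CATALINA_BASE/conf/server.xml so the HTTP, AJP and shutdown ports
        # (e.g. 8081, 8010, 8006) don't clash with the other instances
        $CATALINA_HOME/bin/startup.sh

    The admin cost of that port/directory bookkeeping versus per-VM snapshots and isolation is essentially the trade-off being weighed above.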

    Read the article

  • Windows 7 - You don't have permissions to save in this folder

    - by James
    Huh? I'm getting this message - "You don't have permissions to save in this folder" - even though I am the only user on this machine, and an administrator. How can I set permissions for myself to do everything, everywhere (including saving, deleting, etc.)? Thanks. Edit: Sorry, forgot to say which folder it was. It is a folder in Program Files, where I save my PHP files for local testing. Sorry if I'm a bit daft with all this, but I've upgraded straight from XP to 7, and having never used Vista, I'm used to being allowed to have full control.

    Read the article

  • Automation Question using VMware Workstation

    - by James K
    I'm running an experiment that requires me to create 100 instances of Windows XP w/SP3, saving each VM instance off to a hard drive. I have to note the time the VM load starts (starting my timer when I see "Setup is preparing...") until the load ends, when I see the final desktop after the VM loads its drivers. I also have to note the host start and stop times. Is there any way this process can be automated? Each load takes me about 16 minutes and gets really tiresome after a while. BTW, exact timing is not necessary; eyeballing as described above is sufficient for my testing needs.
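    The host-side half of this can at least be scripted around VMware Workstation's vmrun utility; a rough, hedged sketch follows (the template path, destination, VMX name and log file are assumptions, and the guest-side milestones are still confirmed by eye):

        #!/bin/sh
        # copy a prepared XP template, start it, and log host-side timestamps
        TEMPLATE=/vms/xp-sp3-template
        DEST=/mnt/archive
        LOG=/var/log/xp-timing.log
        for i in $(seq 1 100); do
            VM="$DEST/xp-instance-$i"
            cp -r "$TEMPLATE" "$VM"
            echo "$(date '+%F %T') host start, instance $i" >> "$LOG"
            vmrun -T ws start "$VM/xp.vmx"
            printf 'Press Enter when the desktop appears for instance %s...' "$i"; read dummy
            echo "$(date '+%F %T') desktop up, instance $i" >> "$LOG"
            vmrun -T ws stop "$VM/xp.vmx" soft
            echo "$(date '+%F %T') host stop, instance $i" >> "$LOG"
        done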

    Read the article

  • Apache only logs PHP errors if LogLevel is set to debug

    - by Sudowned
    I'm developing a CodeIgniter application and, for reasons that I do not fully understand, errors have stopped being logged in the file specified in the Apache site conf. The page I'm testing is definitely generating a 500 error, but that is not reflected in the logs unless I set LogLevel debug. Setting LogLevel to error or warn results in no errors being logged. I don't think this is a CI issue because I've been developing this site for close to a week now and errors were logged as expected until I picked the project up again this morning. Though for what it's worth, I've got error_reporting(E_ALL); set in my index.php.
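    One hedged way to take Apache's LogLevel out of the picture is to give mod_php its own error log inside the vhost; the directives below are standard mod_php settings, but the log path is an assumption:

        # added inside the site's <VirtualHost> block, then reload Apache
        php_admin_flag  log_errors on
        php_admin_value error_log /var/log/apache2/php_errors.log

    The log file has to exist and be writable by the Apache user (e.g. www-data) for anything to show up in it.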

    Read the article

  • How to set up Amazon EC2 with my own OS and DB?

    - by SLim
    I have my own OS and DB licences - Windows Server 2008 R2 with Hyper-V and SQL Server 2008 R2, both Enterprise edition. How do I get this configured and running on Amazon EC2, and what else do I need alongside it to make it run? Also, how would I install the operating system and DNS? I have never run a server before; I just need something like a VPS to support my development and testing. Amazon EC2 seems the best and cheapest service at only $1 per hour.

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.
    The network is set up like this:
    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.
    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.).
    I have two major problems I'm hoping you can help me with:
    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?
    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?
    Things I've thought about trying, for problem 1:
    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    For problem 2:
    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.
    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Forward requests to IIS Application/Folder to Apache server on another port

    - by TheGwa
    I have found many questions and answers on ways of doing this using ISAPI filters or ARR and URL Rewrite, but none are clear and concise, and I am sure many people have this issue. I am looking for a best-practice, step-by-step solution to the following scenario: I have a development server accessible externally via a specific port for testing, e.g. rnd.domain.com:8888. So there is one port in and out of this machine accessible to the world. On this server I have a number of Apache or other servers using specific ports such as 8080. IIS is bound to port 80 locally as well as 8888 to get external requests, and works perfectly. I would like to use an application (folder) in IIS, such as rnd.domain.com:8888/mapserver, to map to the local Apache server in both directions. The same solution must apply in production, where the domain is mapped to port 80, e.g. production.domain.com/mapserver maps to port 8080 on the production server.

    Read the article

  • Cross-Forest Trust

    - by cdalley
    I am looking at testing a cross-forest trust so that we can have two domain controllers (with different forests and domain names) set up and move everyone onto the new domain. We do NOT run Exchange on site and we do not have any links from O365 to AD currently. On to the problem: I have set up two DCs in virtual machines; they are on the same 192.168.0.* network.
    The Windows 2003 server: Name: OLDSRVR; a "clone" of our current domain controller; IP: 192.168.0.1; Domain: internal.test.com
    The Windows 2012 server: Name: ADCTEST01; a brand-new domain set up from scratch, separate to internal.test.com; Domain: internal.test2.com; IP: 192.168.0.2
    OLDSRVR can only see ADCTEST if it has a dynamic IP set; if I set a static IP it cannot see it. If I try using the dynamic IP and try to join, it gets to the end and then complains "The trust relationship between this workstation and the primary domain failed". Any ideas?

    Read the article

  • What is a preferred method for automatically configuring and setting up an Ubuntu instance?

    - by sutch
    I am tired of manually configuring instances of Ubuntu for testing web applications and for setting up workstations. I'm even more frustrated by the issues caused by inconsistent configurations. Is there a method (hopefully not too time-consuming to learn and set up) that allows the setup and configuration of an Ubuntu server or workstation to be automated from an ISO? This is primarily for virtual machine instances, but it would be helpful to also create instances on hardware. I am specifically looking for a method to automate the installation of libraries (apt-get), configure services (such as Apache and MySQL), add 3rd-party software (download, extract and build), and add libraries to scripting languages (for example, Ruby gems or CPAN packages for Perl).
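    To make the scope concrete, the steps listed above boil down to a per-machine bootstrap script along these lines (a hedged sketch; the package names, gem and module names, and download URL are placeholders) -- the real question is which tool should own, version and replay a script like this across machines:

        #!/bin/sh
        set -e
        export DEBIAN_FRONTEND=noninteractive
        apt-get update
        apt-get install -y apache2 mysql-server build-essential ruby rubygems perl
        gem install rails                 # example Ruby gem
        cpan -i DBI                       # example CPAN module
        # fetch, build and install a third-party package from source
        wget -O /tmp/foo.tar.gz http://example.com/foo-1.0.tar.gz
        tar xzf /tmp/foo.tar.gz -C /tmp
        cd /tmp/foo-1.0 && ./configure && make && make install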

    Read the article

  • Is the hosts file ignored in windows if DNS Client service is running?

    - by Mnebuerquo
    I've seen a number of articles about how to edit the hosts file in Windows 7, but they are all about how to open Notepad as administrator, not the actual behavior of the DNS lookups afterward. I've read that the hosts file is ignored in XP SP2 if the DNS Client service is running. I have tried this on my XP machine and it seems to be true. I can see how it is a security danger to have a hosts file that user programs could modify: if they could write to hosts, then any malware could spoof DNS locally with minimal difficulty. I'm trying to use the hosts file for testing stuff on my local network without it going to the live site on the internet. At the same time I want to be able to use DNS on the normal internet. Mostly, though, I just want to understand the rules on the newer Windows systems. Thanks!

    Read the article

  • VMware Workstation Dev Machine Disks: one fast or four eco-friendly in RAID?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines - a dev machine running VS 2010, a build machine, a version-control machine, a web server for testing, a "personal" machine running Office, etc. I'll be connecting the computer to my stereo, so I'll also be running iTunes (possibly on a dedicated VM), and I want the computer to be a silent one. I'll probably use an Antec P183 case. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses four disks. So, my question is as follows: in terms of heat, noise, reliability, warranty, price, capacity and performance, what would you suggest - a four-disk RAID 10 array using eco-friendly disks such as the $94 1 TB Western Digital Caviar Green, or one high-performance disk such as the 2 TB Western Digital Caviar Black at $280?

    Read the article

  • Solution to Manage and Monitor (Ubuntu) Machines

    - by Elmar Weber
    I'm looking for a tool like Canonical's Landscape (system management and monitoring for Ubuntu) that is open source and free. The goal is to manage a dozen or so KVM machines for private testing purposes. I know of Puppet and Munin or RHQ as separate tools to manage and monitor, but I'd prefer something integrated. Any tips? Basic requirements would be:
    - system package management and updates (individual selection for each managed node)
    - configuration of basic system services (users, NFS, cron, ideally also Apache)
    - monitoring (charting of system resources: disk, I/O, memory, etc.) and alerting, ideally with a default configuration with sensible values for alerts

    Read the article

  • Why am I getting 'undefined method' exceptions when executing 'run_list add', 'run_list remove' and 'rackspace server delete'?

    - by Peter Groves
    [Originally posted this to the Opscode forum, got no response.] I'm testing out a free hosted Chef server account and multiple subcommands are failing with 'Unexpected Errors'. Perhaps my version and the server version are incompatible?
    OS: Ubuntu 12.04 LTS; local Chef: 10.12.0 (installed through gem); local Ruby: 1.8.7
    The workstation machine has been configured manually, but the clients I've been experimenting with are launched with the Rackspace plugin (using 'knife rackspace server create…'). The problem commands seem to fail while talking to the hosted chef-server, before they ever try to modify the client, so I don't believe the client is where the problem lies. The virtual servers launched by 'knife rackspace server create' come up properly, but then deleting them with knife fails. If I include a recipe in the run_list when I create the server, the recipe is properly added to the run_list; if I try to add it later, or remove the one the server was initialized with, those commands fail. Here is the output of a few relevant commands (with stack traces): https://gist.github.com/7100ada3fd6690113697

    Read the article

  • Why has ESXi 5.0 not used the software RAID configuration on my test server?

    - by kafka
    I've got a test server which was running WS 2008 Enterprise on the bare metal. It was correctly using the software RAID 1 configuration (2 x 250 GB disks which appeared as one disk), set up on the Dell PowerEdge T110 (which meets compatibility requirements), without requiring any extra setup from me. (As an aside, I'm fairly sure it's software RAID, as we didn't spec a hardware RAID controller, if that's of any importance in this situation.) I am now testing installing ESXi 5.0 on this server to run some VMs. I've successfully installed ESXi and imported a VM fine, but it's showing 2 x 250 GB disks available as datastores, when they should appear as one volume. When I boot the server there is a RAID configuration screen you can enter, and I'm guessing this is what I'll have to do at some stage, but I now need to be very careful because one disk contains data that I want mirrored on the other disk. What is the best thing to do in this situation?

    Read the article

  • Improve performance on Lync desktop sharing

    - by Trikks
    I'm using a Lync 2010 server to handle some client communication and screen sharing. The biggest issue is the performance of screen sharing: it is of rather high quality, but the frame rate is very poor. I have been reading and searching a lot on the subject, and 95% of all topics are about bandwidth; we have a 200/200 Mbit Internet connection solely for this application. Also, my test machines run on an internal gigabit LAN, and the speeds between all boxes are hysterically fast. The next step was to ensure that there were some profiles for different bandwidths, so I registered these:
    New-CsNetworkBandwidthPolicyProfile -Identity 50Mb_Link -Description "BW profile for 50Mb links" -AudioBWLimit 20000 -AudioBWSessionLimit 200 -VideoBWLimit 14000 -VideoBWSessionLimit 700
    New-CsNetworkBandwidthPolicyProfile -Identity 100Mb_Link -Description "BW profile for 100Mb links" -AudioBWLimit 30000 -AudioBWSessionLimit 300 -VideoBWLimit 25000 -VideoBWSessionLimit 1500
    Nothing fancy happened here either. None of the test boxes has anything from Norton installed, they don't have any firewalls running (nor does the Lync server); all fences are down in this environment, just for the testing. Is there anything I may have missed to improve the quality of this? Thanks

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled Jumbo Frames (9000) in ESXi for all my vmNICs, vmkernel ports, vSwitches, iSCSI bindings, etc. - basically anywhere in ESXi that has an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for Jumbo Frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to Jumbo Frames (9000) as well. So now the questions: Do I need to log into each Windows Server VM I am running, delve into the NIC properties in Device Manager, and manually set the NIC to Jumbo Frames as well? What's the best way of testing that Jumbo Frames are indeed working as intended?
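    On the second question, a quick hedged check is a do-not-fragment ping sized just under the 9000-byte MTU (8972 bytes of payload once the IP and ICMP headers are added); the target address below is a placeholder for one of the MD3200i's iSCSI ports:

        # from the ESXi shell, over the iSCSI vmkernel interface
        vmkping -d -s 8972 192.168.130.101
        # or from a Linux host/VM on the same storage VLAN
        ping -M do -s 8972 192.168.130.101

    If replies come back at that size, every hop on the path is passing jumbo frames; if they only come back at 1472 bytes or below, something in between is still at a 1500 MTU.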

    Read the article

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server on which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP stack many times for local testing purposes, but never for actual production use. I was wondering what the best practices are for setting the server up, specifically with regard to access to the web root directory. A couple of the options I have seen:
    - Set up a single user account on the server for us both to use, and use a virtual host pointing somewhere in that home directory, e.g. /home/webdev/www.
    - Set up a user account for each of us, and grant permissions in some way to /var/www (what would be the best way - a new group? See the sketch below).
    I want to get this right when I first set it up, as there won't be any going back for a while once our first site is up and running. I appreciate any guidance in advance.
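    A hedged sketch of the second option, using a shared group that owns /var/www (the group and user names are just examples):

        groupadd webdev
        usermod -aG webdev jagot
        usermod -aG webdev friend
        chgrp -R webdev /var/www
        find /var/www -type d -exec chmod 2775 {} \;    # setgid dirs: new files inherit the webdev group
        find /var/www -type f -exec chmod 664 {} \;

    With a umask of 002 for both accounts, files either of you create stay group-writable, so neither account needs to own everything.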

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests, and each is assigned a memory limit of 1 GB (with only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: two gigs for the two running VMs plus an overhead for the host (800 MB seems high, but that is still reasonable). But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the Configuration tab's Memory link I see a physical total of 4082.5 MB, System 531.5 MB and VM 3551.0 MB. I have only allocated a gig to each VM, and with two VMs running they are taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using parted:
    parted /dev/sda
    > mklabel gpt
    > mkpart pri 1 -1
    mkfs.ntfs /dev/sda1
    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:
    Model: ATA ST4000DM000-1F21 (scsi)
    Disk /dev/sda: 7814037168s
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Number  Start  End          Size         File system  Name  Flags
     1      2048s  7814035455s  7814033408s               pri
    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
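    One commonly mentioned lever for exactly this symptom (mount.ntfs saturating a CPU core) is ntfs-3g's big_writes mount option, which lets FUSE pass larger write requests; a hedged example, with the mount point as a placeholder:

        # remount the volume with larger write requests to cut FUSE overhead
        umount /dev/sda1
        mount -t ntfs-3g -o big_writes,noatime /dev/sda1 /mnt/data

    Re-running the same dd test before and after shows whether per-request FUSE overhead was the bottleneck.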

    Read the article
