Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 222/457

  • Cannot resize OS X partition

    - by joshhunt
    I am trying to resize my existing Mac OS Extended partition on my MacBook to install Windows 7 (using steps similar to these), but whenever I go to apply the changes, I get this error: Partition failed Partition failed with the error: The partition cannot be resized. Try reducing the amount of change in the size of the partition. The total capacity of the hard drive in question is 260GB, with the entirety being taken up by the OS X boot partition. I am aiming to shrink that partition down to 60GB. How can I fix this problem? I have been reducing the amount of change by 10GB each attempt, but it still is not working. I assume the problem is that there is not a large amount of contiguous space on the device. Is there some way I can do a manual defrag that would rectify this problem?

    Read the article

  • Error adding 4TB LUN (Raw Device Mapping) to ESX4 VM

    - by Tom Gardiner
    Hi guys, I'm trying to map an existing 4TB LUN from a Fibre Channel SAN through to a VM in my ESX4 environment. It keeps telling me that the VMDK file size exceeds the maximum size supported by the datastore. I've tried in physical compatibility mode, and also both virtual styles. I'm a little confused by this, as we had the same LUN mapped through to another VM when we were running ESX 3.5... I've also noticed that some of my other raw mappings are generating extremely large VMDK files on the ESX servers. Does anyone know if this change in behaviour is intentional? And if so, why? It doesn't seem to me that the LUN's size should be relevant if it is mapped directly to the VM. We're running 4.0.0 build 236512 and 4.0.0 build 219382, and I've not had any success on either. Any insight or advice would be much appreciated! TG

    Read the article

  • Internet cafe software for linux

    - by pehrs
    I have gotten a request to roll out a total of 8 internet cafes in a large network. Budget is non-existent, as it will all be done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet cafe system that is Ubuntu based. The requirements are pretty basic: It needs to keep track of logged-in time and log out users when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?
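
    For illustration only, a minimal Python sketch of the time-tracking and forced-logout mechanism the requirements describe, not a recommendation of an existing package. The session length, the check interval, the kiosk account name and the use of pkill to end the session are all assumptions.

      import subprocess, time

      SESSION_MINUTES = 30          # assumed per-user time allowance
      CHECK_INTERVAL = 10           # seconds between checks

      def force_logout(username):
          # Kill every process owned by the user, which ends their desktop session.
          subprocess.run(["pkill", "-KILL", "-u", username], check=False)

      def run_session(username):
          deadline = time.time() + SESSION_MINUTES * 60
          while time.time() < deadline:
              # A real system would also poll a central server here for a
              # remote "log this user out now" flag.
              time.sleep(CHECK_INTERVAL)
          force_logout(username)

      if __name__ == "__main__":
          run_session("cafe-guest")   # hypothetical kiosk account name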

    Read the article

  • World Wide Web Publishing Service very slow to restart on IIS? Why?

    - by StacMan
    Every now and then, we look to restart our IIS server by restarting the "WWW Publishing Service". On most systems this would usually only take a minute or two, but it can often take up to 10 minutes to stop the service and restart it. Does anyone know any way to work out what is taking so much time to stop the service? After reading around the net I've learned this may be due to locked resources used by users and/or lower-level IIS cached items being recycled. But I can't seem to work out where I can validate whether this is true or not. Also, if anyone knows how to fix or speed this up, that would be excellent. We have a large codebase which contains over 280 aspx pages across the site. Our main domain contains about 100 aspx pages, whilst the subdomains contain 15 or 20 each. Some specs: code is written in C# and runs on .NET Framework 2.0; server is Windows Web Server 2008 with IIS7; DB is running SQL Server 2008 Standard.

    Read the article

  • LDAP search filter for Active Directory

    - by Francesco De Vittori
    Hello, I'm trying to look for users inside Active Directory through an LDAP query. Basically I'm searching for the user in this way: Search DN: dc=mydomain, dc=com Filter: (sAMAccountName=USER) where USER is replaced with the provided username. Now, if USER is only the username without the domain (for example "Joe"), this works fine. However, I receive usernames in the form domain\username (for example "myDomain\Joe"), and obviously the search fails. I see two ways: using a regex inside the search filter to discard the domain, or using a completely different search filter. I'm no LDAP expert, and I don't even know if it's possible to use regular expressions inside search filters. Does anyone know if it's possible, and how? P.S. I cannot pre-process the username to strip the domain. This cannot be changed, as it's all part of a large system.
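
    As a point of reference, here is a minimal Python sketch of the search as described, using the ldap3 library; the server address, bind credentials and attribute list are assumptions, and it performs the plain sAMAccountName lookup from the question rather than solving the domain\username case. Note that standard LDAP search filters (RFC 4515) do not support regular expressions.

      from ldap3 import Server, Connection, ALL

      # Assumed connection details; replace with the real DC and bind account.
      server = Server("dc01.mydomain.com", get_info=ALL)
      conn = Connection(server, user="mydomain\\binduser", password="secret", auto_bind=True)

      username = "Joe"  # works; "myDomain\\Joe" would not match sAMAccountName
      conn.search(
          search_base="dc=mydomain,dc=com",
          search_filter=f"(sAMAccountName={username})",
          attributes=["cn", "sAMAccountName"],
      )
      for entry in conn.entries:
          print(entry)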

    Read the article

  • Multiple Schedule/Task Views (devs, customer, etc) in MS Project (or other)

    - by ThePlatypus
    Is there a way to configure multiple views/filters in MS Project for the same set of tasks? I want to have a view that is configured for customers only, for example. So it would have large milestone tasks only, and hide all of the details that are there for the benefit of the team members. If MS Project doesn't do this, is there any project management software that does? A free/open source version would be preferable. Edit: Apparently, "filter" was the word I needed in my Google searches. I found how to mark a task as a Milestone and use the pre-made Milestone filter. However, that still doesn't hide non-Milestone tasks the way I want it to. I haven't figured out how to make a true custom filter.

    Read the article

  • Understanding this error: apr_socket_recv: Connection reset by peer (104)

    - by matthewsteiner
    When I do some benchmarking with Apache Bench (ab) using large numbers of requests, sometimes in the middle of a test I get this error. I don't even know what it means. So how can I fix it? Or is it just something that will happen if the server gets too many hits anyway? The problem is, if I run 10,000 hits, it'll all run perfectly. If I run it again, it'll get to 4,000 and get the error: apr_socket_recv: Connection reset by peer (104) A little about my setup: I have nginx taking static requests and passing dynamic ones to Apache. The file in question is served from cache by nginx, so I guess it's probably got to do with how nginx is handling the requests? Ideas?

    Read the article

  • Drag/drop mail folder from Outlook to Explorer export to .pst file?

    - by Alex
    While working on large-scale projects, I collect all mail conversations pertaining to the project in a separate folder within Outlook. When the project is completed, I want to archive the entire folder for reference purposes. So my question is this: does anyone know of a tool/plugin for Outlook 2007/2010 that allows you to drag/drop a mail folder from within Outlook to a Windows Explorer window, exporting the folder as a .pst file, preserving folder structure and timestamps on mails? I am doing this manually at the moment, but having a tool that would use simple drag/drop functionality would greatly simplify my workflow for handling project-specific mail.

    Read the article

  • How to set WAN side buffers for WRT54GL running Tomato Firmware

    - by Vickash
    I've recently set up a machine running m0n0wall to try and fight buffer bloat and do some traffic shaping. It was more convenient (geographically speaking) to connect the cable modem directly to my old WRT54GL, then pass everything to the m0n0wall machine and have that do the real routing work. It took a bit of work, but it's working pretty well. I have a cable connection. I have m0n0wall set up to utilize only 90% of the specified speed of my subscription, which is fine. But I've noticed that at certain times of the day (possibly when my true bandwidth drops below that 90%), there's more latency if the connection is used heavily, and traffic shaping doesn't seem to work as well. I suspect this is caused by the buffers on the WRT54GL still being unnecessarily large. If the connection is working as expected, they won't get filled, but in times of reduced bandwidth they would. Does anyone know the command I need to execute, on the WRT54GL running Tomato Firmware, to reduce the buffers on the WAN interface to the minimum size possible?

    Read the article

  • Is there a registry key change that will by-pass the Windows Domain Join Welcome page?

    - by user1256194
    I'm scripting some Windows Server 2008 R2 builds using PowerShell. Some software needs to be installed after the server has joined the domain. Since I want to automate everything, I'm looking to bypass the domain join Welcome page using a registry hack script. I work for a large company, and the Active Directory people are unwilling to change group policy. I figure if it's a registry key, I can script the change, install the software, replace the key and reboot as the final step. Is there a registry key change that will bypass the Domain Join Welcome page?

    Read the article

  • Failed to Upload a File

    - by CrazyNick
    User 'X' is the site-collection owner. He tries to upload a 500KB file into a document library and gets the error "The server has aborted your upload. The files selected may exceed the server's upload size limit. If you are transferring a large group of files, try uploading fewer at a time." However, web-application owners are able to upload the file. What would be the issue? Any thoughts? Upload size limit for a file: 5 MB. Site quota template set: 50 MB. Used site quota: 10 MB.

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK of the following spec: HP DL385 G2 (previous generation), single dual-core Opteron 2214 (2.2GHz), 4GB RAM, 2x 10,000 RPM SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be... I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box... So is the next step to have two boxes: one for the database, one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!
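
    To illustrate the memcache idea being weighed up, here is a minimal cache-aside sketch in Python using pymemcache; the site itself is PHP, so this is purely illustrative, and the key scheme, TTL and query helper are assumptions. The point is that memcached only needs enough RAM for the hot query results you choose to cache, whether that sits on the web box or a dedicated one.

      import json

      from pymemcache.client.base import Client

      cache = Client(("127.0.0.1", 11211))   # assumed memcached location

      def get_product(product_id, db):
          """Cache-aside: try memcached first, fall back to the expensive join."""
          key = f"product:{product_id}"           # hypothetical key scheme
          cached = cache.get(key)
          if cached is not None:
              return json.loads(cached)
          row = db.run_expensive_join(product_id)  # placeholder for the real query
          cache.set(key, json.dumps(row), expire=300)   # cache for 5 minutes
          return row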

    Read the article

  • How can I exceed the 60% Memory Limit of IIS7

    - by evilknot
    Pardon if this is more stackoverflow vs. serverfault. It seems to be on the border. We have an application that caches a large amount of product data for an e-commerce application using ASP.NET caching. This is a dictionary object with 65K elements, and our calculations put the object's size at ~10GB. Problem: The amount of memory the object consumes seems to be far in excess of our 10GB calculation. BIGGEST CONCERN: We can't seem to use over 60% of the 32GB in the server. What we've tried so far (sf doesn't allow the tags, pardon the formatting):
      In machine.config/system.web: processModel autoConfig="true" memoryLimit="80"
      In web.config/system.web/caching/cache: privateBytesLimit = "20000000000" (and 0, the default, of course); percentagePhysicalMemoryUsedLimit = "90"
    Environment: Windows 2008 R2 x64, 32GB RAM, IIS7. Nothing seems to allow us to exceed the 60% value. See attached screenshot of taskman.

    Read the article

  • NGINX + PHP FPM connect() failed (110: Connection timed out) while connecting to upstream

    - by Leonard Teo
    We're running a fairly large site using nginx and PHP-FPM, and we're getting a lot of errors as the site load is quite high. We're getting "connect() failed (110: Connection timed out) while connecting to upstream"...upstream: "fastcgi://127.0.0.1:9000". Here's my config file for PHP-FPM:
      [www]
      listen = 127.0.0.1:9000
      listen.allowed_clients = 127.0.0.1
      user = nginx
      group = nginx
      pm = dynamic
      pm.max_children = 100
      pm.start_servers = 20
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      pm.max_requests = 100
      slowlog = /var/log/php-fpm/www-slow.log
      php_admin_value[error_log] = /var/log/php-fpm/www-error.log
      php_admin_flag[log_errors] = on
    What's the recommended config/number of servers/children for a high-traffic site? We tried using Unix sockets instead of TCP and got no noticeable improvement. Right now the errors are: connect() to unix:/var/run/php-fcgi.sock failed (11: Resource temporarily unavailable) while connecting to upstream...upstream: "fastcgi://unix:/var/run/php-fcgi.sock:"... Thanks, Leonard
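
    A common rule of thumb, not official PHP-FPM guidance: size pm.max_children from the memory you can give PHP divided by the average per-worker resident size. The figures in this sketch are assumptions for illustration; measure your own workers' RSS before applying anything like it.

      # Rough pm.max_children sizing sketch (assumed numbers, adjust to measurements).
      ram_for_php_mb = 6 * 1024      # RAM you are willing to dedicate to PHP-FPM
      avg_worker_rss_mb = 60         # average resident size of one php-fpm worker
      headroom = 0.8                 # leave 20% slack for spikes and other processes

      max_children = int(ram_for_php_mb * headroom / avg_worker_rss_mb)
      print(f"pm.max_children ~= {max_children}")   # ~81 with these assumptions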

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply). The Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents and download small files. When a client attempts to download a large file (e.g. if the download continues for 3 minutes, which is still under 20 minutes, but relatively long), the server sends all the data, then the connection times out, the client issues REST / RETR commands attempting to restart the download from just after the last byte (which I believe should succeed and receive exactly 0 bytes), and the server behaves as if the client tried to restart after byte 0, that is, it sends the entire file all over again. Any ideas on how to fix this?
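
    For reference, the restart behaviour being described, sketched with Python's ftplib; the host, credentials and file name are placeholders. The client sends REST with an offset followed by RETR, and a correctly resumed transfer should deliver only the bytes after that offset.

      import os
      from ftplib import FTP

      HOST, USER, PASSWD = "ftp.example.com", "user", "secret"   # placeholders
      REMOTE, LOCAL = "bigfile.bin", "bigfile.bin"

      ftp = FTP(HOST)
      ftp.login(USER, PASSWD)

      offset = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0
      with open(LOCAL, "ab") as fh:
          # retrbinary's rest= argument makes ftplib send REST <offset> before RETR,
          # so the server should resume from that byte instead of byte 0.
          ftp.retrbinary(f"RETR {REMOTE}", fh.write, rest=offset)
      ftp.quit()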

    Read the article

  • How can you add two lines of text on a single line in Word 2010?

    - by deodorant
    Odd title; I wasn't sure how to word it. Basically, I have two separate fonts I want to be on the same line, for resume purposes. My name is in a large font at the top, and I want my email and website address right-aligned directly beside it, one on top of the other. However, I want the email and website to combine to the same height as my name. Is this even possible with Word? Surely it is. Here is an awesome graphic of what I'm hoping for. Thanks! Edit: Seems new users can't post images. Link is here: http://i.stack.imgur.com/0gc3s.png

    Read the article

  • Looking for FTP server that allows user management from database

    - by hughesdan
    I'm planning a server application that will handle files uploaded via FTP. The application must parse text documents that it receives and write them to a database (most likely a document-oriented database like Mongo). And the application must also relay all large binary files it receives to Amazon S3 for storage and hosting. I'd like to manage all aspects of the FTP server programmatically. For example, when a user registers via a web page the application should be able to create the user account in the database and provision a directory on the server for receiving files. I'm using a Linux server but am otherwise open to considering any programming language or framework. I experimented with VSFTPD but didn't like the way the application relies on config files and the creation of users and directories via the command line. Can someone please recommend what server framework I should consider? I'm a little biased toward solutions that leverage Javascript/Nodejs or Python. However, I'm open to anything that can run on a Linux box.
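
    Since Python is listed as an acceptable option, here is a minimal sketch using the pyftpdlib library, which supports this pattern via a custom authorizer. The SQLite schema, home-directory layout and the upload hook are assumptions; a production setup would need hashed passwords, error handling and the real Mongo/S3 plumbing.

      import os
      import sqlite3

      from pyftpdlib.authorizers import DummyAuthorizer, AuthenticationFailed
      from pyftpdlib.handlers import FTPHandler
      from pyftpdlib.servers import FTPServer

      DB = "users.db"            # assumed user database (could equally be Mongo/Postgres)
      FTP_ROOT = "/srv/ftp"      # assumed per-user directory root

      class DatabaseAuthorizer(DummyAuthorizer):
          """Look users up in a database instead of pyftpdlib's in-memory table."""

          def validate_authentication(self, username, password, handler):
              con = sqlite3.connect(DB)
              row = con.execute(
                  "SELECT password FROM users WHERE username = ?", (username,)
              ).fetchone()
              con.close()
              if row is None or row[0] != password:    # plaintext only for the sketch
                  raise AuthenticationFailed("Login failed.")
              # Provision the user's directory on first login (or at registration time).
              home = os.path.join(FTP_ROOT, username)
              os.makedirs(home, exist_ok=True)
              if not self.has_user(username):
                  self.add_user(username, password, homedir=home, perm="elradfmw")

      class UploadHandler(FTPHandler):
          def on_file_received(self, file):
              # Hook point: parse text documents into the database, or push large
              # binaries to S3 (e.g. with boto3) and delete the local copy.
              print("received", file)

      def main():
          handler = UploadHandler
          handler.authorizer = DatabaseAuthorizer()
          FTPServer(("0.0.0.0", 2121), handler).serve_forever()

      if __name__ == "__main__":
          main()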

    Read the article

  • Available instance types for Marketplace AMIs

    - by Christian
    I based my autoscaling AMIs on the TurnKey Linux nginx AMI from the Marketplace. I am now unable to select any of the newer-generation instance types; for instance, my autoscaling uses the m3.large type, but I'd really like it to use the c3.xlarge type, and every time I try to create a c3.xlarge instance with my AMI I get the error: The instance configuration for this AWS Marketplace product is not supported. My question is: can I override this? I'm not using TKL support or any of their services, just the AMI. If I can't override it, do I have any other options besides creating a brand new AMI from scratch to use?
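
    Background that can be checked from the API: Marketplace AMIs carry a product code, and that code is inherited by AMIs derived from them, which is what the instance-type restriction is attached to. A small boto3 sketch to inspect it; the AMI ID and region are placeholders.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")     # assumed region
      image = ec2.describe_images(ImageIds=["ami-12345678"])["Images"][0]  # placeholder AMI ID

      # A derived image that still shows ProductCodes is tied to the Marketplace
      # product and inherits its instance-type restrictions.
      print(image.get("ProductCodes", []))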

    Read the article

  • Symbolic links and 7zip

    - by Fire Lancer
    I'm trying to compress a folder into a .7z archive. This folder contains symbolic links to some other stuff outside the folder (both directories and files). Apparently 7zip just archives the link itself, which is not what I intended. Is there a way to tell 7zip that I want it to archive the stuff that the link points to, not the link itself? So if there is a symlink named "foo" which points to "C:\stuff\foo", I want it to include the "C:\stuff\foo" stuff in the archive in place of foo, not a "0 byte" symlink. Failing that, is there any reasonable way around it, apart from adding the files and folders in question individually? Including the stuff through the symlinks there are like 10,000 files, the large proportion of which are via symlinks, so adding them all individually would take hours... I'm thinking maybe a program that creates a staging folder with the real files in it and then passes that to 7zip, or just an archiver that handles them better?
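
    The staging-folder idea from the question, sketched in Python: shutil.copytree with the default symlinks=False follows the links, so the copy contains the real files, and the result is then handed to the 7z command line. The paths are placeholders, 7z is assumed to be on the PATH, and this needs enough disk space for the temporary copy.

      import os
      import shutil
      import subprocess
      import tempfile

      SOURCE = r"C:\projects\mystuff"        # placeholder: folder full of symlinks
      ARCHIVE = r"C:\projects\mystuff.7z"    # placeholder: output archive

      with tempfile.TemporaryDirectory() as tmp:
          staging = shutil.copytree(SOURCE, os.path.join(tmp, "staging"), symlinks=False)
          # symlinks=False (the default) copies the files/dirs the links point to,
          # so the staging tree contains real data instead of 0-byte links.
          subprocess.run(["7z", "a", ARCHIVE, staging], check=True)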

    Read the article

  • Hyper-V core NIC speeds and registry changes

    - by gary
    Good afternoon. On a Dell PE T610 I have Hyper-V core running, with 2 x Broadcom BCM5709C NetXtreme II GigE NICs installed. I have noticed that copying large files (17GB, for example) from a physical server on the network to the Hyper-V host's local drive [not a VM guest] is very slow in comparison to copying between physical servers. Copying a 17GB file from physical to the Hyper-V host takes 30 minutes; copying a 17GB file from physical to physical takes 15 minutes. Can someone tell me exactly which registry values I should disable on the Hyper-V NICs to improve performance? So far I have gone to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318} and set the following to 0 on both physical NICs: *LSOv1IPv4, *LSOv2IPv6, *TCPUDPChecksumOffloadIPv4, *TCPUDPChecksumOffloadIPv6. Should I also disable *TCPConnectionOffloadIPv4 and *TCPConnectionOffloadIPv6? Many thanks in advance
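
    For scripting the same change rather than editing by hand, a hedged Python/winreg sketch that walks the network-class key quoted above and zeroes the offload values named in the post on every adapter instance. It assumes those values are stored as REG_SZ strings, needs administrative rights, and the NICs (or the host) must be restarted before the change takes effect; test on one adapter first.

      import winreg

      NET_CLASS = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}"
      VALUES = ["*LSOv1IPv4", "*LSOv2IPv6",
                "*TCPUDPChecksumOffloadIPv4", "*TCPUDPChecksumOffloadIPv6"]

      with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NET_CLASS) as cls:
          index = 0
          while True:
              try:
                  sub = winreg.EnumKey(cls, index)
              except OSError:               # no more subkeys
                  break
              index += 1
              if not sub.isdigit():         # adapter instances are 0000, 0001, ...
                  continue
              access = winreg.KEY_READ | winreg.KEY_SET_VALUE
              with winreg.OpenKey(cls, sub, 0, access) as adapter:
                  for name in VALUES:
                      try:
                          winreg.QueryValueEx(adapter, name)
                      except FileNotFoundError:
                          continue          # this adapter doesn't expose the property
                      winreg.SetValueEx(adapter, name, 0, winreg.REG_SZ, "0")
                      print(f"{sub}: set {name} = 0")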

    Read the article

  • Asterisk, IAXModem & Hylafax how-tos?

    - by Brian Postow
    I'm trying to set up Asterisk and IAXModem to send faxes via T38 (Yes, I know I'm swatting a fly with a Buick...) However, since I'm trying to do something so small with a product so large, I'm having trouble finding samples or how-tos that show me how to set this up. I've got all three installed, and I THINK I have my IAXModem config correct. I'm pretty sure that I have Hylafax correct (I've used it with T38Modem) so, I need to know which of the Asterisk samples I need to use, and how to use them. I think I want to use some combination of iax.conf, iaxprov.conf, sip.conf and sip_notify.conf. But I'm not sure where to put them, or what to change... I'm sure that the answer is RTFM, but I'm not sure WHICH M, or where in it to R... thanks.

    Read the article

  • Strategies for very fast delivery of webpages

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are cached by nginx, and only 10-15% of hits go to the backend proxy. The cache is bypassed for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning: the default route on eth0 has initcwnd 15, and net.ipv4.tcp_slow_start_after_idle is set to 0. Despite everything coming from cache and the large initcwnd, my pages still take 2.5-3 seconds. My YSlow and Page Speed scores are shown in the screenshots (omitted here). Are there strategies that can help deliver webpages even faster than this? Deliver pages in about 1 second for a 10KB payload? Notes: my servers run out of a fairly good Linode data center in Fremont.

    Read the article

  • How to dismiss the Windows 8.1 Upgrade banner?

    - by user81430
    As of today, my Windows 8 system is displaying a large banner covering about a quarter of the screen, demanding that I upgrade to Windows 8.1 for free. The banner helpfully informs me that I can continue to use the computer while the upgrade is downloading. There's a button to go to the Store for the upgrade, but no other choices, such as No Thanks, Maybe Later, or Absolutely Not. I can't find a way to dismiss the banner, and I can start programs but can't give them focus while the banner is present. So I'm dead in the water. How can I fix this? Bringing up Task Manager doesn't give me any options. My employer doesn't yet support Windows 8.1 (which is also an issue, but beyond my control), so I won't be able to connect to work if I install the upgrade. So there is no way on God's green earth that I'm going to install this upgrade now.

    Read the article

  • Monitoring HP and Dell hardware in Gentoo

    - by ewwhite
    I'm working in an environment that features a large number of Gentoo servers running on HP ProLiant and Dell PowerEdge equipment. While I've moved some of these systems to RedHat or CentOS for consistency, I'm still left with a good number of systems that will remain Gentoo. One of the issues I see with the Gentoo arrangement is lack of vendor-supported hardware monitoring. There doesn't seem to be an equivalent to the HP ProLiant Support Pack or Dell's agents for Gentoo. Is this simply something that you give up when using this distribution? How do you monitor hardware health and the like with Gentoo systems?

    Read the article
