Search Results

Search found 6028 results on 242 pages for 'total commander'.

  • Exchange Failover Solution

    - by Dan
    I've been given the task of coming up with an Exchange solution that will support 200 users total across four states, at 1 GB per user. It needs a failover solution, and the failover must reside in another location; an MPLS network connects the locations. I'm hoping to get recommendations on hardware and software setups. I recently worked with some big-name reps who steered me in the wrong direction, and now I'm a few days away from my proposal date with bogus quotes, scrambling for a solution. I managed a standalone Exchange 2003 server for years but am at a loss now when it comes to clustering/failover. Any help would be greatly appreciated. Thank you.

  • Windows server backup fails at 40%

    - by Abraham Borbujo
    I'm configuring Windows Server Backup to take a full system backup. It starts fine, but when it is backing up the system drive (C:) it stops at 40% every time I try; it only backs up 7.28 GB of the total 18.19 GB. I tried changing the destination drive and also checking the C: filesystem for problems, but it seems to be OK and the problem remains. I get a message telling me the backup completed with warnings, and the warning details say it didn't complete because of an input/output error in the source or destination. Thanks for your help.
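
    If it helps with debugging: Windows Server Backup writes detailed events that usually name the exact file or volume behind an I/O error, and a surface scan of the source disk is the other obvious check. The commands below are standard built-ins, though the event channel name is from memory, so list the channels first if it doesn't match:

        rem locate the backup event channel, then dump its most recent entries
        wevtutil el | findstr /i backup
        wevtutil qe "Microsoft-Windows-Backup" /c:5 /rd:true /f:text
        rem scan C: for bad blocks (for the system drive this runs at next reboot)
        chkdsk C: /r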

  • Can the JVM (Oracle) run into an OutOfMemory error if the heap size is below the max?

    - by user439407
    I am running a Tomcat site (with an Nginx front end) that seems to be randomly running out of memory even though the max heap size is pretty large. My question is: is it possible for the JVM to get an OutOfMemory error even if the heap size is significantly less than -Xmx? For instance, here is a snapshot I took just 15 seconds before an OutOfMemory error:

        Tue Dec 18 23:13:28 JST 2012
        Free memory:   162.31 MB
        Total memory:  727.75 MB
        Max memory:   3808.00 MB

    I guess theoretically it's possible that my code generated 3 GB worth of objects in 15 seconds, but I highly doubt it. It seems like the JVM was unable to grow the heap even though it theoretically had room. Is it possible that other processes started using memory to the point that the JVM could not grow? I am running 64-bit Oracle HotSpot on a 64-bit VM running CentOS 5 with 6 GB of RAM.
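
    For what it's worth, HotSpot throws OutOfMemoryError for more than just a full heap: PermGen exhaustion and "GC overhead limit exceeded" raise the same exception type, so the detail message matters. And if heap growth itself is the suspect, one standard workaround is to commit the whole heap up front by setting the initial size equal to the maximum; a minimal sketch for Tomcat, reusing the 3808 MB figure from the snapshot above:

        # commit the full heap at startup so the JVM never has to grow it later
        export CATALINA_OPTS="-Xms3808m -Xmx3808m"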

  • HTTP transfer speeds start fast then slow to a crawl

    - by AnITAdmin
    We just got a new dedicated 1-gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused. We are pushing 110 Mbit/s overall, yet individual transfers are really slow. In fact, here's how it happens: we connect, the speed is really fast at first, and it quickly declines to 40 kBps or less. What's going on? It seems the server just won't go above 120 Mbit/s in total. The files are all very large, 50 MB to 500 MB; could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing remotely all seem fine.
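
    One way to pin down where the decay happens is to measure a transfer from a client outside the browser; curl can report the average rate it saw for the whole download. A rough check, with a placeholder URL:

        # average download rate as the client sees it, without touching local disk
        curl -o /dev/null -sS -w 'average: %{speed_download} bytes/sec\n' http://server.example.com/bigfile.bin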

  • Windows 7 x64: some 32-bit applications refuse to install

    - by user250712
    I have been having problems lately when trying to install older games on my PC; it happens only with 32-bit applications. A few games that will not install are Drakan: Order of the Flame, TA Kingdoms (Total Annihilation installed fine), and Baldur's Gate. With Baldur's Gate, when I use autorun.exe and choose Install, the autorun closes and the computer loads for a second (as it should), then nothing pops up. Ten minutes later, still nothing, so I try again: still nothing. Next I use Setup.exe directly: still nothing. I run it in every compatibility mode, and as Administrator in every mode, and still nothing. Then I open Task Manager, and there are about 80 setup.exe processes running, all of them doing nothing and taking up next to no resources.
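
    Before retrying, it is probably worth clearing out the stuck instances from an elevated command prompt, since dozens of half-started installers can block one another:

        rem force-kill every hung installer process
        taskkill /F /IM setup.exe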

  • Windows 7 Explorer preview pane and password-protected Word documents

    - by Carbonara
    When using Windows 7 Explorer with the preview pane open, you get a little preview of a file when you click on it. This works for Word documents, Excel spreadsheets, etc. My problem is with password-protected Word documents: clicking on one in Explorer automatically asks for the password in order to display its preview, whether you single- or double-click it. You then get an empty Word instance running (which allows the preview to be displayed) and another instance of Word with your actual file, and you're asked for the password twice in total. This is annoying and untidy. Is there a way of stopping the preview pane from trying to display password-protected documents, and thus not asking for the password just to show a preview?

  • Dry length of buoy in OrcaFlex

    - by KAE
    I use a software package called OrcaFlex to model the behavior of a buoy in ocean waves. I would like to share OrcaFlex questions in this forum; hope some users are out there! Here is a starter question: for a 6D buoy, I extracted the 'Dry Length' after the simulation completed. The value of the dry length sometimes slightly exceeds the actual height of the buoy, even though this would not seem to be possible given the formula from the manual: Dry Length = (cylinder length) × (cylinder volume above surface) / (cylinder total volume). Any insights?
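
    Plugging made-up numbers into that formula shows why the overshoot is puzzling: a 2.0 m cylinder with 60% of its volume above the surface gives

        dry length = 2.0 m × 0.6 = 1.2 m

    and since the volume ratio can never exceed 1, a single uniform cylinder should never report a dry length above its own 2.0 m. My guess, and it is only a guess, is that the overshoot comes from how the instantaneous wave surface is sampled for a multi-segment buoy rather than from the formula itself.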

  • Exchange 2010 and DAG - all roles on both servers?

    - by Keith
    We recently migrated to an Exchange 2010 server. Currently all of the roles and mailboxes are installed on one server (we are a small company with fewer than 100 users). I want to use a DAG for replication, but it seems most DAG setups require at least three or four servers in total. Is there any way to make this work with just two servers, with both servers holding all the roles and the mailboxes? Or maybe there is a better way to do this than a DAG? I'm open to suggestions. The goal is to have some sort of replicated server so that if there is an issue with our primary Exchange server, another one can be brought up within an hour or so with all current information (not a backup). It doesn't necessarily have to be instantaneous.
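
    For the record, a two-member DAG with all roles on both servers is a supported Exchange 2010 configuration; the catch is that a DAG needs a witness on some third machine (any file server will do, it does not have to run Exchange) to maintain quorum. A rough sketch in the Exchange Management Shell, with placeholder server and database names:

        New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FILESRV1 -WitnessDirectory C:\DAG1
        Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX1
        Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX2
        Add-MailboxDatabaseCopy -Identity "Mailbox Database 01" -MailboxServer EX2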

  • Format Excel cells to display as '##:##:##'

    - by David Gard
    I'm trying to format cells in Excel so that they display the total duration of phone calls as hh:mm:ss, but Excel is giving me errors. Sometimes durations are only mm:ss (49:10), or even just ss (35), and I need them to display as 00:49:10 and 00:00:35 respectively. However, when I select 'Custom' on the 'Number' tab when formatting the cells and enter either 00:00:00 or ##:##:##, Excel tells me "Microsoft Office Excel cannot use the number format you typed." Also, plain hh:mm:ss will not work for me, as I'm dealing with durations, not times of day. Is anyone able to tell me how to format this? Thanks.
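
    In case it is useful to others: Excel's duration-style format is [hh]:mm:ss, where the bracketed hours keep counting past 24 instead of rolling over. The remaining wrinkle is how the raw values were entered: a bare 49:10 is parsed on entry as 49 hours 10 minutes, not 49 minutes 10 seconds, and a bare 35 arrives as just the number 35. Assuming the raw value sits in A1, hedged conversion formulas would be:

        custom number format:            [hh]:mm:ss
        cells entered as mm:ss           =TEXT(A1/60, "[hh]:mm:ss")      gives 00:49:10
        cells entered as bare seconds    =TEXT(A1/86400, "[hh]:mm:ss")   gives 00:00:35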

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues, and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET, and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3 GB), but I cannot seem to figure out how to do the same in the Windows world.
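
    For anyone comparing mental models: the CLR has no -Xmx equivalent. A .NET heap grows on demand until the OS runs out of memory, so the relevant IIS knobs are the application pool's recycling limits, which cap memory rather than grant it. Assuming the default pool name, the private-memory recycling limit (in KB; 0 means unlimited) can be inspected or changed with appcmd:

        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /recycling.periodicRestart.privateMemory:0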

  • MySQL: how to convert many MyISAM tables to InnoDB in a production database?

    - by Continuation
    We have a production database that is made up entirely of MyISAM tables, and we are considering converting them to InnoDB to gain better concurrency and reliability. Can I just ALTER the MyISAM tables to InnoDB without shutting down MySQL? What are the recommended procedures here? How long would such a conversion take? All the tables together total about 700 MB, but there is quite a large number of them. Is there any way to apply ALTER TABLE to all the MyISAM tables at once instead of doing it one by one? Any pitfalls I need to be aware of? Thank you.
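
    One common way to handle the "all at once" part is to generate the ALTER statements from information_schema and replay them. Each ALTER still locks its table while it rebuilds, so a quiet window helps even though MySQL itself stays up. A sketch, with mydb as a placeholder database name and credentials omitted:

        mysql -N -e "SELECT CONCAT('ALTER TABLE \`', table_schema, '\`.\`', table_name, '\` ENGINE=InnoDB;')
                     FROM information_schema.tables
                     WHERE engine = 'MyISAM' AND table_schema = 'mydb';" > convert.sql
        mysql mydb < convert.sql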

  • Apache stops serving requests when connections increase

    - by Gunjan
    The values for MaxClients, ServerLimit, and related parameters are quite high (4000), and available RAM on the server is high too (~8 GB). The load average remains below 1 on a 24-core CPU. But when the number of visitors on the website increases, Apache just stops serving requests: the Apache error log is blank and the access log shows no more requests coming in. Restarting Apache makes it work again until the number of requests rises again. Any ideas where to start looking?

    UPDATE: running with LogLevel debug, I get the following in the Apache error log:

        [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers),
        spawning 32 children, there are 479 idle, and 1027 total children
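
    One thing worth double-checking against that message: in the Apache 2.2 prefork MPM, ServerLimit has to appear before MaxClients in the configuration (otherwise MaxClients gets capped with only a startup warning), and a change to ServerLimit only takes effect on a full stop/start, not a graceful restart. A consistent block might look like this, with illustrative numbers:

        <IfModule prefork.c>
            ServerLimit          4000
            StartServers           50
            MinSpareServers        50
            MaxSpareServers       200
            MaxClients           4000
            MaxRequestsPerChild  4000
        </IfModule>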

  • VPS showing low disk space despite having nothing major on it

    - by SheoNarayyan
    Hello experts. On my VPS server I was trying to see the used disk space; when I open My Computer it shows 17.9 GB free out of 39.8 GB, which means 21.9 GB is used. However, when I select all files and folders on C: and check the total size, it comes to only about 11 GB. The difference is around 10 GB. Where is this 10 GB going if I have not stored anything else here? I asked the above question of my VPS provider, and he responded:

        Check hidden files/system files/etc. This is default Windows OS and its utilization and not
        specific to setup. If you want specifics of usage, you can go ahead and get in touch with
        the Microsoft support team and they'll provide you with exact specification of the same.

    I am sure that Windows must not be taking up 10 GB of space for hidden files and folders. My VPS has Windows Server 2008 R2 installed. Can anyone help me figure out who is right?
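
    Ten gigabytes of usage that Explorer's select-all cannot see is actually plausible on Server 2008 R2: the page file and hibernation file are sized relative to RAM, select-all skips anything your account cannot read, and shadow-copy storage is invisible to it entirely. Two standard checks from a command prompt:

        rem files that select-all misses (pagefile.sys, hiberfil.sys, ...)
        dir C:\ /a
        rem space reserved for shadow copies / previous versions
        vssadmin list shadowstorage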

  • What is the best way to configure the number of workers in Apache?

    - by rbm
    My site receives a lot of traffic for two hours during the day (2,000 hits per minute); the rest of the day it receives less (around 500 hits per minute). I have been experimenting with the MaxClients and MaxSpareServers values, but I still get some downtime during peak hours. How can I calculate the best values for my configuration based on the amount of RAM that I have? Each process uses roughly 36-40 MB of memory.

                     total       used       free     shared    buffers     cached
        Mem:          3096        793       2302          0          0          0
        -/+ buffers/cache:        793       2302
        Swap:            0          0          0

    Values that I am using now:

        <IfModule prefork.c>
            StartServers          10
            MinSpareServers       22
            MaxSpareServers       60
            ServerLimit           90
            MaxClients            90
            MaxRequestsPerChild  400
        </IfModule>
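
    The usual back-of-the-envelope calculation is MaxClients ≈ RAM you can spare for Apache ÷ per-child size. With the numbers above, taking ~38 MB per child:

        MaxClients ≈ 2302 MB / 38 MB ≈ 60

    which suggests the current setting of 90 could push the box into swap at full load. On the demand side, 2,000 hits per minute is about 33 requests per second, and 60 workers sustain that only if the mean request time stays under roughly 60 / 33 ≈ 1.8 seconds, so the request duration is the other number worth measuring.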

  • AWS EC2: how to compute the cost

    - by EsseTi
    I'm new to AWS; I'm using the free tier right now and it's terrific. But the free tier expires in a year, so I went to the pricing page (http://aws.amazon.com/ec2/pricing/), and I didn't really get how to compute the cost. The prices are in $ per hour, but I don't think that means that if I need my application running 24h/365d I have to multiply by 8,760, or do I? They write about "usage", but how do I compute that value? If I have one website where people spend a total of about 10 minutes a month, and another where people spend 750 hours a month, do I pay the same? I can't believe it would be the same price. P.S. If I have a scheduled task, does it affect the usage?
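
    As I understand EC2 billing (worth verifying against the current price list): an instance-hour is an hour your instance is running, whether or not anyone visits it, so both example sites cost the same if each sits on an identical always-on instance. Visitor time only shows up indirectly as data transfer, which is billed separately per GB, and a scheduled task does not add instance-hours on a machine that is already running. A worked example with a made-up hourly rate:

        24 h/day × 365 days = 8,760 instance-hours/year
        8,760 h × $0.08/h (illustrative rate) ≈ $700/year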

  • Authlogic help! Registering a new user while already logged in as another user is not working

    - by looloobs
    Hi. Just as a disclaimer, I am new to Rails and programming in general, so I apologize if I am misunderstanding something obvious. I have Authlogic with activation up and running. For my site, I would like users who are logged in to be able to register other users. The new user would pick their login and password through the activation email, but the existing user needs to put in their email, position, and a couple of other attributes; I want that to be done by the existing user. The problem I am running into: if I am logged in and then try to create a new user, it just tries to update the existing user and doesn't create a new one. I am not sure if there is some way to fix this by having another session start? If that is even right/possible, I wouldn't know how to go about implementing it. I realize that without knowing my full application it may be difficult to answer this, but does this even sound like the right way to go about it? Am I missing something here?

    Users controller:

        class UsersController < ApplicationController
          before_filter :require_no_user, :only => [:new, :create]
          before_filter :require_user,    :only => [:show, :edit, :update]

          def new
            @user = User.new
          end

          def create
            @user = User.new
            if @user.signup!(params)
              @user.deliver_activation_instructions!
              flash[:notice] = "Your account has been created. Please check your e-mail for your account activation instructions!"
              redirect_to profile_url
            else
              render :action => :new
            end
          end

          def show
            @user = @current_user
          end

          def edit
            @user = @current_user
          end

          def update
            @user = @current_user # makes our views "cleaner" and more consistent
            if @user.update_attributes(params[:user])
              flash[:notice] = "Account updated!"
              redirect_to profile_url
            else
              render :action => :edit
            end
          end
        end

    My UserSessions controller:

        class UserSessionsController < ApplicationController
          before_filter :require_no_user, :only => [:new, :create]
          before_filter :require_user,    :only => :destroy

          def new
            @user_session = UserSession.new
          end

          def create
            @user_session = UserSession.new(params[:user_session])
            if @user_session.save
              flash[:notice] = "Login successful!"
              if @user_session.user.position == 'Battalion Commander'
                redirect_to battalion_path(@user_session.user.battalion_id)
              end
            else
              render :action => :new
            end
          end

          def destroy
            current_user_session.destroy
            flash[:notice] = "Logout successful!"
            redirect_back_or_default new_user_session_url
          end
        end
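
    A note that may help here: this looks like Authlogic's session maintenance at work; by default, persisting a user record logs the browser in as that user, which makes creating user B while logged in as user A behave strangely. Authlogic exposes a switch for exactly this, shown below as a hedged sketch (option name as I remember it from the Authlogic docs of that era). Note too that before_filter :require_no_user on :create will turn logged-in users away from that action before anything else happens, so it needs loosening as well.

        class User < ActiveRecord::Base
          acts_as_authentic do |c|
            # don't auto-login (or switch sessions) when a user record is saved
            c.maintain_sessions = false
          end
        end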

  • Excel: link value once, then prevent change

    - by user1832164
    For some budgeting spreadsheets I'm working on, I'd like to link each month to a value (in this case, a percentage). However, if the original percentage is changed, I ONLY want the change to apply going forward. For example, let's say item one is budgeted at 10%, so each month reflects 10% of the total (which changes every month). If I decide to change that to 12% going forward, I don't want the previously linked values to also change from 10% to 12% (and throw off lots of other numbers). My thought was to have a checkbox column where, if I place an x, the values are locked in as they were at the time of placing the x and no longer change. Is this possible? I know there are options for doing a paste special, but I'm creating this spreadsheet for someone who is not very Excel savvy, so I want it to be as seamless as possible. Many thanks.
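
    The "x in a box freezes the value" idea is workable with a small worksheet event macro, so the other person never touches Paste Special. A sketch under an assumed layout, with the flag in column B and the linked percentage next to it in column C (adjust to taste):

        Private Sub Worksheet_Change(ByVal Target As Range)
            Dim cell As Range
            If Intersect(Target, Me.Range("B:B")) Is Nothing Then Exit Sub
            ' when an "x" is typed in column B, replace the formula in the
            ' neighbouring column C cell with its current value, freezing it
            For Each cell In Intersect(Target, Me.Range("B:B"))
                If LCase(Trim(cell.Value)) = "x" Then
                    With cell.Offset(0, 1)
                        .Value = .Value
                    End With
                End If
            Next cell
        End Sub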

  • Enlarge partition on SD card

    - by chenwj
    I have followed "Cloning an SD card onto a larger SD card" to clone a 2 GB SD card onto a 32 GB SD card; the filesystem is ext4. However, on the 32 GB card I can only see 2 GB of available space. Is there a way to use the rest of it? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2           147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space, so I tried resize2fs /dev/sdb after the dd, but I get the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea what I am doing wrong? Thanks.
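
    Two things stand out, for what it is worth: resize2fs wants the partition (/dev/sdb2), not the whole disk (/dev/sdb), and the filesystem can only grow after the partition itself has been enlarged. The usual sequence looks like this; double-check the start sector (147456 here) before writing anything:

        sudo fdisk /dev/sdb        # d 2 (delete sdb2), n p 2, same start 147456, end = last sector, w
        sudo e2fsck -f /dev/sdb2   # a forced fsck is required before an offline resize
        sudo resize2fs /dev/sdb2   # grow ext4 to fill the enlarged partition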

  • How do I optimize a high traffic Wordpress website?

    - by mha
    Hello, I am running a WordPress-based site which is hosted on (mt) under a DV-Extreme package with 2 GB RAM plus a 256 MB add-on. It is a multi-author site where people are engaged in writing posts, leaving comments, updating statuses, etc. According to Google Analytics, this month's traffic was:

        Visitors:      45,764
        Pageviews:  1,051,186
        Visits:       141,447

    I have put the site behind a CDN, compressed the CSS, and used the W3 Total Cache plugin to optimize it. Since last month I have been getting several down notices from Pingdom; right now I am facing more down alerts than before and have to restart the site several times to bring it up again. Is my hosting resource not enough? Do I need more resources, or what else could be the solution? Helpful suggestions will be appreciated. Thanks.

  • How to take a backup mirror copy of the C: drive?

    - by metal gear solid
    I've installed everything I need on my C: drive: Windows 7, updated drivers, utilities, and software. Now I want to take a mirror backup of everything onto a DVD, or keep the backup on another USB HDD, so that if I face a Windows or hard-drive failure in the future I can restore everything exactly as it is today. I don't want to have to reinstall everything again: Windows, drivers, utilities, and all the needed software. My C: drive's total capacity is 108 GB, but the data on it is only 12 GB. What should I do? What is the best solution for me? I need a free solution.
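
    Windows 7 itself ships with a free tool that matches this: Control Panel → Backup and Restore → "Create a system image" writes a restorable image of C: to a USB disk or DVDs, and "Create a system repair disc" makes the boot media for restoring it. The same image can be scripted, e.g. to a USB disk mounted as E: (run from an elevated prompt):

        rem image the system drive, including all volumes needed to boot
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet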

  • Nginx issue with two web nodes

    - by HTF
    I'm running Wordpress website with Nginx and Memcached. I have simple DNS round robin balancing with A records pointing to both web servers. I've noticed the following entries in both web servers access logs: 192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000 192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000 192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000 192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000 192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000 I've configured W3 Total cache plugin for Wordpress - pointing to loopback address (127.0.0.1:11211) on each Wordpress installation. Is this because the webserver is trying to access content that is cached on the other web server? Shall I add IPs to W3 plugin of both web servers on each website (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure if this related to Memcached or maybe some configuration issue on the server itself? Regards

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Windows 7 (and maybe Vista, but I don't know for sure). I have tried all the usual options, such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. Is this at all possible?

  • Migrating 3 terabytes of files to a new Windows 2003 server

    - by smackaysmith
    We have a new file server to handle the obscene number of files generated by the company (PDFs, XLSs, DOCs, and JPGs); the files being moved to it total about 3 TB. The problem is that we can't take the company down for days to move them. The other problem is that the applications creating all these files have to reference previous files, so we can't simply point them at the new server; there also isn't an option to have the applications create files on the new server while referencing the old server for existing files. Both servers are x64 Windows 2003 R2 on the same subnet. DFS doesn't work for us. Is there an application that can handle this amount of data, copy the files over, throttle bandwidth, and do a 'merge'? By merge I mean continually copying over newly created files until the two servers are in sync.
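
    Robocopy, from the Windows Server 2003 Resource Kit Tools, is built for exactly this pattern: repeated incremental passes while the old server stays live, with /IPG to throttle bandwidth, then one final mirror pass at cutover. A sketch with placeholder paths:

        rem each re-run copies only new and changed files; /IPG throttles
        robocopy D:\Files \\NEWSRV\Files /E /COPYALL /Z /R:2 /W:5 /IPG:50 /LOG+:C:\robocopy.log
        rem final cutover pass (apps stopped) to make an exact mirror
        robocopy D:\Files \\NEWSRV\Files /MIR /COPYALL /R:2 /W:5 /LOG+:C:\robocopy-final.log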

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 \
                --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test any number of times before and never hit this error; I have only been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528
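
    One data point worth capturing: that free output was presumably taken after the kill, when memory had already been released; the OOM killer firing on a box that later shows 3.7 GB free usually means the spike was very fast or the commit accounting ran out first. At --rate=20000 with 6,000,000 connections, httperf's per-connection bookkeeping plus kernel socket buffers climb quickly. During a run, these standard reads show how close the commit charge gets to the limit:

        # watch commit charge vs. limit while httperf runs
        grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
        sysctl vm.overcommit_memory vm.overcommit_ratio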

  • What's faster, cp -R or unpacking tar.gz files?

    - by Buttle Butkus
    I have some tar.gz files that total many gigabytes on a CentOS system. Most of the tar.gz files are actually pretty small, but the ones with images are large: one is 7.7 GB, another is about 4 GB, and a couple are around 1 GB. I have already unpacked the files once, and now I want a second copy of all of them. I assumed that copying the unpacked files would be faster than re-unpacking them, but I started running cp -R about 10 minutes ago and so far less than 500 MB has been copied. I feel certain that the unpacking process was faster. Am I right? And if so, why? It doesn't seem to make sense that unpacking would be faster than simply duplicating the existing structures.
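
    Plausibly yes: extracting reads one large archive sequentially, while cp -R of a tree of many small files pays an open/read/write/seek cycle per file, so on spinning disks tar often wins. It is easy to settle empirically; with placeholder paths:

        # compare a fresh extraction against a recursive copy of the same tree
        time tar -xzf images.tar.gz -C /data/copy_from_tar
        time cp -R /data/existing_copy /data/copy_from_cp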
