Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox, and all email they have ever sent or received is in that mailbox. We don't do archiving to PST, and we thought that was the way forward. The problem we now have is that if someone switches to another PC for the day and opens Outlook, it has to download all emails to that PC first (cached mode), but even then, when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed, and it's been saying that for about a week! As a temporary workaround I have turned off instant searching, which allows them to search for anything, but it takes time to filter through, and Outlook doesn't clearly indicate whether it's still searching, so in most cases the user thinks the search isn't working when really it is just taking time to populate the results. I need a solution that allows the mailbox to be indexed really quickly if the user has to log in to another PC. Are we best using Online Mode instead of Cached Mode, or is there another way around this? Thanks in advance.

  • Where are essential Windows files located?

    - by Dorothy
    I am using Vista, but I would like the answer for XP, Vista, and Windows 7. I am writing a program where I want to count the Important or Essential files of a Windows PC. It looks like the Essential files would be located somewhere in C:/Windows, and after some research I found that some Essential files are located in C:/Windows/winsxs. What and where are the Essential files for a Windows PC? Is there a folder or set of folders that contains the Essential files? Are all the files in C:/Windows/winsxs Essential? (Essential definition: absolutely necessary; extremely important.)

  • Any large USB sticks with integrated card readers?

    - by Al
    I have one of Kingston's DataTraveller Micro Reader USB sticks, a fantastic memory stick with an integrated micro SD and M2 card reader. However, I've gradually filled it to the brim and am looking for a larger stick. Unfortunately, Kingston don't make them any bigger than the 4GB one that I currently have, and I was hoping to go to 16GB now that they've come down in price. Does anyone know if any manufacturer makes something similar: a 16GB stick with an integrated micro SD card reader? (I'm not bothered about the M2 reader.)

  • All files trying to start in Notepad

    - by Jormal
    This question has been asked before, but the solutions given were worthless to me, because ALL files are trying to open in Notepad. I mistakenly associated all exe files with Notepad, and everything is trying to open within Notepad now. That includes regedit, so suggesting I use regedit to doctor files does not work (a cmd window will not open from Winkey + R - Run). It also includes any program I download to fix the issue. I also cannot right-click and choose Open With, because that is not an available option when right-clicking on the majority of files, at least none of the exe files I want to start. Yes, I tried it on the program files, not the shortcuts. I also cannot use System Restore because it, too, tries to open in Notepad. I've been banging my head uselessly on this for hours. Could someone help me out?

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open the file gets read/write, and the second person to open the file, while the first person still has it open, gets a read-only version. This is not true for the graphics files, like Adobe Illustrator .AI files and Photoshop .PSD files. Anyone who goes to open these files has full read/write, even if someone else is already working on the file! This has led to numerous file-corruption issues, as well as other lost work, since it always saves the last changes to the file. How do we get Windows to properly lock these files so that when someone is working on a file and someone else wants to open it, they get read-only access? Many thanks, Bruce

  • Cancel table design change in SQL Server 2000

    - by Bryce Wagner
    If you open a table in SQL Server Enterprise Manager, change one of the columns, and save it, Enterprise Manager will create a table with the new definition, copy all the data to that new table, and then delete the old table when it's done. But if your table is large (let's say on the order of 100GB), it can take a long time to do this. Even worse, if you don't have sufficient disk space, it doesn't notice ahead of time; it will spend a long time trying to copy the table, run out of space, and then decide to abort the process. We have other ways to copy the data in smaller chunks, but those require significantly more manual intervention, so it's usually easier to just let Enterprise Manager figure it out, as long as there's enough disk space. So for a long-running "Design Table" save like this, is there any way to cancel once it's started? Or do you just have to wait for it to fail?

  • Finding out whether files are added, changed or deleted on a FTP server

    - by futureelite7
    I've recently been given the task of migrating about 200GB of data from one dedicated server to another. As this will take a week or more, I've been taking a snapshot of the current files on the FTP server using wget's mirror feature. However, since other users will probably be uploading or changing things in the meantime, the snapshot I have made will not include the most recent changes. Since I only have access to FTP on this server, I'm planning to write a script that will recursively do an FTP stat on all files in the FTP folder and compare the directory listing against the snapshot I have locally. If there are differences in the number of files, then I know files have been added or deleted. If the modification dates have changed, then I know the files have been changed, and I should re-download those files specifically. Am I missing anything in my approach, or are there any possible improvements to it?
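    A minimal sketch of the comparison described above, using only Python's standard-library ftplib and os modules. The host, credentials, and local snapshot path are hypothetical placeholders, and it assumes the server returns full paths from NLST and supports the MDTM command; a real run should start as a dry run that only prints the differences.

        from ftplib import FTP, error_perm
        import os

        def remote_listing(ftp, path, acc):
            """Recursively record {remote_path: mtime} for files under path."""
            for name in ftp.nlst(path):
                try:
                    ftp.cwd(name)              # succeeds -> it's a directory
                    remote_listing(ftp, name, acc)
                except error_perm:
                    # MDTM replies '213 YYYYMMDDHHMMSS' for regular files
                    acc[name] = ftp.sendcmd("MDTM " + name).split()[1]
            return acc

        ftp = FTP("ftp.example.com")           # hypothetical server
        ftp.login("user", "password")
        remote = remote_listing(ftp, "/", {})

        snapshot = "/backups/snapshot"         # hypothetical wget mirror root
        local = set()
        for root, _dirs, files in os.walk(snapshot):
            for f in files:
                local.add("/" + os.path.relpath(os.path.join(root, f), snapshot))

        print("added on server:  ", sorted(set(remote) - local))
        print("deleted on server:", sorted(local - set(remote)))
        # Anything whose MDTM stamp is newer than the snapshot date
        # has changed and should be re-downloaded.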

  • Problems sending SMTP email to large systems such as Gmail

    - by Martel
    I maintain a mail server. Recently messages sent to valid recipients on gmail, yahoo, and now roadrunner email addresses are bounced with similar messages. Here's one from gmail:

        The message, sent by [email protected], can not be delivered to following recipient(s): *recipient*@gmail.com
        There was a fatal SMTP error. Fatal DNS error: exchanger alt4.gmail-smtp-in.l.google.com. does not exist

        Delivery History Follows:
        [DLVR 000020 19-12-12 14:21:21] Delivering item 5573
        [DLVR 000020 19-12-12 14:21:21] Resolving MX records for domain gmail.com
        [DLVR 000020 19-12-12 14:21:21] Retrieved 5 MX records for domain gmail.com
        [DLVR 000020 19-12-12 14:21:21] Delivering mail to 1 recipient(s) at domain gmail.com using exchanger gmail-smtp-in.l.google.com.
        [DLVR 000020 19-12-12 14:21:33] Host gmail-smtp-in.l.google.com. does not appear to exist...
        [DLVR 000020 19-12-12 14:21:33] Will try next exchanger
        [DLVR 000020 19-12-12 14:21:33] Delivering mail to 1 recipient(s) at domain gmail.com using exchanger alt1.gmail-smtp-in.l.google.com.
        [DLVR 000020 19-12-12 14:21:45] Host alt1.gmail-smtp-in.l.google.com. does not appear to exist...
        [DLVR 000020 19-12-12 14:21:45] Will try next exchanger
        [DLVR 000020 19-12-12 14:21:45] Delivering mail to 1 recipient(s) at domain gmail.com using exchanger alt2.gmail-smtp-in.l.google.com.
        [DLVR 000020 19-12-12 14:21:57] Host alt2.gmail-smtp-in.l.google.com. does not appear to exist...
        [DLVR 000020 19-12-12 14:21:57] Will try next exchanger
        [DLVR 000020 19-12-12 14:21:57] Delivering mail to 1 recipient(s) at domain gmail.com using exchanger alt3.gmail-smtp-in.l.google.com.
        [DLVR 000020 19-12-12 14:22:09] Host alt3.gmail-smtp-in.l.google.com. does not appear to exist...
        [DLVR 000020 19-12-12 14:22:09] Will try next exchanger
        [DLVR 000020 19-12-12 14:22:09] Delivering mail to 1 recipient(s) at domain gmail.com using exchanger alt4.gmail-smtp-in.l.google.com.
        [DLVR 000020 19-12-12 14:22:21] Host alt4.gmail-smtp-in.l.google.com. does not appear to exist...
        [DLVR 000020 19-12-12 14:22:21] Fatal error - host alt4.gmail-smtp-in.l.google.com. does not exist. Will bounce...
        [DLVR 000020 19-12-12 14:22:21] Bouncing to sender using bounce address [email protected]...

    Sometimes these emails get through, other times not. I'm at a loss to explain it.

  • Drive configuration for 5 large databases

    - by Mr. Flibble
    I've got 5 databases, each 300GB, currently on a RAID 5 array consisting of 5 drives. All the databases are used heavily, at the same time, so drive speed is an issue. Would I see better performance if I got rid of the RAID 5 configuration and just put each database on a separate drive? The redundancy provided by RAID 5 is not necessary, due to mirroring elsewhere. Would the server then be able to perform reads/writes to the different database drives in parallel, more so at least than when they're in RAID? This is all on Windows 2003 / SQL 2008.

  • Cooling a large laptop

    - by sazabi02
    I got my first laptop early this year, and I haven't the slightest idea how to make it cooler. There are no real problems when I don't play graphics-intensive games, but when I play games like Dragon Age the temps rise from 55 to 85. I'm concerned, as a friend tells me that HP laptops aren't reputed to last long when it comes to heat. BTW, I've already bought a cooling pad with 3 fans, and it didn't do much that elevating the laptop and pointing an electric fan at it didn't do before. For reference, this is a 17-inch HP dv7-3085dx entertainment notebook.

  • Increase text size in Ubuntu 10.04 due to having large resolution/monitors

    - by Sridhar Ratnakumar
    I have 24" dual monitors with 1920x1080 resolution on both of them. Consequently the text appears so small. I use the following text-intensive applications frequently: Web browser (Google Chrome) IDE (Komodo) Terminal (Gnome Terminal) Email (Thunderbird) I can configure text size on IDE, Terminal and Email. But for Chrome, it is not a good idea to set proportional font size because often one wants to see the entire (not just proportional fonts) site to be zoomed. So I am asking: Is it possible to increase DPI in Ubuntu (much like on Windows) so as to increase the text size across all apps? OR Is it possible to set permanent 'zoom' in Google Chrome, using a third-party extension maybe? I am using Ubuntu 10.04 (Lucid Lynx)

  • Powershell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It outputs to STDOUT, which I redirect on the command line to a file. To optimize it, I'm making changes and then seeing how long it takes to run. On Linux I use the "time" command for this. On Windows, the best way to time a program seemed to be PowerShell's Measure-Command. This seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and does work correctly in a normal cmd console. Is this a bug or misfeature in PowerShell or Measure-Command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.
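    A small sketch in Python of the examination described above, comparing a file produced under cmd redirection with one produced under PowerShell redirection (filenames are placeholders):

        # Older PowerShell re-encodes redirected output, which shows up as
        # a UTF-16LE BOM (0xFF 0xFE) at the start and CRLF line endings.
        for fname in ("out_cmd.bin", "out_powershell.bin"):
            with open(fname, "rb") as f:
                data = f.read()
            print(fname,
                  "size:", len(data),
                  "UTF-16LE BOM:", data[:2] == b"\xff\xfe",
                  "CR bytes:", data.count(b"\r"),
                  "LF bytes:", data.count(b"\n"))

    If the PowerShell copy is roughly twice the size with a BOM up front, the corruption is happening in the shell's re-encoding of the redirected stream, not in the Perl script.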

  • How to edit a really large file in Windows [closed]

    - by Ankur
    Possible Duplicate: Text Editor for very big file - Windows

    Not a programming question, I know, but it's related to a program I am writing, and it's probably a problem only likely to be encountered by programmers. I have a really big text file which I need to edit - I just need to delete the first line. None of the standard Windows programs can handle the 200MB+ file. What is the best way to edit it?
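    For the specific case of dropping the first line, a short script that streams the file once is usually the simplest answer, since no editor ever has to load the whole thing. A minimal sketch in Python (filenames are placeholders):

        import shutil

        with open("big.txt", "rb") as src, open("big-fixed.txt", "wb") as dst:
            src.readline()                 # read and discard the first line
            shutil.copyfileobj(src, dst)   # stream the rest in chunks

    The copy needs as much free disk space as the original file, but memory use stays constant regardless of file size.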

  • Removing DS_Store files and variants?

    - by Ron Gejman
    Hi, I am running an Ubuntu 10.04.1 LTS server. Frequently I open files on it from my Mac over AFP. Inevitably this creates .DS_Store files on the server (although for some reason they are named :2eDS_Store). However, it also creates variants of the DS_Store files, often named similarly to other files in the same directory. E.g.:

        ~$ ls
        total 60K
        -rw-r--r-- 1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
        drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
        lrwxrwxrwx 1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
        ...
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
        ...

    I've disabled creation of DS_Store files using:

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't continue to occur, but I really want to get rid of all of the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created, and how I can get rid of them all?
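    A minimal sketch of a cleanup pass, assuming every unwanted file is one whose name starts with the ':2e' escape (hex 0x2e is '.', i.e. AFP's encoding of a leading dot). The root path is a placeholder; comment out os.remove for a dry run first:

        import os

        root = "/home/tarakhovsky"                 # hypothetical starting point
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.startswith(":2e"):         # ':2e' == escaped '.'
                    path = os.path.join(dirpath, name)
                    print("removing", path)
                    os.remove(path)

    Directories such as :2eTemporaryItems/ would need a similar pass over the directory names once they are empty.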

  • How to convert eps file to a large jpeg image

    - by Anand
    Hello, I am using Linux and want to convert an EPS file to a JPEG file. I found that I can use the "convert" command, but the resulting image looks very small. I want to enlarge the JPEG with the -resize option, but it doesn't seem to work: the resulting image is pure black. Does anyone have the same problem? Here are more details:

    1. If I use

        convert -scale 1000x1000 your.eps your.jpg

    the resulting image looks like a low-quality image; the EPS vector image is not scaled properly.

    2. If I use

        convert -geometry 300% your.eps your.jpg

    I get a pure black image.

    Here is my pdf file: 2shared.com/document/RXl2Be-g/askquestions.html and my eps file: 2shared.com/file/qrmwKegj/askquestions.html. Thank you very much for your help!
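    The usual explanation is that ImageMagick rasterizes the EPS at its default density (72 dpi) before any -scale or -resize is applied, so the fix is to raise -density when reading the file rather than enlarging the small raster afterwards. A hedged sketch, wrapping the same convert binary from Python (filenames are placeholders; ImageMagick and Ghostscript must be on the PATH):

        import subprocess

        subprocess.run(
            ["convert",
             "-density", "300",        # rasterize the EPS at 300 dpi first
             "your.eps",
             "-resize", "1000x1000",   # then scale the high-resolution raster
             "your.jpg"],
            check=True,
        )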

  • Using LDAP Attributes to improve performance for large directories

    - by Vineet Bhatia
    We have an LDAP directory with more than 50,000 users in it, and our LDAP vendor suggests a maximum of 40,000 users per LDAP group. We have a number of inactive users, and those are being purged, but what if we don't get below 40,000 users? Would switching to a multivalued attribute at the user-record level, instead of using LDAP groups, yield better performance during authentication, adding new users, etc.? I know most server software (portals, application servers, etc.) uses LDAP groups. But we have a standardized web service interface for access control instead of relying on server software to map LDAP groups to security roles. Each application uses this common "access control web service". Security roles are used within each application to build the fine-grained ACLs used within each enterprise application.
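    A hedged sketch of the two lookup styles being weighed, using the third-party ldap3 package; all DNs, attribute names, and the server address are hypothetical:

        from ldap3 import Server, Connection

        conn = Connection(Server("ldap://ldap.example.com"),
                          "cn=svc,dc=example,dc=com", "secret",
                          auto_bind=True)

        # Group-based check: the server evaluates a 'member' attribute
        # that may hold tens of thousands of values.
        conn.search("cn=app-users,ou=groups,dc=example,dc=com",
                    "(member=uid=jdoe,ou=people,dc=example,dc=com)",
                    attributes=["cn"])

        # Attribute-based check: one read of the user's own entry
        # returns every role in a single multivalued attribute.
        conn.search("uid=jdoe,ou=people,dc=example,dc=com",
                    "(objectClass=*)",
                    attributes=["appRole"])   # hypothetical role attribute
        print(conn.entries)

    Whether the attribute-based form is actually faster depends on how the directory indexes large multivalued attributes, which is worth confirming with the vendor before restructuring.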

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running Apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size plus 200 MB. If 2 users simultaneously start uploading, my server crashes. Now, it is the virtual memory size which is increasing, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya

  • Printing Large PDF from Outlook 2003

    - by mrach
    Whenever I try to print an attached oversized PDF sheet (larger than letter size) from Outlook, the print is cut off. How can I configure Outlook to automatically fit the PDF to the page size without having to open it up in Adobe Reader?

  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded (e.g. c:\temp, d:\data, etc.) in the batch file. Initially I moved these to a text file I could call from a batch file, e.g.:

        for /f "tokens=1,2 delims==" %%R in (config.txt) do (
            if %%R==bdata set bdata=%%S
            if %%R==cdata set cdata=%%S
        )

    The config.txt file contains these values:

        bdata=c:\temp
        cdata=d:\data

    I realized that each time I needed to create a new variable, I would have to update the config.txt file as well as the config.bat file. So I decided to move all the values to just the config.bat file, as follows:

        set bdata=c:\temp
        set cdata=d:\data

    I then updated each of the existing batch files to call the variables rather than the hard-coded paths, and added the following lines of code to each batch file except config.bat (the only additional line added to config.bat is @echo off):

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat

    I then have another batch file, start.bat, that centralizes calling all the batch files in sequence. The reason I am using start /wait is that there have been instances where delete.bat ran before compress.bat had had an opportunity to finish:

        start /wait compress.bat
        start /wait validate.bat
        start /wait delete.bat

    Questions:

      • Is this the best way to centralize values and, if not, what is a better way?
      • Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files?
      • Do all the batch files have to have @echo off, or is it sufficient for just config.bat?
      • Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using the said command?
      • All the batch files have different functions (e.g. move, delete, etc.) but all use %%a or %%b. Is this okay? For example, validate.bat has the code

            for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

        and delete.bat has the code

            for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.

  • Keeping packages on a large number of openSUSE servers updated

    - by Kamil Kisiel
    Question for anyone out there managing a network of openSUSE machines: how do you keep track of and apply updates? I know about YaST Online Update (YOU), but it seems more geared towards keeping a single machine up to date; it doesn't seem to scale well to a larger number of machines. How do you keep your machines updated? Our network is fairly heterogeneous in terms of package installation, as the servers are mostly infrastructure machines with varying roles. I know that SUSE Linux Enterprise has tools to manage updates network-wide, but upgrading to that is currently not an option for budget reasons.

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records in each. That's 40 billion row combinations (200,000 × 200,000), which would explain part of how the log grew to 200GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide, since we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with truncate_only, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, writes:

        Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?
