Search Results

Search found 17727 results on 710 pages for 'large apps'.


  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers, one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server. I have another computer with a resolution of about 1200x1024 that I use to VNC into the server (using the default Ubuntu VNC viewer). Everything works fine except that there are annoying scrollbars in the viewer, because the server's desktop resolution is so much higher than the viewer's. Is there a way to: 1) Scale the server's desktop down to the viewer's resolution. I know there will be a loss of image quality, but I am willing to try it out; this should work something like the way Windows Media Player or VLC scales down its window (with some interpolation of pixels). 2) Automatically shrink the resolution of the server to the client's when I connect and scale it back when I disconnect. This seems like a less attractive solution. 3) Any other solution that gurus out there use? I am sure someone has experienced this before (annoying scroll bars), so there must be a solution out there.
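
    A hedged sketch of the second option (switching the server's resolution around the VNC session); the mode names are only examples and depend on what xrandr actually lists on that machine. For the first option, the stock Ubuntu server (vino) cannot scale its output, but x11vnc can (e.g. x11vnc -display :0 -scale 2/3), if swapping servers is acceptable.

        # on the server, before connecting from the small screen
        xrandr                  # list the modes the X server knows about
        xrandr -s 1280x1024     # pick one that fits inside the viewer
        # ... work over VNC ...
        xrandr -s 1920x1600     # restore the native mode after disconnecting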

    Read the article

  • compose-key mappings differ between gtk and qt apps

    - by intuited
    I'm noticing that there is an inconsistency in the output of one of the compose-key combos. When I type ( [Compose] . . ) under Chrome, gedit, gnome-terminal, or roxterm I get the character '˙'. This is a small raised dot: $ echo -n '˙' | xxd 0000000: cb99 .. When I type the same combo under konsole, yakuake, or kate, I get the character '…'. This is an ellipsis: $ echo -n '…' | xxd 0000000: e280 a6 ... This is not a font issue: if I copy-paste the characters from an app using one toolkit to an app using the other, its appearance is maintained. I use a few other combos pretty regularly and they seem to work consistently across toolkits. I think this is a pretty recent phenomenon. I upgraded from Ubuntu 8.10 to 9.10 fairly recently so this might be related. I'm not sure if this will reoccur if I restart X, and I'd rather not find out. Can someone explain how this is possible, and what I can do to resolve it? I'd like to have the ellipsis appear in all apps when that combo is entered.
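
    A sketch of one possible fix, on the assumption that GTK apps are using their built-in compose table while Qt apps read the X one; the ~/.XCompose line and the environment variable are untested guesses for this particular setup.

        # define the combo explicitly for anything that honours ~/.XCompose
        printf '%s\n' 'include "%L"' '<Multi_key> <period> <period> : "…"  U2026' >> ~/.XCompose
        # ask GTK apps to use the X input method, and with it the X compose table
        echo 'export GTK_IM_MODULE=xim' >> ~/.profile

    A fresh login (or at least restarting the affected apps with the variable set) would be needed before the change takes effect.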

    Read the article

  • Sharing large (multi-Gb) files with clients

    - by Tim Long
    I wasn't sure if this was the best place for this question, but I think it is squarely in the realm of the IT admin, so that's why I put it here. We need to share large files (several gigabytes) with external clients. We need a simple way of reliably and automatically publishing these files so that clients can then download them. Our organization has Windows desktops and a Windows SBS 2011 server. Sharing from our server is probably suboptimal from the client's perspective, because of the low upstream bandwidth of typical ADSL (around 1 Mbps) - it would take all day (9 hours for a 4 GB file) for the client to download the file. Uploading to a third-party server is good for the client but painful for us, because we then have to deal with a multi-hour upload. Uploading to a third-party server would be less problematic if it could be made reliable and automatic, e.g. something like a Groove/SharePoint Workspace - simply drop the file in and wait for it to synchronize - but Groove has a 2 GB limit, which is not big enough. So ideally I'd like a service with the following attributes: Must work for files of at least 5 GB, preferably 10 GB. Once the transfer is started, it must be reliable (i.e. not sensitive to disconnections and service outages) and completely automatic. Ideally, the sender would get a notification when the transfer completes. Has to work with Windows-based systems. Any suggestions?

    Read the article

  • Exim and receiving email with large recipient lists

    - by AceJordin
    I have Exim4 running on Debian configured to receive mail on multiple domains. Exim is set to forward all email that is received to one of the domains to another box. This box is configured with a catchall mailbox that everything goes in. My issue is that when an email is sent to the domain, which contains a large amount of addresses (all to the same domain, but different users), Exim will receive the single email over multiple connections. This means that the catchall mailbox receives multiple copies of the single email all containing the full recipient list. For example, I was able to reproduce it by sending an email from my gmail account that contained 500 recipients (eg [email protected]; [email protected]; [email protected]; etc. for a total of 500). Exim received the message as 20 messages (25 recipients per; appears to be a gmail server setting). So the catchall mailbox received 20 messages, each containing all 500 addresses. I'm pretty sure I understand why this is happening but is there any way I can configure Exim to only receive it once, or to combine it into one? Is there anything that can be done on my end, or am I at the mercy of the sending email server? This is causing havoc with a process that polls the catchall mailbox and parses each recipient in each email.
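
    Merging the copies inside Exim is not really possible when the sending server splits the message itself, but deduplicating before the polling process sees them may be workable. A rough sketch using the classic formail duplicate-killer recipe, assuming the box that holds the catchall mailbox delivers through procmail (the cache file name and size are illustrative):

        # placed before the rule that files messages into the catchall mailbox
        :0 Wh: msgid.lock
        | formail -D 8192 .msgid.cache

    Copies of a split message share the same Message-ID, so only the first one reaches the mailbox; since each copy already carries the full recipient list, nothing is lost.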

    Read the article

  • Two large, linked Excel files take 30 minutes to save, except in VMWare environment

    - by Gerald L
    I support some tax consultants who love to use Excel when they should probably be using Access. Anyway, they have created two Excel files, A and B. File B has cells linked to file A. File A is 27 MB and file B is 16 MB. One worksheet has roughly 1 million rows, and there is another worksheet doing a whole bunch of SUMIFs on the 1 million rows. Not the best idea, but whatever. Both Excel files open and recalculate within a reasonable amount of time (1-2 minutes). For files that large, this is acceptable. Here is the problem: once you change a cell and save file B, it takes a solid 30 minutes to save the file, and the processors are going full speed. I've tried this on 6 different machines, all running Windows XP SP3 with Office 2007 SP2 and all patches. The specs vary from one machine with 512 MB of RAM to a machine with 4 GB of RAM and quad processors. Same result every time. Here is the clincher: if I do this same save operation on a VMware virtual machine, the file gets saved in 1 minute. I've tried this with my ESX servers at the office, my Mac Fusion at home, and VMware Workstation at the office. It does not matter how much RAM the virtual machine has... it saves in about 1 minute every time. Does anybody have any idea why this is happening and how to fix it?

    Read the article

  • Copy a large LVM volume (14TB) from one server to another

    - by bruce
    Recently I have had to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B. Server A:

        [root@AVDVD-Filer ~]# df -h
        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/vg_avdvdfiler-lv_root    16T   14T  1.5T  91% /
        tmpfs                               3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                   194M   23M  162M  13% /boot
        /dev/mapper/vg_avdvdfiler-test      2.3T  201M  2.1T   1% /test
        /dev/sr0                            3.3G  3.3G     0 100% /mnt

    Server B:

        [root@localhost ~]# df -h
        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-LogVol00        20G  2.5G   16G  14% /
        tmpfs                               3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                   194M   23M  162M  13% /boot
        /dev/mapper/VolGroup00-LogVol00      16T  133M   15T   1% /xiangao/lv1
        /dev/mapper/VolGroup00-LogVol01     4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds video files (avi, wmv, mp4, etc.) averaging about 500 MB each. I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS and then copying with cp, but that clearly failed. Because the LVM volume is so big, I don't have a good idea of how to approach this, so I'm hoping for a good solution here. My English is not great - sorry, and thanks everyone!
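
    A minimal sketch of one common approach - rsync over SSH, which can simply be re-run after an interruption and only transfers what is still missing (the address for server B and the source path are placeholders):

        # run on server A; 192.0.2.20 stands in for server B's real address
        rsync -av --partial --progress /path/to/videos/ root@192.0.2.20:/xiangao/lv1/

    On a quiet LAN a tar stream piped through ssh or netcat is often faster for the first pass, but it is not restartable the way rsync is.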

    Read the article

  • Better approach to archiving large amounts of original video footage using optical media (DVD/Blu-ray)

    - by Rob
    This question is to share my experience as well as ask for suggestions for better methods. Along with 2 friends, I completed the making of a short documentary film in 2006. Clip is at: http://www.youtube.com/mediamotioninvision The film was edited in Adobe Premiere Pro 1.5 on Windows XP. More details and screenshot here: http://www.flickr.com/photos/smilingrobbie/1350235514/ (note this is not intended to be a plug, we've moved on from this initial learning-curve project ;) ) The film is in 4:3 standard definition 720x576 PAL format. As well as retaining the final 30-minute film, I wanted to keep all the original files that were assembled together to make the film. The footage was 83.5 GB, so I archived it to over twenty 4.7 GB DVD recordables in the original .avi format (i.e. data DVD-ROM format, NOT DVD-Video MPEG-2). Some .avi DV video files were larger than 4.7 GB, so I used 7-Zip to split them (here is a guide as to how to do that: http://www.linglom.com/2008/10/12/how-to-split-a-large-file-using-7-zip/ ). To recombine them, a DOS shell command like this would do the job: copy /b file.avi.* file.avi where .* is a wildcard to include all the split parts, e.g. 001, 002...00n, assuming they are all in the same directory path folder; file.avi is the recombined file, identical to the original. Later on, I bought an LG BE06 LU10 USB 2.0 Super-multi Blu-ray burner and archived the footage to two 50 GB BD-R DL discs, again in the original format, written as files to a BD-R in the BD-R BD-ROM UDF format readable by PC/Mac etc., NOT the Blu-ray video/film format. This seems to be a good solution for me, because: the archive is in a robust, reasonably permanent, non-volatile medium, i.e. DVD recordable / Blu-ray (debates about the stability of optical media organic chemical dye compounds/substrates aside); and the format of the archive is accessible by open-source tools or just plain Windows Explorer, and it's not in a proprietary format. I just thought I'd ask folks for their experience on better methods, if such exist.

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise, 16 GB of RAM, and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including: gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin); bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin); WinRAR 3.90; 7-Zip 4.65 (7za.exe). I've attempted to use WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either - no matter the tool I've tried, it ends up choking on the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning
        Creating archive h:\dbatmp\zip\db-20100419_1228.7z
        Compressing  db-20100419_1228.bak
        System error: Unspecified error
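
    Since the sticking point is the flaky pipe rather than compression as such, a restartable copy may be worth trying alongside the archivers. A hedged example with robocopy (built into Server 2008, available for 2003 from the Resource Kit tools; the share name is made up):

        robocopy H:\dbatmp \\testbox\backups db-20100419_1228.bak /Z /R:20 /W:30
        rem /Z = restartable mode, /R = retries per file, /W = seconds to wait between retries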

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious perl scripts and OpenBSD's pf. pf is great in that you can provide it nice tables of IP addresses and it will efficiently handle blocking based on them. However for various reasons (before my time) they made the decision to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious over adding that many rules into an iptable. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS which whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
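
    For reference, a sketch of what the ipset approach looks like once a recent enough iptables/ipset pair is in place; the command syntax shifted between ipset releases, so treat this as illustrative only:

        ipset create scrapers hash:ip                                # older releases: ipset -N scrapers iphash
        iptables -I INPUT -m set --match-set scrapers src -j DROP    # older releases: -m set --set scrapers src
        ipset add scrapers 203.0.113.45                              # one set entry per offender, no per-IP iptables rule

    The win is that the set lives in a hash, so membership tests stay cheap no matter how many thousands of addresses are in it.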

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Win7 I use Wordpad, and I really like it. For my needs it's lean and fast, yet has the formatting functionality I'm after when working on my TXT/RTF files on a daily basis. I don't intend to change text editors. There's a really bad bug which has ALWAYS plagued me. If you have a large file in the clipboard, like a 238 MB FLAC file, and you accidentally paste it into Wordpad for whatever reason, it hangs the application for a VERY long time (like 2 hours; it depends on how big the file is, because it tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the item has finished pasting into Wordpad (it eventually drops the file's icon into Wordpad just like how it appears in Windows Explorer). It's a Windows bug, a Wordpad bug. Is there some solution for this? Or is the problem fixed in Windows 8 (if anyone can tell me)? I'm not going to try out Win8 myself merely to answer this question - that's what I'm asking it on Super User for! I'm really hoping it's one of those little-yet-big things that they've fixed in Win8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Win8 handy and can test this. :)

    Read the article

  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so that domain users get Gmail and Calendar, etc. But I also want to be able to send applicative notifications to users outside the domain via email (e.g. "some commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.). But I can't send applicative message in this way. I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain. But my understanding of this is rather superficial, so someone please correct me if I'm wrong. But what sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it this way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a sub-domain, so as not to conflict with the bare domain being managed by google? Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either google's servers or any server with a dns name ending with mydomain.com? "v=spf1 ptr mx:google.com mx:googlemail.com ~all" How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's ip address, does it mean that reverse lookup will only allow emails sent from [email protected]?
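
    As a hedged sketch only (the include mechanism and host names should be checked against Google's current documentation), the usual shape of the records is an A record for the sending host, an SPF TXT record that authorises both Google's servers and that host, and a PTR record for the server's address - set by whoever owns the IP block - pointing back at the same host name:

        mydomain.com.              IN TXT  "v=spf1 include:_spf.google.com a:mailsender.mydomain.com ~all"
        mailsender.mydomain.com.   IN A    192.0.2.10
        ; reverse zone, managed by the ISP or host that owns 192.0.2.10:
        10.2.0.192.in-addr.arpa.   IN PTR  mailsender.mydomain.com.

    Note that reverse DNS only ties the IP to a host name; it does not restrict which addresses in the domain the box may send as.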

    Read the article

  • running chkdsk /F on a large mounted NTFS image file gets BSOD (Windows Vista)

    - by Citizentools
    Using ddrescue, I've created ISO files from the C: and D: drives on my Windows XP laptop's hard disk (after the laptop stopped booting and chkdsk etc. wouldn't fix it). I was able to mount the 60 GB D.iso file using OSFMount, and successfully recreated the D: drive on another laptop. The C.iso image is more problematic. ddrescue left about 3 MB unrecovered out of 85 GB total, after multiple passes (no big worries about this), and I'm able to mount it with OSFMount on a Windows Vista laptop. However, when I run chkdsk /F /V on the mounted drive (which was mounted as H:), I consistently get a blue screen (BSOD). CHKDSK makes it through the first three passes, including index fixing and security descriptor fixes, without errors, but triggers a BSOD when it attempts to fix the volume records or bitmap. If I attempt to fix the drive by clicking on Properties-Tools-Error checking-Check Now-Automatically fix file system errors, I get an alert box reading "Windows was unable to complete the disk check." I'd try a tool other than OSFMount, but it's the only thing I've found so far that will mount large ISO files, and it has worked for me up to now in this process. [Update 2011-11-13 18:41 EST] Just ran the same process using the original Windows XP laptop, with a different internal drive, and chkdsk worked like a champ. So the question is still interesting, but decidedly less urgent.

    Read the article

  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents and PST files in Outlook. PST files can be very large, and most people's are ~1-1.5 GB around here. So with PST files alone that is 200-300 GB of data needing to be transferred daily to a server for backup. Or compressing first, then transferring - but many of the machines are VERY old, and such a task would grind their computers to a halt. Isn't this the reason networks use things like VMware - to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough to the point we'd have to mandate it be done at night only? Or could we stagger them throughout the day? Really appreciate any input, thank you.
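
    A quick back-of-envelope check, assuming roughly 300 GB has to move inside an 8-hour overnight window:

        echo "$(( 300 * 1024 / (8 * 3600) )) MB/s sustained"    # prints 10, i.e. roughly 85 Mbit/s

    That is most of a 100 Mbit LAN held busy all night, every night, so staggering clients (or copying only changed files) would likely be necessary; on gigabit it is much less of a concern.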

    Read the article

  • Error 0x80073cf9 when installing or updating apps from windows store

    - by cmorse
    On my Windows 8 desktop I keep getting error 0x80073cf9 when I try to install or update an app from the Windows Store. In the installing apps pane it just says "This app wasn't installed -- view details", and when I select that it says "Something happened and this app couldn't be installed. Please try again. Error code: 0x80073cf9". I am using the built-in Windows firewall and antivirus. And my laptop is able to install updates when it is on the same network. This is what winstore.log shows when I try to update the Maps app:

        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] ***********************************************************************
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Process name: C:\Windows\system32\taskhost.exe
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] User name: Desktop\User
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Computer name: desktop
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Windows build: 9200.16424.amd64fre.win8_gdr.120926-1855
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Client version: 615
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011428] CWSTileUpdateHandler::Worker: Broker is handling badge updates.
        2012-10-18 15:31:47.554, _Info_ WS [00002572:00008200] CProgressDispatcher::OnProgress: AppId = 97a2179c-38be-45a3-933e-0d2dbf14a142, PFN = Microsoft.BingMaps_8wekyb3d8bbwe, InstallPhase = 1, PhasePercent = 0, TotalPercent = 0
        2012-10-18 15:31:47.558, _Warning_ WS [00002572:00008200] CDownloadProgress::IDownloadCompletedCallback::Invoke: Download complete result 0x80073cf9 for Microsoft.BingMaps_8wekyb3d8bbwe
        2012-10-18 15:31:47.559, _Error_ WS [00002572:00008200] CActionItem::_DoDownload: Download failed for 97a2179c-38be-45a3-933e-0d2dbf14a142, hr=0x80073cf9
        2012-10-18 15:31:47.560, _Info_ WS [00002572:00008200] CActionItem::_DoDownload: Notifying progress handlers of download failure for 97a2179c-38be-45a3-933e-0d2dbf14a142, hr=0x80073cf9
        2012-10-18 15:31:47.560, _Error_ WS [00002572:00008200] CProgressDispatcher::OnError: PFN = Microsoft.BingMaps_8wekyb3d8bbwe, InstallPhase = 1, hrError = 0x80073cf9

    Read the article

  • Server 2008 Print Redirection is failing, but only on 16-bit apps

    - by ian
    I'm the main programmer for SoEasyAccounting, and we are installing to Server 2008 Standard Service Pack 1. We install to 2003 with no problems. It is important to understand that the print failure only happens in certain circumstances. Note: we use a standard Windows printer selection box to choose the printer. Terms used: Superbase = a program language that uses ntvdm.exe (the Windows process hosting 16-bit apps); Local Printer = printing to a driver loaded onto the Server 2008 machine; Redirected Printer = printing to an automatically established remote printer through an RDP connection. Printing scenarios: Server 2008 - 1: Print from Notepad to a Redirected Printer = works. Server 2008 - 2: Print from Superbase to a Local Printer = works. Server 2008 - 3: Print from Superbase to a Redirected Printer = fails. Server 2003 - 4: Print from Superbase to a Redirected Printer = works. Results: the print attempt leaves a "Local Downlevel Document" entry in the driver's print queue, but nothing prints, and Superbase reports that the "Print command failed". Event Viewer shows no related issues. Any help greatly appreciated - so far I've spent 2 days trying to resolve this, and here goes my weekend :( unless someone has an idea :). Things I have tried: i. switching Easy Print on/off; ii. loading a copy of the redirected driver on the server.

    Read the article

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and mp3 downloads (some of which are fairly large - up to 200 MB). I am running lighttpd on the servers, on Linux (Ubuntu 64-bit). Everything is fine, but under high load the server is not accessible (or very slow - even sshing in takes a while), and I am guessing this is due to a huge number of mp3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages - PHP etc. - to always work, but downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }

    Read the article

  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2 Terabyte drive (i.e., using LUKS/DMcrypt). In order to not have to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though by now LVM is regarded as being stable, I don't want to bother having more room for errors by introducing unnecessary layers of complexity. For both Ubuntu partitions I want encrypted ext3 with the default blocksize of ext3 (4k?). Thoughts: When I look at most partition schemes here on this site or elsewhere, I usually see at most about 400 or 500 GB partitions (maybe I didn't see enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, like about 1 TB, harder to handle for the OS or filesystem driver or at some other level? If I make the partition too large, will it be harder to repair in case of corruptions? Are there some default settings for ext3 that I should change for 1 TB partitions? Question: What maximum partition size for ext3 do you recommend and why? Thanks!
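
    Not a recommendation, just a hedged illustration of the knobs people commonly adjust when creating a large ext3 /home; the device name is hypothetical, and -T largefile4 only makes sense if the partition will mostly hold large files:

        mkfs.ext3 -m 1 -T largefile4 /dev/mapper/home_crypt   # 1% reserved blocks, fewer inodes
        tune2fs -c 0 -i 0 /dev/mapper/home_crypt              # no forced periodic fsck at boot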

    Read the article

  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, in both OS X and Ubuntu. I have four or five web apps installed, and they appear in the "New Tab" page just fine. The problem is that, before, when the feature first became available in the Dev Channel, the actual tabs hosting the web apps received special treatment: they would have a 3D Dock-like look, and (more importantly) the tab bar would be hidden while using that tab. Sometime in the last few weeks, however, it seems that the special treatment just disappeared with one of the daily updates. The web apps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that web app, and they use the right icons; but they've basically become indistinguishable from a regular pinned tab. The two special features mentioned above have disappeared, on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?

    Read the article

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 Gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images / documents. We need to accomplish the following: Be able to quickly backup and restore only the data tables (sans images) to our test server for debugging and testing purposes. In the event of a catastrophic database failure, restore the data tables only to get most of the application up and running ASAP. Then, restore the images table when possible. Backup the database within the allotted nightly time window (a few hours). My questions are: Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, filestream, or something else? How do other shops backup their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and backup from there?
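
    If the images end up in their own filegroup, backing the two parts up separately (which is what the first two goals amount to) looks roughly like this - a sketch via sqlcmd with invented database, filegroup and path names; the restore side (piecemeal restore with PARTIAL/NORECOVERY, and its edition requirements) needs checking against the SQL Server documentation:

        sqlcmd -E -Q "BACKUP DATABASE BigDb FILEGROUP = 'DataFG' TO DISK = 'E:\Backups\BigDb_data.bak'"
        sqlcmd -E -Q "BACKUP DATABASE BigDb FILEGROUP = 'ImagesFG' TO DISK = 'E:\Backups\BigDb_images.bak'"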

    Read the article

  • How can I get around the Windows 8.1 Store (& Metro apps) not working with UAC disabled

    - by Enigma
    I have UAC disabled because it is annoying and causes more problems than it could ever possibly solve, at least for me. Here is yet another problem, and it seems to be due to a recent update, as I don't remember it in the past. Even with an MS account, I can't use the Store because UAC is disabled. How can I get around this? Short term I can just enable it, reboot, use the Store, disable it, reboot and be on my way, but there has to be a better way (other than MS getting their software completely right - like that will happen anytime, ever). Edit: Apparently this is far more of an issue than I originally thought. Now many (every?) Metro apps require UAC. Anyone aware of the update this got rolled in with? Thankfully Netflix isn't affected, which is the only Metro app I use at the moment. What I see in the Event Log: "Activation of app winstore_cw5n1h2txyewy!Windows.Store failed with error: This app can't be activated when UAC is disabled. See the Microsoft-Windows-TWinUI/Operational log for additional information."
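
    The enable/reboot/disable dance can at least be scripted: flipping EnableLUA is what the UAC control panel slider ultimately changes, although some builds may check additional policies for Store apps, so treat this as a sketch:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 1 /f
        shutdown /r /t 0
        rem use /d 0 to turn UAC back off afterwards (another reboot is required each way)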

    Read the article

  • Long 'pause' after copying large files on Windows Server 2008

    - by Ian
    I have a mystery regarding pauses after file copies on Windows Server 2008 (and other releases). When copying large files, like VHDs, to locally attached USB disks, I often see a long pause after the copy has completed 100%. As an example: robocopying VHD files. The bytes read/written count matches the VHD file size and robocopy shows 100%, but it pauses for several minutes. If I do nothing it will continue, but I will have to wait for quite some time - about the same amount of time as it took to get to 100%. The bytes read/bytes written counters for robocopy do not change. My first thought was that the AV had to scan it, but I'm looking at a machine right now which doesn't have an AV installed and this is occurring, so that can't be it. No other processes are showing their read/write byte counts going up. The behavior is the same if I use the copy command or xcopy. I've seen this on other systems but have never worked out what the cause is. Anyone got any suggestions as to what might be going on?

    Read the article

  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5 GB RAM, an SSD with 200 MB/s (read/write), and a dual-core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Apache Lounge VC11 Server Built: Nov 21 2013 20:13:01
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

    I am serving large .zip and .iso files using mod_xsendfile (file size range 500 MB - 1.5 GB). The setup works and is running fine, but CPU usage is very unstable - it jumps between 10% and 90% all the time, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is outputting traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Can switching to Linux help me achieve better CPU usage?
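
    One knob worth checking for this workload is how Apache hands the files to the network; a hedged httpd.conf fragment (directive behaviour on Windows differs from Unix, so test before relying on it - the path is hypothetical):

        EnableSendfile On              # let the kernel push large reads instead of buffering in the worker
        EnableMMAP Off                 # memory-mapping multi-GB files tends to hurt more than help
        XSendFile On                   # already in use via mod_xsendfile
        XSendFilePath "D:/downloads"   # directory the ISO/zip files live under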

    Read the article
