Search Results

Search found 19539 results on 782 pages for 'pretty print'.


  • Where in the stack are Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for them. I became curious as to where Microsoft implemented this technology in the stack. I can imagine a very naive implementation in Windows Explorer where, when you double-click an exe or other blocked file type, Explorer checks it against the policy. I call this naive because it obviously wouldn't protect against someone typing something in a CMD window, or worse, Adobe Reader launching an external application. On the other hand, I can imagine that Software Restriction Policies could be implemented deep in the stack, almost at the metal: the low-level loader would load the questionable file into memory, but mark that memory as non-executable data in the memory manager. I'm pretty sure Microsoft did not do the most naive implementation, because if I block Java using a path rule, Internet Explorer will crash if it attempts to load Java, which is what I want. But I'm not sure how deep in the stack it's implemented, and any insight would be appreciated.

    Read the article

  • Is there a high-quality natural-sounding text reader for the Mac?

    - by Another Registered User
    I'm reading about 150 pages of text on screen every day, and I will have to read about 15,000 in the upcoming months. No joke. The problem is this: I suffer from a sort of attention deficit hyperactivity disorder which forces me to read every sentence up to 10 times until I really get it. Mac OS X Snow Leopard has a built-in text reader voice named "Alex". Although it is already pretty good quality, I know there are far better natural-sounding voices out there. I have already heard voices that are absolutely amazing compared to Alex; they're so good that you can't tell the difference between a real person and a computer anymore. Alex still has this "metal factor" in his voice, which makes my ears hurt after 8 hours of listening. The next problem with Alex is that he never pauses after a sentence. It's also not possible to think about a sentence and then continue reading, or to have him repeat a sentence without tedious text selection and shortcut usage. Actually, the best tool I can imagine would read a sentence and move on to the next one after pressing a special key, OR repeat the previous one after pressing a special key. That would help so much! And if it came with one of those Bell Labs / AT&T / whatever super-natural voices, even better! But it would already be a great relief if there were simply a better tool to control Alex: to make him pause after sentences, or to speak big chunks of text sentence by sentence with fine-grained control over repetition and moving on. Is there anything?
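
    A rough sketch of the kind of control described above, using the built-in macOS say command (the voice name, speaking rate, and the naive sentence splitting are assumptions; adjust to taste):

        #!/bin/bash
        # Sketch: speak a text file one sentence at a time with macOS's "say".
        # Enter = next sentence, r = repeat the current one, q = quit.
        file="$1"
        # Very rough sentence splitting: break after ". " (perl is used because
        # BSD sed handles newlines in replacements awkwardly).
        perl -pe 's/\. /.\n/g' "$file" | while IFS= read -r sentence; do
          [ -z "$sentence" ] && continue
          while true; do
            say -v Alex -r 180 "$sentence"
            printf 'Enter = next, r = repeat, q = quit: '
            read -r key </dev/tty
            case "$key" in
              r) continue ;;      # speak the same sentence again
              q) exit 0 ;;        # stop entirely
              *) break ;;         # move on to the next sentence
            esac
          done
        done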

    Read the article

  • Foswiki: hide some topic info when editing in WYSIWYG mode

    - by Mica
    I have a Foswiki installation with a bunch of topic templates already defined. The problem is that when a user selects a topic template, they are presented with a bunch of extra information that they should not edit, and really should not even see. Is there a way to hide this content in the WYSIWYG editor? Example: the topic template looks like this:

        <!--
           * Foswiki.GenPDFAddOn Settings
           * Set GENPDFADDON_TITLE = <font size="7"><center>Foo</center></font>
           * Set GENPDFADDON_HEADFOOTFONT = helvetica
           * Set GENPDFADDON_FORMAT = pdf14
           * Set GENPDFADDON_PERMISSIONS = print,no-copy
           * Set GENPDFADDON_ORIENTATION = portrait
           * Set GENPDFADDON_PAGESIZE = letter
           * Set GENPDFADDON_TOCLEVELS = 0
           * Set GENPDFADDON_HEADERSHIFT = 0
        -->
        <!-- PDFSTART -->
        <!-- HEADER LEFT "Foo:Bar" -->
        <!-- HEADER RIGHT "%BASETOPIC%" -->
        <!-- HEADER CENTER " " -->
        <!-- FOOTER RIGHT "Doc Rev %REVINFO{"r$rev - $date " web="%WEB%" topic="%BASETOPIC%"}%" -->
        <!-- FOOTER LEFT "F-xxx Rev A" -->
        <!-- FOOTER CENTER "Page $PAGE(1)" -->

        Header 1
        foo etc. etc. etc

        <!-- pdfstop -->

    When the user selects the topic template, they get all of that in the WYSIWYG editor. I would like to hide it so that when the user selects the topic template, they only see:

        Header 1
        foo etc. etc. etc

    without any of the other markup.

    Read the article

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay running on bundled Tomcat, using Cluster Link and shared documents. Let's say the name of the public community is fubar, the friendly URL used is fubar.lipsum.com, and the port listening on each server is 8080. If I go to either server1:8080 or server2:8080 I get the default page for Liferay. How can I test fubar.lipsum.com on each node by going to the backend server directly, so I can verify each server? If I just test the URL, it goes to the load balancer; I wish there were a way to point the request at a specific backend node. I can add the friendly URL to my local machine's hosts file, and this seems to kind of work, but once something is called in the application it goes out again from the backend server, then uses SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
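
    A sketch of one way to hit a specific node while still presenting the virtual-host name (hostnames and port are taken from the question; the root path is an example):

        # Ask each backend directly, but send the friendly URL as the Host header
        # so Liferay resolves the right virtual host / community.
        curl -v -H "Host: fubar.lipsum.com" http://server1:8080/
        curl -v -H "Host: fubar.lipsum.com" http://server2:8080/

        # Newer curl can instead pin DNS for the friendly URL to one node:
        # curl -v --resolve fubar.lipsum.com:8080:SERVER1_IP http://fubar.lipsum.com:8080/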

    Read the article

  • Exchange 2007 relay from sendmail, message "Undelivered". Possible reasons?

    - by garlicman
    Note: this is my re-post from Stack Overflow. I've been messing with a test environment for security purposes where a DMZ RHEL5 sendmail server is used as a relay for an Exchange 2007 server. Exchange is working in the environment; I have Vista and XP VMs using Outlook on the domain to send e-mail to each other. I've been trying to simulate an external internet VM sending an e-mail to the DMZ sendmail relay, which forwards to the Exchange server. Before everyone thinks this is too big a problem/question: I've followed the sendmail/Exchange guides, and all I want to know is how I can determine why a relayed message/e-mail in Exchange is "Undelivered". Basically I send an SMTP message to the sendmail server, which relays it to my Exchange server. The /var/log/maillog shows the e-mail being relayed to Exchange:

        Nov 17 13:41:22 externalmailserver sendmail[9017]: pAHIfMuW009017: from=<[email protected]>, size=1233, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=[10.50.50.1]
        Nov 17 13:42:17 externalmailserver sendmail[9050]: pAHIfMuW009017: to=<[email protected]>, delay=00:00:55, xdelay=00:00:36, mailer=relay, pri=121233, relay=mailserver.xyz.local. [192.168.1.20], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    This is good, but the To address never receives the e-mail from Exchange. So I started poking around Exchange. In the "Message Tracking" Troubleshooting Assistant I queried the processed messages and found this (I had to copy and paste the cells... sorry for the format):

        2011/11/17 RECEIVE SMTP <[email protected]> "Undelivered Mail Returned to Sender" [email protected] [email protected] 192.168.100.10 MAILSERVER\DMZ Relay [email protected]

    I just want to know if anyone has any suggestions on why the DMZ Relay Connector I set up isn't relaying and is instead returning the forwarded e-mail to the sender as undelivered. My Exchange relay Receive Connector is pretty simple: the Exchange server's FQDN is set as the HELO response, all available IP addresses can receive relayed e-mail, and the IP address of my sendmail server is specifically set as a remote server.
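
    A hedged sketch of one way to narrow this down: talk SMTP to the Exchange receive connector directly from the DMZ box, bypassing sendmail, and see whether the recipient is accepted or rejected. swaks is a generic SMTP testing tool, not something from the original post, and the addresses are placeholders for the same From/To as the failing message:

        # From the sendmail relay, hand an equivalent message straight to Exchange.
        # If RCPT TO is rejected here, the problem is the receive connector or
        # recipient resolution, not sendmail.
        swaks --server mailserver.xyz.local \
              --helo externalmailserver \
              --from sender@external.example \
              --to recipient@xyz.local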

    Read the article

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA 5. I'm a pretty clever dude and I planned on building the necessary tools myself. So I built a local copy of gcc, binutils, and glibc: everything CUDA 5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't run simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't run commands to fix the problem, like rm or vim! Is there a way for me to ssh in but ignore the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could get administrator support.
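
    A minimal sketch of a recovery path, assuming the breakage comes from the LD_LIBRARY_PATH exported in ~/.bashrc (user and host below are placeholders). The variable only affects newly exec'd programs, so a remote command can unset it in the running shell before launching anything else:

        # Unset the variable in the same shell, move the offending rc file aside,
        # then start a clean interactive shell that skips all startup files.
        ssh -t user@cluster 'unset LD_LIBRARY_PATH; mv ~/.bashrc ~/.bashrc.broken; exec bash --norc --noprofile -i'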

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

    Read the article

  • Throughput and capacity planning help for a C10K-like design

    - by z8000
    I am designing a network service in which clients connect and stay connected; the model is not far off from IRC, minus the s2s connections. I could use some help understanding how to do capacity planning, in particular the system resource costs associated with handling messages from/to clients. There's an article where someone tried to get 1 million clients connected to the same server [1]. Of course, most of these clients were completely idle in the test. If the clients sent a message every 5 seconds or so, the system would surely be brought to its knees. But... how do you do less hand-waving and, you know, measure such a breaking point? We're talking about messages being sent by a client over a TCP socket, into the kernel, and read by an application. The data is shuffled around in memory from one buffer to another. Do I need to consider memory throughput ("5 GT/s" [2], etc.)? I'm pretty sure I have the ability to measure the basic memory requirements due to TCP/IP buffers, expected bandwidth, and the CPU resources required to process messages. I'm a little dim on what I'm calling "throughput". Help! Also, does anyone really do this? Or do most people sort of hand-wave, see what the real world offers, and then react appropriately? [1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/ [2] http://en.wikipedia.org/wiki/GT/s
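
    A hedged sketch of the empirical approach: drive the server with a synthetic client load at increasing message rates and record system counters at each step, then look for the rate where CPU, softirq time, or socket backlogs stop scaling linearly. The load generator below is hypothetical; sar and ss are the standard sysstat/iproute2 samplers:

        # For each target message rate, run the load generator for a fixed window
        # and sample kernel/network counters alongside it.
        for rate in 100 1000 5000 10000 50000; do
          echo "=== ${rate} msgs/sec ==="
          ./loadgen --clients 10000 --rate "$rate" --duration 60 &   # hypothetical generator
          sar -n DEV 5 12 > "net-${rate}.log" &    # NIC throughput per interface
          sar -u     5 12 > "cpu-${rate}.log" &    # user/system/iowait CPU
          ss -s              > "sock-${rate}.log"  # socket summary snapshot (repeat mid-run for a better picture)
          wait
        done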

    Read the article

  • My desktop has started overheating -- how hot is hot?

    - by Jerry
    I have a two-year-old desktop, some random quad-core HP desktop. It used to run very quietly, but in the past month the fans start up anytime anything "serious" is being done: compiles, playing video, etc. Right now, SpeedFan and Speccy report the cores are between 50C and 70C. SpeedFan reports this as hot (nice flame icon). Well, the system does sit on my carpet, so two weeks ago I took off the lid, and cough *cough* it was pretty filled with dust. I got out an air can, turned on a vacuum, and carefully got out all the dust that I saw on the CPU fan, the case fans, and any other fan I saw (graphics board), and blew out all the dust I could from all the circuit boards. And then I closed the case back up. It has definitely run cooler since then, but it still runs hot, and I hear high-speed fan noise I never heard before. How hot is too hot? At what temps do consumer-grade CPUs die? What should I be looking to do? Replace the CPU fan? (It seems to work.) Replace the power supply fan? Assuming the dust problem is gone, where should I be looking to determine why the machine is heating up? Epilogue: After following the various pieces of advice given here, the system did run cooler, but it was still noticeably running louder (hotter) than just a few months prior. I ended up purchasing a new CPU heatsink and fan, and during installation found the cooling grease on the original heatsink was just a dried, cracked layer, probably more of an insulator than a heat transfer agent. With the new fan AND the new heatsink compound, the system ran much, much cooler and the fan rarely turns on.

    Read the article

  • I can't connect to my network, except in safe mode

    - by eidylon
    My laptop suddenly cannot connect to my network, except in safe mode. When it boots, it shows the available networks in the tray popup, but if I click Connect on any of them it says "Unable to connect", and the troubleshooter is useless. Shortly thereafter all the networks disappear. I have tried removing IPv6 support, as I have seen that cause problems. No joy. I've also tried removing the wireless network adapter in Device Manager and reinstalling it; also no joy. I've also tried attaching a USB wireless adapter, and it has the same problem. If I boot in safe mode, it has no problems at all. Three other devices in the house connect fine, so I am pretty sure it is nothing to do with the router. Any ideas what to check next? I am running Win7 Ultimate on a 2 GHz quad-core with 8 GB RAM, with a Broadcom 802.11n wireless card. EDIT, re wired connections: what is very weird is that if I plug in a wired connection, not only does it connect via the wired connection, but the wireLESS also starts working perfectly. And as soon as I unplug the wire, the wireLESS stops working again! So it seems the wireless currently works only in safe mode, or when a wired connection is also plugged in.

    Read the article

  • Best way to convert episodic DVDs for Windows Media Center?

    - by Roger Lipscombe
    I'm archiving my DVD collection. My goal is to be able to play them back in Windows Media Center. For feature-length DVDs, I'm using AnyDVD and CloneDVD, which is working well. For playing back TV shows (and other episodic content), I'm using Media Browser, which doesn't support a VIDEO_TS folder per episode. It expects the shows to be broken up into one file per episode (e.g. "Willo the Wisp - S01E12.avi"). For this, I'm attempting to use Handbrake, which, for extracting the episodes from DVD (or already-ripped VIDEO_TS folder), is working pretty well. The problem that I have is that the default x264 encoder over-compresses the resulting video stream, which results in hideous artifacts in animated shows. The aforementioned Willo the Wisp is a particularly bad example, because the original DVD is particularly "noisy". If I switch to using the ffmpeg encoder, the artifacts are gone in Windows Media Player, but I can't get the resulting files to play back in Windows Media Center. I see the first frame, and then there's an error message. I've installed the CCCP codec collection, but it doesn't seem to have made any difference. So: what's the best way to convert VIDEO_TS to individual episode files for playback in Windows Media Center?
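
    A sketch of the per-episode extraction with HandBrake's command-line interface (title numbers, quality value, and file names are examples; scan the source first to map episodes to titles). Lowering the constant-quality value (e.g. -q 18 instead of the default) and adding a light denoise filter are the usual ways to keep noisy animation from falling apart; the denoise flag name varies by HandBrake version, so it is left out here:

        # List the titles in the ripped folder first (-t 0 scans everything).
        HandBrakeCLI -i "/path/to/Willo/VIDEO_TS" -t 0 2>&1 | grep "+ title"

        # Then encode each episode title to its own file at a gentler quality.
        for t in 1 2 3 4 5 6; do
          HandBrakeCLI -i "/path/to/Willo/VIDEO_TS" -t "$t" \
                       -e x264 -q 18 \
                       -o "Willo the Wisp - S01E0${t}.mp4"
        done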

    Read the article

  • Chrome and Internet Explorer won't connect after an infection; Firefox works

    - by Zack
    Hi guys, please help; I am pretty new here. I'm having problems: I cannot connect with Chrome or Internet Explorer, but Firefox works fine. It seems to have started when I was infected by a "Trojan Horse Generic 17.BWIK" and a "Trojan Horse SHeur.UHL" while replying to a post in a thread I started. I have removed the threat and got Firefox working (or so I think), but Chrome and IE still cannot connect. I do not want to lose my Chrome history, so resetting it would be my last option, and uninstalling and reinstalling is out of the question. Is there a way around this? I am using XP Pro on a desktop with a DSL connection. Also be aware of "Fake_Antispyware.FAH", which I just found on my computer while dealing with this, according to my AVG antivirus. Please can you direct me to a cure? Thank you in advance.

    Read the article

  • pgpoolAdmin keeps going straight back to its login page

    - by user705142
    I'm trying to run pgpoolAdmin through nginx. It seems to be working properly, at least initially: I've gone through the initial setup, which works fine, but now, after logging in, every link takes me straight back to the login page. It also shows Japanese text instead of English, despite picking English in the installation. It seems as if it is unable to save any user data, session information, etc. I have JavaScript/cookies enabled, so it's not that. The ownership of the folder is nginx, and so too is pgmgt.conf.php, so it shouldn't be a problem with permissions. One potential issue is that I can't see any confirmation that PHP PostgreSQL support is enabled in the phpinfo screen, despite the correct package being installed and in the config line. Any ideas as to what's happening here? The nginx rules are pretty standard:

        server {
            # pg-pool admin
            listen 997;
            server_name localhost;
            root /opt/pgpooladmin;
            index index.php;
            location ~ .php$ {
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
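
    The login loop is consistent with PHP sessions not being persisted. A hedged checklist sketch (the paths are typical defaults and may differ on this system): confirm where PHP-FPM stores sessions and that the PHP-FPM user can write there, and confirm the pgsql extension is actually loaded:

        # Where do sessions go, and who owns that directory?
        php -r 'echo session_save_path(), "\n";'
        ls -ld /var/lib/php/session          # typical default; adjust to the output above

        # Make it writable by the user PHP-FPM runs as (often the same nginx user here).
        chown -R nginx:nginx /var/lib/php/session

        # Is PostgreSQL support really loaded in the PHP that FPM uses?
        php -m | grep -i pgsql

        # Watch the error logs while clicking a link after login (paths are typical defaults).
        tail -f /var/log/php-fpm.log /var/log/nginx/error.log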

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the sync:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But if I check the revision properties it says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually re-copy the properties via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query them, it's just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
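
    For reference, a minimal sketch of the hook the mirror repository needs so svnsync is allowed to write svn:author and svn:date; this just restates the usual setup from the svnsync docs, with an example sync user name:

        #!/bin/sh
        # <path-to-mirror>/hooks/pre-revprop-change  (must be chmod +x)
        # Args: $1 repos, $2 revision, $3 user, $4 propname, $5 action.
        # Allow revprop changes only for the user doing the sync ("svnsync" is an example name).
        if [ "$3" = "svnsync" ]; then
          exit 0
        fi
        echo "Only the svnsync user may change revision properties" >&2
        exit 1

    Afterwards, "svnlook propget --revprop -r 19817 <path-to-mirror> svn:author" checks the property on a specific revision directly, which is less ambiguous than svnlook info.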

    Read the article

  • Looking for recommendations on OCR problem - tabular numeric data

    - by ldigas
    I have 20 pages of experiment measurement data which I need to digitize. The results are in tabular form, scanned at 600 dpi resolution, and as far as scans go they came out pretty clean and readable. For an example of how it looks, see here (but beware: it is a rather big scan, about 5 MB; no problem for any broadband connection, but dial-ups should approach with caution!) ... and I need it finished by Sunday afternoon (:-o) <-- smiley in a state of panic (then why didn't you start sooner?)... yea, yeah ... I know ... but it came up late, and I wasn't thinking I was going to need this data as well. So, I'm looking for recommendations. I haven't much experience with OCR programs, save scanning a page or two of pure text, but just to mention, I don't wish to test out every OCR program out there either. So this isn't a "name your favourite OCR". What I'm looking for is advice from someone who's done something like this, and his/her experience on what would be the best way to go about it. I need the data in txt form, but since it will have to be checked (by plotting it and simply watching whether some points "jump out") I'll probably be entering it into Excel at first.

    Read the article

  • Why can't I get rid of default index.html even if I disable the default virtual host in Apache2?

    - by Emre Sevinç
    I have created a virtual host settings file and I disabled the default settings by using a2dissite default (this is a pretty standard Ubuntu 10.04 installation). But no matter what I try, my Apache2 server simply keeps on displaying the default index.html page instead of the index.php page that I set up in the virtual host file. Can someone help me figure out what I'm missing? Details follow. No default settings:

        ls -l /etc/apache2/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 51 May  5 13:32 webmin.1273066327.conf -> /etc/apache2/sites-available/webmin.1273066327.conf
        lrwxrwxrwx 1 root root 34 May 30 11:03 www.accontax.be -> ../sites-available/www.accontax.be

    Contents of the relevant virtual host:

        cat /etc/apache2/sites-enabled/www.accontax.be
        <VirtualHost *>
            ServerName www.accontax.be
            ServerAlias accontax.be
            DirectoryIndex index.php
            DocumentRoot /var/www/drupal/
            <Directory /var/www/drupal/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Contents of httpd.conf:

        cat /etc/apache2/httpd.conf
        Listen 80
        NameVirtualHost *

    I also have these relevant lines in my apache2.conf:

        # Include generic snippets of statements
        Include /etc/apache2/conf.d/
        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    When I visit http://www.accontax.be I expect the apache2 server to go to the /var/www/drupal subdirectory and start serving index.php, but it simply keeps on serving index.html from the /var/www directory. I have reloaded the configuration, restarted the server, and deleted my browser cache. Nothing changed. Probably I'm missing a simple yet crucial step, but I just could not find it.
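
    A small diagnostic sketch using standard Apache/Debian tooling (nothing here is specific to this site beyond the names above): dump how Apache actually parsed the virtual hosts and confirm which vhost answers first for port 80:

        # Show the parsed virtual host table -- the first vhost listed for *:80
        # is the default that answers requests matching no ServerName.
        apache2ctl -S

        # Double-check that no leftover default site is still linked in.
        ls -l /etc/apache2/sites-enabled/

        # After any change, run a config test and reload.
        apache2ctl configtest && /etc/init.d/apache2 reload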

    Read the article

  • Using gentoo, how does one stick -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild: to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option for package management reasons, but there is also no ebuild in portage for the 1.2 beta versions. So I did the following:

        ESVN_OPTIONS="-r12120" emerge -1a django

    which installed the required revision from svn. But this is cumbersome. Is there some way to define this statically per ebuild, e.g. something like DJANGO_SVN_REV="12120" in make.conf? That would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason, I need to remember "oh, I wanted this to stick to a specific revision", and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:

        - Manually installing packages without package manager knowledge is not an option.
        - Working around with manual emerge variable prefixing is not an option.
        - Setting up /etc/portage/package.env would be a way to go (as described here), but that seems pretty unsupported and kludgy to me and thus unpreferable.
        - Modifying make.conf would be a way to go.
        - Keeping the ebuild in an overlay would be an option.
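
    For what it's worth, the per-package environment mechanism mentioned in the list above only takes a couple of files; a minimal sketch (file name is an example, the category/package atom is the usual one for Django):

        # /etc/portage/env/django-svn-rev.conf
        ESVN_OPTIONS="-r12120"

        # /etc/portage/package.env -- map the package to that environment file.
        dev-python/django django-svn-rev.conf

        # From then on a plain rebuild sticks to the pinned revision:
        emerge -1a django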

    Read the article

  • Why do Microsoft Windows updates take so long to install?

    - by Mathieu Pagé
    Hi, I have a question that is not related to a problem I have, just something I'd like to understand: why are Windows updates so slow? First, Windows Update needs to find which updates you need, and this takes about 5 minutes. What is happening behind the scenes during those 5 minutes? I would have thought that it would be enough to compare the updates you already have against the complete list of updates, or to check the version numbers of a couple of files. Then, when it comes time to install the updates, they also take a long time. Some 1 MB updates take 2, 3 or 5 minutes to install. What is taking so long? I would have thought that it was simply a matter of backing up the old file, uncompressing the new files, and replacing the old file. This should be really fast. Is Windows doing something else? For comparison, under Linux you can find which updates you need in about 20 seconds, and installing them is usually pretty fast (the time it takes to uncompress the files). I can do a complete upgrade of my Linux machine in about 25 minutes (download 600-800 MB of updates, hundreds of them, and install them), while under Windows 25 minutes is the time it needs to find which updates are needed and install about 5-10 of them. I just updated a Windows XP Home machine from SP1a to SP3 plus all other updates. It took me more than 3 hours. Doing something like that in the Linux world takes about 30 minutes. I don't want to bash Microsoft here; I genuinely want to know what they do differently that makes it take so long.

    Read the article

  • Exchange server issues - can't upgrade to SP3 - trying to migrate to Exchange 2010

    - by Keith
    Our Exchange server is having a lot of issues. It can't get Windows Updates anymore (error 8000FFFF) and it has a lot of other issues that are all related (Server Manager error: Catastrophic failure, exception HRESULT 8000FFFF). Everything I've read online about it says you pretty much have to reinstall Windows to fix it. Because of that, we are going to migrate to a new server running Exchange 2010. I have the new server ready, and when I ran the prerequisite checker it complained that the Exchange 2007 installation wasn't SP2 or newer. I checked and indeed, it is SP1. So I am trying to upgrade the Exchange 2007 installation to SP3; however, it is failing. It hangs on "Removing Exchange files." I followed these instructions and it's still not working. I can get to the part where you run the upgrade from the command line, but it starts asking for the exchangeserver.msi file. I point it to where it is located, but it keeps asking for it. I am starting to get concerned that I can't upgrade the Exchange server because of the same issues above. My next step is to call Microsoft about the issue because I need to get it fixed, but I wanted to check here first.

    Read the article

  • Finding ALL currently used IP addresses of a website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses that are currently used by a website? How would you be as exhaustive as possible without calling the website's admin and asking for the list of IP addresses? ;) nslookup works but will vary based on the DNS server queried; whois is another good tool; dig, not bad. Let's use Facebook for example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I cannot easily use OpenDNS because we all appear to come from the same request IP address. I could change that, but I don't want to add more VLANs than I already have. I could also block something like regex facebook1 "facebook\.com" (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding the IP addresses for a domain, not about other methods of blocking a domain name.
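
    A hedged sketch of how to widen the DNS sweep (the resolver list and hostnames are examples; a CDN-backed site will keep rotating answers, so collect over time and consider the site's published AS/netblocks as well):

        # Ask several resolvers for A records of the obvious hostnames and
        # collect the unique addresses seen.
        for ns in 8.8.8.8 208.67.222.222 4.2.2.2; do
          for host in facebook.com www.facebook.com m.facebook.com; do
            dig +short "$host" @"$ns"
          done
        done | sort -u

        # Repeat periodically (cron) and merge the results, since answers rotate.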

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. It has 16 GB of memory and 8x 1 TB drives running behind a 3ware RAID card. The storage is managed via LVM. Short version: we're running a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation:

        1. Stop the KVM guest.
        2. Extend the size of the LVM volume: "lvextend -L+500Gb ..."
        3. Check the filesystem: "e2fsck -f /dev/mapper/..."
        4. Resize the filesystem: "resize2fs /dev/mapper/..."
        5. Start the guest.

    The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while, but it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that there are a lot of block input/output operations occurring. So now my question is threefold:

        1. Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues.
        2. What is the most likely cause? (The 3ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted, and nothing in dmesg looks important/unusual.)
        3. Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions in a different filesystem in the future, to avoid either this corruption (whatever the cause) or the long fsck time? xfs seems like the obvious candidate?
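
    One small, low-risk aid for the "never-ending fsck" part (standard e2fsprogs behaviour, nothing specific to this box): e2fsck can report progress, either when started with -C or by signalling an already-running check:

        # Start a check with a progress bar on the terminal (device name is a placeholder):
        e2fsck -f -C 0 /dev/mapper/VG-LV

        # Or toggle progress reporting on an e2fsck that is already running:
        kill -USR1 "$(pidof e2fsck)"      # USR1 enables the completion bar, USR2 disables it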

    Read the article

  • Reduce power consumption of gaming computer while idle

    - by White Phoenix
    This is my current build:

        EVGA X58 (first generation) motherboard
        Intel i7 965 clocked @ 3.3 GHz
        3x DDR3-1600 Corsair RAM at stock timings and voltages
        Corsair AX750 80 Plus Gold PSU
        1 optical drive
        1 Seagate 7200.10 500 GB drive
        2x Western Digital Caviar Black 1 TB drives
        OCZ Vertex 1 60 GB
        EVGA GTX 460 oc'd at 800/1600/1850
        Antec 1200 case
        HT-Omega Striker 7.1 sound card
        Windows 7 32-bit Professional (PAE enabled)

    I've already seen the posts "Reduce power use on computer" and "How do I lower power consumption of my computer", and while useful, I'm looking for answers specific to my build and OS. I'm pretty sure this build is an energy-intensive one by default, but I want to reduce the amount of energy it uses when I leave it idle (when I go to bed or go out, etc.). The first requirement for this machine is that I need to leave it on, so I cannot turn it off while it's not being used. I run it as a file server for personal reasons, and I also leave it on in case people leave me messages on various IM services and chat clients (IRC, MSN, Steam, XFire, Pidgin, etc.). I'm also unable to replace the parts in my computer with cheaper, "greener" parts. What are some ways to minimize the amount of power the machine uses? I'm already using a high-efficiency power supply (80 Plus Gold), but I imagine there are other things that can be done in the BIOS and in Windows' power settings to reduce power usage while I'm not using the computer. From what I can tell, I can't use Sleep since that will disable network access (the whole reason I leave the computer on in the first place). I already turn off my monitor when it's not in use. I enabled Intel SpeedStep in the BIOS (I know, I have a 965, so why am I enabling SpeedStep?). Should I bring the graphics card back to stock speeds and lower the clock on the processor even more? The main reason I'm asking is that I think this computer alone is the reason my power bill is high, so I want to reduce its consumption as much as possible without having to shut the thing down.

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4-bedroom, two-story unit; it's roughly 2,200 sq ft. I want the absolute maximum throughput possible at all focal points. We're all in internet-related industries; between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:

        1. Garage (office), downstairs
        2. Each bedroom (x4), upstairs
        3. Living room, downstairs

    The fastest line we can get is Comcast 50 Mb down / 5 Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we may also bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too. My concerns: if we go wireless-only, there will be too much latency for gaming. I don't know if placing one D-Link DGL-4500 on the top floor would be enough; that is the router I currently own (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals transfer best top-down. Would this wireless router on the top floor be enough, and that's it? My second strategy was a combination of wiring and wireless, but I'm not sure what the easiest way to do this is. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent... if we can't drill through a wall we can probably "stitch" cables along the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi, I have all kinds of files that I have downloaded from everywhere over the years, scattered around my hard drives. I'm in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files this is relatively simple to determine (it requires actually opening and examining the file). For executables or other binaries it is relatively difficult (executables may or may not crash; other binaries could be completely unknown). Archive files (e.g. zip, rar, 7z, exe, ace, etc.), however, should be pretty simple: they have a built-in corruption detection facility. My problem is that I really don't want to open and test each and every single archived file on my drive; that would be a nightmare. I'm looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
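
    In the absence of a dedicated tool, a sketch of the brute-force approach using 7-Zip's test mode (p7zip on Unix-like systems; it understands most of the formats listed above; the search root and extension list are examples):

        # Test every archive under a directory and log the ones that fail.
        find /path/to/files -type f \( -iname '*.zip' -o -iname '*.rar' \
                                       -o -iname '*.7z' \) -print0 |
        while IFS= read -r -d '' f; do
          if ! 7z t "$f" >/dev/null 2>&1; then
            echo "CORRUPT: $f" >> corrupt-archives.log
          fi
        done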

    Read the article

  • Issues installing Apache on Debian

    - by Belgin Fish
    I'm having issues installing apache2, and pretty much everything else in general; I'm using Debian. I run sudo apt-get install apache2 and it returns:

        root@debian:~# apt-get install apache2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         apache2 : Depends: apache2-mpm-worker (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-prefork (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-event (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-itk (= 2.2.16-6+squeeze7) but it is not going to be installed
                   Depends: apache2.2-common (= 2.2.16-6+squeeze7) but it is not going to be installed
        E: Broken packages

    Not really sure what's up. :S It seems like it can't find any of the required packages for anything. Anyone know what I'm doing wrong?
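
    A hedged sketch of the usual first checks for "Broken packages" on Debian (generic apt tooling; the package names are just the ones from the error above):

        # Refresh the package lists first; stale or mixed lists are a common cause.
        apt-get update

        # See which versions of the dependencies apt can actually see, and from where.
        apt-cache policy apache2 apache2-mpm-worker apache2.2-common

        # Ask apt to explain what is holding the MPM package back.
        apt-get install apache2-mpm-worker

        # Try to repair a half-broken state.
        apt-get -f install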

    Read the article
