Search Results

Search found 7204 results on 289 pages for 'almost dead'.

Page 194/289

  • Color Printer: Laser vs Inkjet

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I've used inkjets for decades. I need a printer that can deliver the same high quality as these photo inkjet printers, but I'm tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was thinking about buying a color laser printer, but I'm not sure these printers can deliver the same quality and are worth the investment in terms of toner consumption. I remember that my old LaserJet printer was able to print 1100 pages per toner cartridge. The inkjet printers I have can print 500 pages per cartridge. Price for price, 2 inkjet cartridges cost more or less the same as one toner cartridge and in theory print almost the same number of pages. I am not sure whether this is true for color lasers. What can you guys tell me about quality, toner cost, and cost per page for laser vs. inkjet printers? Is it worth the change? (Keep in mind that an inkjet printer costs $50 and a laser printer costs $200.) Thanks.
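
    The core of the question is a cost-per-page comparison, which is easy to sanity-check once you have cartridge prices and page yields. A minimal sketch, using made-up placeholder prices (the page yields are the ones quoted above; substitute real figures before drawing any conclusions):

      # hypothetical cartridge prices, for illustration only
      awk 'BEGIN {
          ink_cost = 30;   ink_pages   = 500;    # inkjet cartridge
          toner_cost = 60; toner_pages = 1100;   # laser toner cartridge
          printf "inkjet: $%.3f per page\n", ink_cost   / ink_pages;
          printf "laser:  $%.3f per page\n", toner_cost / toner_pages;
      }'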

    Read the article

  • Why is this APC installation failing so badly?

    - by Matt
    I have multiple instances of APC running on my server with similar configurations (albeit with different cache sizes). However, one of the instances is performing extremely poorly, and I have no idea why (100% cache fragmentation, high miss rate). The runtime settings I'm using are as follows (pretty much out of the box): apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 1M apc.mmap_file_mask apc.num_files_hint 1000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.shm_segments 1 apc.shm_size 10M apc.slam_defense 1 apc.stat 1 apc.stat_ctime 0 apc.ttl 0 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 0 apc.write_lock 1 APC is version 3.1.6, PHP is 5.3.3-1ubuntu9.5. I've tried restarting Apache multiple times, so this isn't a freak instance. The instance with problems is simply running WordPress with a few plugins installed. All other instances (~4) on the server are running perfectly fine with almost 100% hit rates and 0% fragmentation; for example, one of them is serving a website built using the Symfony framework. Any help would be much appreciated; I haven't had much experience with APC and was hoping for it to be an out-of-the-box speed boost ;).
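
    For what it's worth, an apc.shm_size of 10M is small for a WordPress install with plugins, and an undersized segment is a common cause of heavy fragmentation and misses. A hedged sketch of how one might raise it is below; the ini path is an assumption for an Ubuntu PHP 5.3 layout, so confirm it first.

      # path is an assumption for Ubuntu's PHP 5.3 packaging -- confirm with: php --ini
      echo 'apc.shm_size=64M' | sudo tee -a /etc/php5/apache2/conf.d/apc.ini
      sudo /etc/init.d/apache2 restart
      # then re-check hit rate and fragmentation with the apc.php page shipped with the extension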

    Read the article

  • Postfix Whitelist before recipient restrictions

    - by GruffTech
    Alright, some background. We have an anti-spam cluster handling about 2-3 million emails per day, blocking somewhere in the range of 99% of spam email from our end users. The underlying SMTP server is Postfix 2.2.10. The "frontline defense", before mail gets carted off to SpamAssassin/ClamAV/etc., is attached below. ...basic config.... smtpd_recipient_restrictions = reject_unauth_destination, reject_rbl_client b.barracudacentral.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.mailspike.net, check_policy_service unix:postgrey/socket ...more basic config.... As you can see, standard RBL services from various companies, as well as a Postgrey service. The problem is, I have one client (out of thousands) who is very upset that we blocked an important email of theirs. It was sent through a Russian freemailer which was at the time listed in two of our three RBLs. I explained the situation to them; however, they are insisting we do not block any of their emails. So I need a method of whitelisting ANY email that comes to domain.com, and I need it to take place before any of the recipient restrictions; they want no RBL or Postgrey blocking at all. I've done a bit of research myself; http://www.howtoforge.com/how-to-whitelist-hosts-ip-addresses-in-postfix seemed to be a good guide at first, almost fixing my problem, but I want it to accept based on the TO address, not the originating server.
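
    A sketch of the kind of thing being asked for (untested, and the map path is hypothetical): Postfix can consult a recipient-based access map whose OK result stops evaluation of the remaining recipient restrictions, so placing it after reject_unauth_destination (to avoid becoming an open relay) but before the RBL and Postgrey checks exempts those recipients from exactly those checks.

      # /etc/postfix/rcpt_whitelist (hypothetical path), one entry per line:
      #   domain.com    OK
      #
      # main.cf fragment (sketch):
      #   smtpd_recipient_restrictions =
      #       reject_unauth_destination,
      #       check_recipient_access hash:/etc/postfix/rcpt_whitelist,
      #       reject_rbl_client b.barracudacentral.org,
      #       reject_rbl_client cbl.abuseat.org,
      #       reject_rbl_client bl.mailspike.net,
      #       check_policy_service unix:postgrey/socket
      postmap /etc/postfix/rcpt_whitelist && postfix reload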

    Read the article

  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now and it looks like it is some sort of user account problem. Specifics of the system are: Dell laptop, Windows XP Pro SP3, non-domain member computer, DLP projector connected to the laptop via VGA. I use this setup almost daily to do presentations, always in mirrored display mode, where I can see on the laptop monitor the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in, it switches to Extended Desktop (like two desktops side-by-side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally. Mirrored display works as normal. I have run into this about 4 times now and it can always be solved by creating a new user account on the computer, and then all is well. I would like to either: 1. Find a way to reset the customized settings for a specific user account, which would hopefully make this go away, or 2. Find the specific setting that causes this so that I can easily fix it when the problem comes up. Creating new user accounts is kind of a pain and an easy fix must be out there somewhere.

    Read the article

  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned out to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment, we have a unified ticketing system that's braindead, but is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfills our needs, and they are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool. Management side: Wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want a full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements: Non-management side: - flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.) - good full text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name) - task lists or to-do lists, hopefully able to be grouped into a number of "projects" - file uploads - RSS or Atom feeds, email alerts of updates. We're open to doing some customizations (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories: project management/task tracking with poor wiki/documentation/knowledge base support; wiki with no task tracking support; ticketing system with everything else bolted on (we already have one that we're stuck with); code-centric application (we do little "development", mostly SA work). Any suggestions? Or, lacking that, any comments on which software would be easiest to add the lacking features to (hopefully ending up with something that actually looks good and works well)?

    Read the article

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is to keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails, and larger versions, plus a file called example.jpg. I dispatched the example.jpg file easily with: rm */example.jpg I initially thought that it would be easy to delete the thumbnails, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did rm */photo*s.jpg but it turned out that some of the files named photoXs.jpg were actually larger, not smaller. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file and extract the size of each file and save those under a threshold. The problem? In one directory the large will be 1.1 MB and the thumb is 200k. In another the large is 200k and the small 30k. Even worse, the files really are mostly named photo1.jpg - so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if it's possible I'd prefer to keep them in their folders. I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
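
    One approach along those lines is a per-directory threshold rather than a global one. A minimal sketch is below; it assumes GNU ls/stat, only prints what it would remove, and the "smaller than half the biggest file in that directory" cut-off is a guess you may need to tune.

      for d in */ ; do
          # size in bytes of the largest file in this directory (GNU ls, sorted by size)
          biggest=$(ls -lS "$d" | awk 'NR==2 {print $5}')
          threshold=$(( biggest / 2 ))
          for f in "$d"*.jpg; do
              [ -e "$f" ] || continue
              size=$(stat -c %s "$f")
              if [ "$size" -lt "$threshold" ]; then
                  echo rm "$f"   # drop the echo once the list looks right
              fi
          done
      done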

    Read the article

  • Vista gets stuck in an endless loop while booting

    - by Mason Wheeler
    I put my laptop to sleep last night, and when I woke up this morning... it didn't. So I tried to reboot, and everything went fine until it got to the Vista splash screen, where it's supposed to display the logon. Here, it hits an endless loop: Display the cursor with the blue spinny thing that replaced the hourglass, for 5-10 seconds. Display "Please wait..." for about half a second. Screen flashes to black, then quickly back to the Vista splash screen. Go to step 1. The whole time, my hard drive LED is on almost non-stop. I can boot into Safe Mode... sometimes. Sometimes it'll load all the drivers, then sit there for about 10 minutes, spinning the hard drive non-stop, then reboot with no warning. I tried booting to Last Known Good Configuration. Didn't fix anything. When I've managed to get into Safe Mode, I tried running CHKDSK. Didn't fix anything. I tried running System Restore to each of my last two restore points. Didn't fix anything either time. I ran a virus scan. Didn't find anything. I tried calling the manufacturer (Alienware), only to discover that my warranty expired last freaking week and now I can't get it fixed without paying exorbitant sums of money. I'm about at my wits' end here. Has anyone seen this problem before? Does anyone know how to fix it? Does anyone know a solution that does not involve reinstalling the OS and losing an entire year's worth of program installations, Windows Updates and configuring and tweaking things until it's working just like I want it to?

    Read the article

  • Kernel hacking methodology - how to find out where to hack the linux kernel

    - by Flavius
    I have a throw-away cheap laptop I'd like to twiddle around with, a ThinkPad SL 500. What bothers me are two LEDs, the one for wireless connectivity and the one for hibernation, which don't light up at all, although they're functional; I've verified that on Windows. So I would like to write a kernel driver for them, nothing big, it just looks like a good idea to play around with the kernel. My question is what methodology I should follow systematically to find out what devices are responsible for those LEDs (in general, not necessarily specific to my hardware), and what drivers are responsible for the other two LEDs that work, Bluetooth and the battery indicator? And when I say methodology, I really mean the methodology, step by step, with reasons for each step, like in the answer I gave to someone else over here: What does && mean in void *p = &&abc; I am proficient at fgrepping through big code repositories, using static code analysers & co, but I think my lack of hardware knowledge hinders me on this problem. PS: I'm using Arch Linux, so almost the latest kernel version.
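
    A reasonable first step (just a sketch of where to look, not specific to the SL 500) is to see what the kernel already exposes: working LEDs show up under /sys/class/leds, the platform driver (thinkpad_acpi on ThinkPads) shows up in lsmod, and dmesg hints at what the firmware advertises. The LED name in the last line is only an example; pick one from the listing.

      ls /sys/class/leds/                     # LEDs the kernel already knows about
      lsmod | grep -i -e thinkpad -e acpi     # which platform drivers are loaded
      dmesg | grep -i -e led -e thinkpad      # driver messages mentioning LEDs
      echo 1 | sudo tee /sys/class/leds/tpacpi::power/brightness   # toggle an exposed LED by hand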

    Read the article

  • Setup a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers I manually connected to our colo VPN using my user account on the new Amazon EC2 machine. I was able to join the domain and the new Amazon server became another domain controller on our network. So far so good. The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the other services we run (like IIS) give 404 errors until the 6-8 minutes have passed. It's almost like the domain controller is attempting to reach the other domain controllers first and after 6-8 minutes it falls back to the one located on the local machine? I don't think that's what's happening though, because Server 2008 R2 doesn't have primary and backup domain controllers. They're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN when VPN was enabled. In the server logs I'm seeing these warnings pop up during a reboot: The winlogon notification subscriber is taking long time to handle the notification event (CreateSession). The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession). Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.

    Read the article

  • How do I get a Wireless N PCI card to connect to a Wireless G router?

    - by Andy
    I'm having some problems setting up a new wireless PCI card on a WinXP SP3 PC. I know that the router is configured correctly. It is a Linksys WRT54GL, using 802.11b/g. Security mode is WPA2 Personal with TKIP+AES encryption. I am able to connect to this fine using my laptop (first gen MacBook with an 802.11b built-in card). The new PCI card is also Linksys, but it supports 802.11n. The card seems to be installed OK (Windows sees it fine, doesn't list any errors in Device Manager), however when it scans for available wireless networks it can't find my wireless network (the router is set to broadcast the SSID). I tried to enter the network SSID manually, but that didn't seem to help. I chose WPA2-PSK for network authentication. The only options for encryption are TKIP or AES - I've tried both, neither worked. I am sure that I typed in my wireless key correctly. At this point, I don't think the problem is with encryption, but something else. It almost seems like I need to switch the wireless card into g mode, but I haven't found a way to do that (if that is even possible/necessary - I thought n was fully backwards compatible with g). Also, the PC is in the same room as the router, and my laptop, so I don't think that it is an interference issue. Any ideas what I'm doing wrong? I'm running out of things to try at this point. :(

    Read the article

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application that is running in our data center on premises located in Houston. We have developed a new .NET 4 based web application in order to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing an unusually high latency, in the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also see the same millisecond response times. We are testing the web service response using third party tools such as SoapUI, Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc. As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances while testing, which are in the same region and availability zone as well. If we experienced the high latency consistently from all the EC2 instances I could understand it, but there is something else going on. Comparing the various stats and results between the low latency and high latency EC2 servers does not show any significant differences: ping (constant 40ms), tracert, winmtr, etc. We have instances that are in the VPC as well. So I tried both the public and private IP address of the web service host server, and that didn't make a difference for the above results either. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
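
    One way to see where those 5+ seconds go from a given instance is to break the call down with curl's timing variables (a sketch; the URL is a placeholder for the legacy web service endpoint). If time_connect stays small but time_starttransfer is large, the delay is after the TCP handshake (server-side processing or the return path) rather than in DNS or connection setup.

      curl -s -o /dev/null \
           -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
           http://legacy.example.com/service.asmx   # placeholder URL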

    Read the article

  • Can I have 2Gbit over 1Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1Gbit can transmit data in both directions simultaneously, it should be possible to get 2Gbit of data transfer on a single NIC (1Gbit send and 1Gbit receive). People claim that because 1Gbit is full-duplex (almost always) it is exactly 2Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs of 250Mbit capacity each give 1Gbit. Unless it is really possible to transfer data in both directions simultaneously. I did a test with iperf, Ubuntu Server 12.04 <-- MacBook Pro, both with decent CPU speed. I tested the speed of the connection individually, and on the Mac I can see 112MB/s regardless of which direction the data is going. On Ubuntu with vnstat and ifstat I got 970Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using 2 iperf clients shows that on the Ubuntu box, for example, I'm sending at 600Mbit and receiving 350Mbit, which adds up to pretty much a 1Gbit link. So to me there is no magical 2Gbit. Can someone confirm that, or tell me why I'm wrong? Another thing that confuses me is the fact that a 24-port switch has, for example: throughput up to 50.6 Mpps, switching capacity 68 Gbps, switch fabric speed 88 Gbps, which would suggest they can handle 2Gbit per port.
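
    For reference, the simultaneous send-and-receive measurement described above can also be reproduced with iperf's dual-test mode instead of two separate client/server pairs. A sketch, assuming iperf 2.x on both ends and a placeholder hostname:

      # on the Ubuntu box
      iperf -s
      # on the MacBook: run send and receive simultaneously for 30 seconds
      iperf -c ubuntu-box -d -t 30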

    Read the article

  • HTML tabindex: Put some links last without complete enumeration

    - by Emanuel Berg
    I know I can use the HTML anchor attribute tabindex to set the tabindex of links, i.e., in what order they get focused when the user hits Tab (or Shift-Tab). But I have a home page with tons of links, and to enumerate all those is a lot of work. The actual case is, I have four image links that by default get index 1, 2, 3, and 4 (well, the behavior is equivalent, at least). But I'd much rather have the first non-image link as number 1. Check it out here and you'll understand immediately. I tried to give the first non-image link (the link I want to have tabindex 1) tabindex 1 explicitly, hoping that it would cascade from there, but it didn't (i.e., the first image link got implicit tabindex 2). I also tried to give the image links ridiculously high tabindexes, but that didn't work: as the other links didn't have tabindexes at all, those high ones were still "first". As a last resort (the solution currently employed) I gave the image links all tabindex -1. That makes for logical tabbing, but it is suboptimal, as those image links are excluded from the tab loop - a user tabbing away will probably never realize that the images are clickable. I'd like them to be reachable with tabbing, but last, after all the ordinary links. If you wonder why I'm so determined to achieve this, it has to do with my own finger habits: I almost exclusively search for links, tab back, tab forth, etc., and very seldom use the mouse. Note: I'll accept a script to change the actual HTML for a complete enumeration, if you convince me there is no "set" way to solve this problem.

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus: Setup: Source machine: Windows Server 2003 R2 machine with local hard drive. VHD file of 40GB. 1 x 1Gbps network card, Cat6 cable, switch. Target machine: Windows Server 2008 R2 machine with iSCSI connection to iSCSI target on a separate machine (1TB, RAID5). 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine. Second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target. Switches are Netgear JGS524 model (web managed). If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive I get 40GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target) I get 40GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine I get 40GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box: XCOPY <source> <target> /J XCOPY /J - Copies using unbuffered I/O. Recommended for very large files. So, what's the bit I'm missing here? Why do the back-to-back copies take 1 hour, 23 minutes, 32 seconds in total, when a "straight through" copy takes almost 3 times as long? Switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?

    Read the article

  • Converting DisplayPort and/or HDMI to DVI-D?

    - by Jeff Atwood
    Newer Radeon video cards come with four ports standard: DVI (x2) HDMI DisplayPort If I want to run three 24" monitors, all of which are DVI only, from this video card -- is it possible to convert either the HDMI or DisplayPort to DVI? If so, how? And which one is easier/cheaper to convert? I did a little research and it looks like there isn't a simple "dongle" method. I found this DisplayPort to DVI-D Dual Link Adapter but it's $120; almost cheaper to buy a new monitor that supports HDMI or DisplayPort inputs at that point! There's also a HDMI to DVI-D adapter at Monoprice but I'm not sure it will work, either. AnandTech seems to imply that you do need the DisplayPort-to-DVI: The only catch to this specific port layout is that the card still only has enough TMDS transmitters for two ports. So you can use 2x DVI or 1x DVI + HDMI, but not 2x DVI + HDMI. For 3 DVI-derived ports, you will need an active DisplayPort-to-DVI adapter.

    Read the article

  • OpenSwan (IPSEC) on Fedora 13 with Snow Leopard as a client

    - by sicn
    I recently installed OpenSwan on my Fedora 13 machine. I want to use it to connect with Mac OS X with L2TP over IPsec; unfortunately I am already stuck on the IPsec negotiation part. My server is running behind a NATted firewall so my external IP differs from the server's IP. The server has a fixed IP on the network and the same is almost always valid for the clients (they are usually behind a NATted firewall). I installed OpenSwan on Fedora 13 and have the following configuration: config setup protostack=netkey nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off nhelpers=0 conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 rekey=no ikelifetime=8h keylife=1h type=transport left=my.servers.external.ip leftprotoport=17/1701 right=%any rightprotoport=17/0 IPsec starts fine and listens on UDP 500 and 4500. These two ports are opened in the firewall and are forwarded fine to the server. In my /etc/ipsec.secrets file I have my.servers.external.ip %any: "LongAndDifficultPassword" And finally in my sysctl.conf (the redirect entries are there because OpenSwan was strongly protesting about send/accept_redirects being active) I have net.ipv4.ip_forward = 1 net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.all.accept_redirects = 0 Running "ipsec verify" gives me "all greens" (except Opportunistic Encryption Support, which is DISABLED); however, when trying to connect, my Mac gives me the following in the logs: Nov 1 19:30:28 macbook pppd[4904]: pppd 2.4.2 (Apple version 412.3) started by user, uid 1011 Nov 1 19:30:28 macbook pppd[4904]: L2TP connecting to server 'my.servers.ip.address' (my.servers.ip.address)... Nov 1 19:30:28 macbook pppd[4904]: IPSec connection started Nov 1 19:30:28 macbook racoon[4905]: Connecting. Nov 1 19:30:28 macbook racoon[4905]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Nov 1 19:30:31 macbook racoon[4905]: IKE Packet: transmit success. (Phase1 Retransmit). Nov 1 19:30:38: --- last message repeated 2 times --- Nov 1 19:30:38 macbook pppd[4904]: IPSec connection failed Any ideas at all?
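
    When the Mac's Main-Mode message 1 never gets a reply, it usually means either the IKE packets are not reaching Pluto or the responses are not making it back out through the NAT. A quick diagnostic sketch (the interface name eth0 is an assumption) is to watch UDP 500/4500 on the Fedora box while the Mac retries, and confirm Pluto is actually bound to those ports:

      sudo tcpdump -n -i eth0 udp port 500 or udp port 4500   # do IKE packets arrive, and do replies leave?
      sudo netstat -lnup | grep -e ':500 ' -e ':4500 '        # is pluto listening on both ports?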

    Read the article

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised. ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores, instead of just 12... For those of you with large core counts, I would suggest you fire up your VI Client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only the lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance. Thoughts/comments? PS I've logged an SR with VMware to assist; also, the other "problem" could be that we have 128GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try to cram all the VMs into the first two NUMA nodes, as we only have around 50GB of RAM worth of VMs on each host...

    Read the article

  • Repository bugzilla package changed to bugzilla3 in Lenny; upgradable?

    - by Pukku
    This question was asked on debianhelp.org almost half a year ago, but never got an answer. I wasn't the one who posted it, however I was today facing exactly the same question. Not sure if copying it here as such is considered inappropriate or something, but there's not really anything that I would even like to paraphrase... So let's just go. (I'm sure you will be happy to close it, if this is not the way to go :) Hello all! We are using a Bugzilla server install on a Debian 4/Etch server and are starting to look at the upgrade to Debian 5/Lenny. I was hoping to upgrade the existing Bugzilla server and database from the oldstable (v2.22) to the newer stable in Lenny (v3) when we get to doing a dist-upgrade. However, from testing in a virtual machine it seems that the old package was called "bugzilla" whereas the Lenny package is called "bugzilla3", and I could not figure out a way to directly upgrade between the two. Is it possible to establish some kind of upgrade path quickly after the dist-upgrade to minimise downtime, using apt-get or aptitude? Going on past experience I would not want to do a fresh install with the bugzilla3 package and attempt to inject the old database into it (previous attempts failed miserably!) :(
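
    Bugzilla itself upgrades between major versions by pointing the new code at the old database and letting checksetup.pl migrate the schema in place, so one hedged approach (a sketch only; the database name "bugs" and the way of locating checksetup.pl are assumptions, and backups come first) would be:

      # back up the existing 2.22 database before touching anything (database name assumed)
      mysqldump --opt bugs > bugs-2.22-backup.sql
      apt-get install bugzilla3
      # point bugzilla3's localconfig / debconf answers at the existing database, then let
      # checksetup.pl upgrade the schema in place:
      cd "$(dirname "$(dpkg -L bugzilla3 | grep checksetup.pl | head -n1)")" && ./checksetup.pl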

    Read the article

  • Where in the stack is Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for this. I became curious as to where Microsoft implemented this technology in the stack. I can imagine a very naive implementation being in Windows Explorer where when you double click on an exe or other blocked file type, that Explorer would check against the policy. I call this naive because obviously this wouldn't protect against someone typing something in a CMD window. Or worse, Adobe Reader running an external application. On the other hand, I can imagine that software restriction policies could be implemented deep in the stack almost at the metal. In this case, the low level loader would load into memory the questionable file, but mark the memory in the memory manager as non-executable data. I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path block, Internet Explorer will crash if it attempts to load Java. Which is what I want. But I'm not sure how deep in the stack it's implemented and any insight would be appreciated.

    Read the article

  • An easily customizable linux distribution using minimal disk space?

    - by Frank
    I'm looking for a Linux distribution that can easily be used to create my own distribution: the same system with some extra software installed. So basically I should be able to create an ISO which, when installed, will have the Linux distribution with my desired software installed. More specifically, I plan on installing MySQL and a bit of my own software, which shouldn't be too big. However, this distribution needs to be extremely small in terms of disk space. The distribution, including MySQL, should not exceed 100MB. It should, of course, still be able to connect to the internet and perform other standard functions. I don't need X or any sort of window manager, and would prefer not to have one since it would increase disk usage. Currently I have tried ttylinux and Tiny Core Linux. I've found that ttylinux, while extremely small, has almost nothing included, so that MySQL can't even be installed. Tiny Core Linux, on the other hand, is a bit too big. I've found OpenEmbedded and Linux From Scratch, but I would prefer the install and build process to be much easier. What other distribution would you recommend for my purposes? Minimizing disk usage is the most important factor, followed by ease of installing and creating the custom distribution.

    Read the article

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst (6.7GB) file, while there was only 400MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 PST files (complete scan mode), while "Recover My Files" in sector scan mode found 5 PSTs, but 4 of them were of sizes from 3 to 28 KB, while the 5th one was 1GB. I've managed to successfully recover the 1GB PST file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused. Why was the 1-year-old file successfully recovered if there was only 400MB left on the primary partition? Where has the 6.7GB file gone? I did some reading (i.e. here), and it seems that there's almost no chance of retrieving the file I'm looking for, but wait - none of the recovery tools I've used found a zero-sized PST file; moreover, if the file is corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance.

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server Enterprise 2012 with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server edition). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running on a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server Enterprise Core 2012 not only allowed SQL Server to utilize the 12 previously unused cores on the 2nd processor, but it also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, and then came down to around 33% once it was caught up. Performance improved dramatically once we failed over to the newly upgraded version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to unevenly distribute the load, relying almost exclusively on the first 4 cores of the 2nd processor (which had 12 cores idle), and allocate only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used, without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores that it recognized? Thanks for any insight into these questions and/or links that might help me make sense of what was happening.

    Read the article

  • Mail queue directory stuck in IIS SMTP server

    - by Loftx
    Hi there, We have an IIS SMTP server which sends out a largish number of mails (4000 or so) in batches overnight, and recently we've seen mails get "stuck" in the queue directory. Normally restarting the SMTP service seems to fix this, but it's happened a few times so I'm looking for more information. We sent out around 12,000 emails last night in 3 batches of roughly 4000. Around 10 hours later there are still 2000 or so in the queue directory which don't seem to be leaving the queue. Any new mails which appear in the queue are picked up almost immediately and sent to their destination, but these 2000 or so don't seem to move. Looking at the date modified on the emails, some match up with the time they were sent, but around 1000 of them have modified dates stretching up to now. E.g. there was one mail with a date in the message headers of 5:30 this morning, but its date modified is 11:50, and there are 3 other messages with a date modified of 11:50, then 5 with 11:49, 2 with 11:45, stretching back for a few hours, all with actual message headers far earlier. The logs for the server look like this: 11:54:52 127.0.0.1 EHLO - 250 11:54:52 127.0.0.1 MAIL - 250 11:54:52 127.0.0.1 RCPT - 250 11:54:52 127.0.0.1 DATA - 250 11:54:52 127.0.0.1 QUIT - 240 11:54:53 85.115.62.190 - - 0 11:54:53 85.115.62.190 EHLO - 0 11:54:53 85.115.62.190 - - 0 11:54:53 85.115.62.190 MAIL - 0 11:54:53 85.115.62.190 - - 0 11:54:53 85.115.62.190 RCPT - 0 11:54:53 85.115.62.190 - - 0 11:54:53 85.115.62.190 DATA - 0 11:54:53 85.115.62.190 - - 0 11:54:54 85.115.62.190 - - 0 11:54:54 85.115.62.190 QUIT - 0 11:54:54 85.115.62.190 - - 0 All codes are either 250, 240 or 0. I believe 250 and 240 indicate success, but I don't know what all the 0s are. Could someone with more experience of mail server troubleshooting give me a hand or tell me what to try next? Thanks, Tom

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the Karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what and when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally. /etc/hosts 127.0.0.1 localhost test 127.0.1.1 desktop /etc/host.conf order hosts,bind multi on /etc/resolv.conf # Generated by NetworkManager search search servers obtained via DHCP nameserver 192.168.0.3 /etc/nsswitch.conf passwd: compat group: compat shadow: compat hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 networks: files protocols: db files services: db files ethers: db files rpc: db files netgroup: nis But in fact it is not. user@test ~$ ping test PING localhost (127.0.0.1) 56(84) bytes of data. [skip] Pinging is OK. user@test ~$ host test test.mydomain.com has address xx.xxx.161.201 I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking it. Any thoughts or suggestions?
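
    One thing worth keeping in mind when comparing those two commands: ping resolves names through the NSS "hosts" line in nsswitch.conf, so it does consult /etc/hosts, while the host utility talks straight to the DNS servers in resolv.conf and ignores /etc/hosts by design, so the two disagreeing is expected rather than proof of breakage. getent follows the same path as ping and is the better check (a small sketch):

      getent hosts test     # resolves via /etc/nsswitch.conf, so /etc/hosts is consulted
      host test             # queries the DNS servers in resolv.conf directly, skipping /etc/hosts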

    Read the article

  • Gmail.com detects mail as spam, but the server is not on any blacklist

    - by Tomer W
    I have an issue with Google (Gmail, to be exact). About 1 month ago we had a security breach, and mail was relayed through our servers; we got listed in almost ALL blacklists :( We fixed the problem and requested removal from the blacklists, which was granted easily. Currently (for over 3 weeks), we have not been sending any spam anymore. Furthermore, we are now clear on all the blacklists (MxToolbox blacklist search result). But Gmail still refuses to accept anything from the server, stating '550 Spam'. The following is a telnet attempt to send to Gmail: 220 mx.google.com ESMTP g47si45436208eep.123 helo megatec.co.il 250 mx.google.com at your service mail from: <[email protected]> 250 2.1.0 OK g47si45436208eep.123 rcpt to: <[email protected]> 250 2.1.5 OK g47si45436208eep.123 Data 354 Go ahead g47si45436208eep.123 Test123 . 550-5.7.1 [62.219.123.33 11] Our system has detected that this message is 550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail, 550-5.7.1 this message has been blocked. Please visit 550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for 550 5.7.1 more information. g47si45436208eep.123 Connection to host lost. I tried filling in the form at Gmail - Report Delivery Problem. I also tried reaching Google by phone, but the message was to go to the link mentioned above. I checked reverse DNS and it is OK... We don't have TLS, but that shouldn't be a problem, should it? Note: we are not a bulk sender. Anyone have an idea what could be blocking our IP? Anyone know who can be contacted in order to resolve this blacklisting?
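
    Beyond the public blacklists, Gmail weighs sender reputation signals such as a matching reverse DNS entry and an SPF record for the sending domain. A quick self-check from any machine (a sketch, using the IP and HELO name quoted above) looks like:

      dig +short -x 62.219.123.33      # PTR record: should resolve, and ideally match the HELO name
      dig +short TXT megatec.co.il     # is there an SPF record authorising this IP to send?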

    Read the article
