Search Results

Search found 10741 results on 430 pages for 'linton day'.


  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slow-downs, and I'm not surprised (it's around 6 years old). Here's what I've verified:
    - They are not very frequent (only a couple of times a day).
    - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
    - Even while it is happening, the CPU usage stays low.
    - It happens to applications such as a text editor, Firefox or Skype, but it never happens to some applications (such as games) which I use for hours under heavy CPU load.
    Also of note:
    - The graphics card and PSU are new (around a year old).
    - Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows.
    - This HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200GB of data).
    Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other, less likely possibilities (such as RAM, software, or PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify that I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since the new hard drive won't have any of the software I use daily). Running on Windows/Linux.
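
    A minimal sketch of the kind of drive health check that can answer this on the Linux side, assuming smartmontools is installed and the disk is /dev/sda (adjust the device name):

        # read the drive's SMART health summary, attributes and error log
        sudo smartctl -a /dev/sda
        # start the drive's built-in extended self-test (runs in the background)
        sudo smartctl -t long /dev/sda
        # check the result once the self-test has finished
        sudo smartctl -l selftest /dev/sda

    Reallocated, pending or uncorrectable sector counts above zero, or a failed extended self-test, point squarely at the drive; a clean result makes RAM or software the more likely suspects.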

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day. I am attempting to replicate, on my MacBook (10.6, Snow Leopard), a setup I have working between a router and an Ubuntu PC. First, the router has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am now trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OSX; not even a MacPorts version exists. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with some experience with ipfw have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
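
    A rough sketch of the closest equivalent on Snow Leopard, which (lacking iptables) does NAT with the natd daemon plus an ipfw divert rule; the internet-facing interface en1 and the divert port 8668 (natd's conventional default) are assumptions to adjust:

        # let the Mac forward packets between interfaces
        sudo sysctl -w net.inet.ip.forwarding=1
        # start the userland NAT daemon on the internet-facing interface
        sudo /usr/sbin/natd -interface en1
        # divert traffic crossing that interface through natd
        sudo ipfw add divert 8668 ip from any to any via en1

    This is only a starting point; the route commands on the router side stay the same, with the Mac's USB RNDIS interface taking the place of the Ubuntu one.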

    Read the article

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some unix boxes set up in such a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct connection allowed to 'dev' via ssh from anywhere but 'jumpoff'. Furthermore, only key exchange is allowed on both servers, and you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work beautifully using the unix ssh tools. Basically all I do is make sure my VM's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file:

        # Development Server
        Host ext-dev
            # this must be a resolvable name for "dev" from Jumpoff
            Hostname 1.2.3.4
            User build
            IdentityFile ~/.ssh/id_rsa

        # The Jumpoff Server
        Host ext
            Hostname 1.1.1.1
            User daevid
            Port 22
            IdentityFile ~/.ssh/id_rsa

        # This must come below all of the above
        Host ext-*
            ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22

    Then I simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT -- and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but that doesn't seem to be the same concept. I've been messing with trial and error most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link, which seemed to be perfect, but it doesn't work for me: the "Master" connects fine, but the "client" portion doesn't connect. It tells me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight up example of how to do this in Windows?
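
    For what it's worth, a sketch of one way PuTTY can mimic that ProxyCommand: in the PuTTY session for dev (1.2.3.4), under Connection > Proxy, set the proxy type to "Local" and use plink's -nc (netcat) mode as the local proxy command. The user name and jumpoff IP below come from the question; the plink.exe path and the use of Pageant for the key are assumptions:

        plink.exe -agent daevid@1.1.1.1 -nc %host:%port

    With Pageant holding the private key, PuTTY then reaches 1.2.3.4 port 22 through the jumpoff box, much as the ssh/nc ProxyCommand does. SecureCRT has comparable firewall/proxy session options, though the menu names vary by version.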

    Read the article

  • Why would my HDD main partition suddenly become hidden?

    - by Luis Oscar
    A few days ago I was using my computer as usual and I turned it off. The next day it wouldn't boot up: after the hardware diagnostics it just stayed on the screen with the blinking underscore. So clearly it wasn't booting. I tried turning it on and off a couple of times, to no avail. Finally I used a Windows 7 disc, and it seemed as if there was no HDD; not even the installer would see it. So I thought the drive was dead. I bought a new one, installed Windows on it, and put the old HDD in an external case. I plugged it in and still couldn't see it. I finally downloaded a partitioning program (EaseUS or something) and my HDD was listed there WITHOUT a drive letter. I could, however, explore it, and when I set it as unhidden it came back to life; I could see it normally again. I really just wish someone could explain to me what happened here. Was it a virus? Does it mean the HDD is about to die? How can I prevent this, and what should I do now? Should I stop using this OLD HDD? Thanks
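
    For reference, a sketch of how the same "volume present but hidden / no letter" state can be inspected and cleared with Windows' built-in diskpart; the lines can be typed at the DISKPART> prompt or saved to a file and run with diskpart /s, and the volume number 3 and letter E are purely illustrative:

        rem list volumes and spot the one with no drive letter
        list volume
        rem select the affected volume (number taken from the listing)
        select volume 3
        rem clear the hidden attribute and give it a letter again
        attributes volume clear hidden
        assign letter=E

    Whether the flag was set by malware or by a partitioning tool is hard to tell after the fact, but a SMART check of the old drive is a cheap way to rule out impending failure.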

    Read the article

  • Disaster After Removing Two HDD From LaCie RAID 0 Case

    - by John
    This is the second time this has happened. I own a LaCie IDE RAID 0 enclosure, and the RAID went bad. The system gave me a warning that data could be read from the RAID but that nothing could be written, and to remove the data ASAP. I did that, then erased and reinitialized the RAID. The system reported it was fine, no issues. I wrote to the RAID again and the system reported the same issue. So I removed the drives and tested them individually, thinking one must have gone bad. Sure enough, one HDD reported all bad blocks, every single one after the Master Boot Record. I didn't think much of it because of the age of the drives (5 years old). So I bought two new drives, plugged them in, and started up the RAID again. Exactly the same thing happened: all was fine after initializing the RAID, and then the next day, after powering on the RAID, the exact same issue. The HDD sitting in the same position as the first "bad" HDD reported all bad blocks. Obviously, this is an issue with LaCie's bridge board, not with the drives. No utility I have used has been able to bring this HDD back to life. I thought I would just copy the MBR from the good drive to the new one using a sector editor, but I am hesitant. Is it possible the firmware on the HDD has been corrupted by the LaCie bridge board? What else could be the cause of such an issue? How can I fix this drive?
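
    Before experimenting with copying boot sectors between drives, a minimal precaution (sketched here for Linux, assuming the disks show up as /dev/sdb and /dev/sdc; adjust the device names) is to save the first sector of each drive so any change can be undone:

        # back up the MBR (boot code + partition table, 512 bytes) of each disk
        sudo dd if=/dev/sdb of=sdb-mbr.bin bs=512 count=1
        sudo dd if=/dev/sdc of=sdc-mbr.bin bs=512 count=1
        # restoring later is the same command with if= and of= swapped

    Since the partition table lives in that first sector, keeping copies makes it much safer to test whether the bridge board is the one rewriting it.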

    Read the article

  • "Learning" Linux

    - by Strider
    I've been interested in computers for a long time and have fiddled with a lot of stuff, including Linux. I started out with Red Hat when I was young (around 13) and lost all my data converting a FAT32 drive to something else. Later it was Knoppix, which was really helpful for recovery and such. Then it was Ubuntu. I also fiddled with Arch for some time, but it breaks too often for my liking (maybe I should have been more careful). Anyway, currently I use Ubuntu 9.04. I want to dig deeper into the Linux world now. I want to learn how things work and use the terminal more. I am a programmer as well, so it will help a lot. So, the things I wanted to ask are:
    - Good books to learn and understand Linux.
    - Good habits for using Linux more efficiently.
    - Good tools I should know about.
    - The amount of time you set aside to learn about new things each day.
    - As a programmer, how do you set up and use Linux efficiently?
    Long list. I will be grateful to the answerers.

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and about 1.2GB of "cached" memory. free shows:

                     total       used       free     shared    buffers     cached
        Mem:          3945       3893         51          0         28       1216
        -/+ buffers/cache:       2648       1296
        Swap:         3895        857       3038

    I have read that cached memory is memory the kernel uses for caching as long as applications don't need it. Is the cached value an indicator of the need for more RAM? cat /proc/meminfo, one day after flushing the cache:

        MemTotal:        4040048 kB
        MemFree:           32844 kB
        Buffers:           18956 kB
        Cached:          1249092 kB
        SwapCached:       161576 kB
        Active:          3611328 kB
        Inactive:         189104 kB
        SwapTotal:       3989496 kB
        SwapFree:        2894200 kB
        Dirty:             20520 kB
        Writeback:             0 kB
        AnonPages:       2523496 kB
        Mapped:           217744 kB
        Slab:              70940 kB
        SReclaimable:      36756 kB
        SUnreclaim:        34184 kB
        PageTables:        99648 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        CommitLimit:     6009520 kB
        Committed_AS:    6401716 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       18852 kB
        VmallocChunk:   34359719439 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB

    top:

        top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
        Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem:  4040048k total, 3953108k used, 86940k free, 16348k buffers
        Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
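
    As a quick sanity check, a sketch of how to estimate the memory actually available to applications on a kernel of this vintage (which has no MemAvailable field yet): free memory plus buffers plus page cache, since the kernel hands cache back under memory pressure.

        # rough "available" memory in kB = MemFree + Buffers + Cached
        awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum " kB roughly available"}' /proc/meminfo

    The 850MB-1GB of swap in use alongside 1.2GB of cache suggests the box does sit close to its limit; whether that justifies more RAM depends on how often the swapped-out pages are actually touched again (watch the si/so columns of vmstat).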

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (a CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way to find out which specific requests are responsible for the most CPU-hungry httpd processes? I have found a lot of information on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization of the current server?
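
    One low-effort way to attribute CPU-hungry workers to specific requests, sketched under the assumption that Apache's stock mod_status module is available (the location path and allowed IP are placeholders to adapt):

        # httpd.conf -- enable the extended server-status page
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 203.0.113.10
        </Location>

    With ExtendedStatus on, /server-status lists each worker's PID, its CPU time and the exact request it is serving, so the PIDs that top flags at 30-50% CPU can be matched to URLs. Pairing that with MySQL's slow query log usually narrows down the offending pages quickly.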

    Read the article

  • Thunderbird: how to move mails into correct thread? (mailing lists)

    - by unor
    I'm subscribed to some mailing lists, and every day people reply to the wrong mail (or don't use the reply function at all), so their mail lands in the wrong (or a new) thread. I set the mail display in "View > Sort by" to "Threaded". Example for the mailing list "Foobar":

        [Foobar] random topic
            Re: [Foobar] random topic
            Re: [Foobar] random topic
            Re: [Foobar] random topic
            Re: [Foobar] random topic
        [Foobar] I'm John Doe
            Re: [Foobar] I'm John Doe
            Re: [Foobar] Welcome, John
        Re: [Foobar] random topic

    There are two discussions, one about "random topic" and one about "John Doe". The subject line changes in the discussion about John Doe, which is fine (no problem here). But the last mail should be pigeonholed into the first thread; instead it sits at the top level. Now, how can I move that last mail into the correct thread? I tried to drag & drop it onto the mail I think it should be a reply to, but this doesn't work. I think it should theoretically be possible by fiddling with the mail headers after receiving the mail, but that doesn't seem to be a comfortable way.
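
    For background, a sketch of the headers that threading primarily keys on (the message IDs below are invented for illustration): a reply is attached to a thread because it carries In-Reply-To / References headers pointing at earlier messages, rather than (primarily) because of its subject line.

        Message-ID:  <[email protected]>
        In-Reply-To: <[email protected]>
        References:  <[email protected]> <[email protected]>

    So "moving" a stray mail into a thread effectively means giving the stored message the right References header (editing the message source), which is why a plain drag & drop has no effect.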

    Read the article

  • iptables port forwarding works only for localhost

    - by Venki
    Below is my iptables config. I use it to make a Node.js website running on port 9000 accessible through port 80. This works fine only if I access the website through localhost / loopback. When I try to use the IP of eth0, which is assigned by my router through DHCP (e.g. 192.168.0.103), it does not work. I am not able to figure out what is wrong here; I've already burnt a day on this and still can't figure it out :(

    Edit (more information): Earlier I was using this configuration to develop the website; I had configured the domain name to point to 127.0.0.1 in the /etc/hosts file and it worked fine. Now I am trying to deploy the website on a VPS with a static IP, and this configuration does not work with the static IP either.

        # redirect port 80 to port 9000
        *nat
        :PREROUTING ACCEPT [57:3896]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [4229:289686]
        :POSTROUTING ACCEPT [4239:290286]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
        -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
        COMMIT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp --dport 9000 -j ACCEPT
        -A INPUT -j REJECT
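
    Two quick checks that usually pin this down, sketched with the interface and port from the question (the tools assumed are the standard iproute2/net-tools utilities):

        # 1) is the Node.js app listening on all interfaces, or only on 127.0.0.1?
        ss -tlnp | grep 9000          # on older systems: netstat -tlnp | grep 9000
        # 2) are packets arriving on eth0 actually hitting the PREROUTING REDIRECT rule?
        sudo iptables -t nat -L PREROUTING -n -v

    The OUTPUT-chain REDIRECT only covers locally generated traffic, which is why localhost works regardless; requests arriving on eth0 have to match the PREROUTING rule, and the app has to be bound to 0.0.0.0 (or the eth0 address) rather than to 127.0.0.1 alone.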

    Read the article

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives who is also a board member of several other organisations, so he floats around between 4 different unrelated companies, each with their own domain, Exchange setup, etc. He has domain accounts in each organisation, and an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars, as they are booking things for him in their respective organisations' calendars without it conflicting with bookings made in his calendars by the other organisations... you with me? So for example: Company A books a meeting into his Company A calendar at 9am Monday, Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday in their Company C calendar. He gets all of those on his iPad, but he would like either a 'global' calendar everyone can see and book into, or the ability for receptionists at Companies A, B and C to see all the calendars to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure, and quite frankly I don't want any part of it... but he's whining and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software or service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation just so they can get access to his calendar, and I am sure their sysadmins feel the same. As usual - any advice is greatly appreciated ;)

    Read the article

  • WT-NMP - PHP-CGI randomly stops running with no error log

    - by alexfontaine
    We recently installed WT-NMP and are currently running php-cgi with PHP 5.4.24. We are running fairly simple PHP scripts, and when testing, everything runs fine. Over the weekend we wanted to keep the server running to test it over a longer period of time. The server and scripts ran fine all day on Friday, but sometime late on Saturday php-cgi stopped running. There are no errors in the error log (C:\WT-NMP\log). In the configuration (php.ini) I have the following options set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        html_errors = On
        error_log = "c:/wt-nmp/log/php_error.log"

    We also have the standard nginx.conf error logs:

        access_log "c:/wt-nmp/log/nginx_access.log";
        error_log "c:/wt-nmp/log/nginx_error.log" warn;

    So, since the log directory is empty, I am assuming that the running PHP scripts and general nginx operations are not what is causing php-cgi to stop. My questions are:
    - What else could cause php-cgi to stop running?
    - Are there any other options for logging that we could turn on that would help us track this down?
    - Are there other log locations that we should be looking at?
    Thanks!
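
    One php-cgi behaviour worth ruling out (a hedged note, not a diagnosis): a php-cgi worker exits by design after serving PHP_FCGI_MAX_REQUESTS requests (500 by default), and if nothing respawns it, the site simply goes dead even though PHP itself never logged an error. A minimal watchdog sketch in Windows batch, with the php-cgi path and FastCGI port being assumptions to adapt to the local WT-NMP install:

        @echo off
        rem restart php-cgi if it is not running; schedule every few minutes via Task Scheduler
        tasklist /FI "IMAGENAME eq php-cgi.exe" | find /I "php-cgi.exe" >nul
        if errorlevel 1 (
            rem raise the request limit and bind to the port nginx expects (assumed 9100)
            set PHP_FCGI_MAX_REQUESTS=10000
            start "" "C:\WT-NMP\bin\php\php-cgi.exe" -b 127.0.0.1:9100
        )

    Checking the Windows Event Viewer (Application log) is also worthwhile, since a crashing php-cgi.exe often leaves an entry there even when PHP's own log stays empty.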

    Read the article

  • Tomcat web application intermittent freeze

    - by tinny
    I have a Grails web application (just a standard war file) deployed on an Ubuntu 10.10 server running Tomcat 6. My database is PostgreSQL. The problem is that every so often (once or twice a day, after inactivity), when I try to log into this web application, it just freezes. I can navigate to the login page, but when I try to log in (the first time the DB is hit - might be a clue?) the application just freezes indefinitely. There is no 500 response code; the browser just waits and waits. I followed the instructions detailed here because the problem described sounded the same as mine. My GC logging showed no long-running GC; everything was sub-second. When the application freezes, a jmap heap output looks like this:

        using parallel threads in the new generation.
        using thread-local object allocation.
        Concurrent Mark-Sweep GC

        Heap Configuration:
           MinHeapFreeRatio = 40
           MaxHeapFreeRatio = 70
           MaxHeapSize      = 536870912 (512.0MB)
           NewSize          = 21757952 (20.75MB)
           MaxNewSize       = 87228416 (83.1875MB)
           OldSize          = 65404928 (62.375MB)
           NewRatio         = 7
           SurvivorRatio    = 8
           PermSize         = 21757952 (20.75MB)
           MaxPermSize      = 85983232 (82.0MB)

        Heap Usage:
        New Generation (Eden + 1 Survivor Space):
           capacity = 19595264 (18.6875MB)
           used     = 11411976 (10.883308410644531MB)
           free     = 8183288 (7.804191589355469MB)
           58.23843965562291% used
        Eden Space:
           capacity = 17432576 (16.625MB)
           used     = 9249296 (8.820816040039062MB)
           free     = 8183280 (7.8041839599609375MB)
           53.05754009046053% used
        From Space:
           capacity = 2162688 (2.0625MB)
           used     = 2162680 (2.0624923706054688MB)
           free     = 8 (7.62939453125E-6MB)
           99.99963008996212% used
        To Space:
           capacity = 2162688 (2.0625MB)
           used     = 0 (0.0MB)
           free     = 2162688 (2.0625MB)
           0.0% used
        concurrent mark-sweep generation:
           capacity = 101556224 (96.8515625MB)
           used     = 83906080 (80.01907348632812MB)
           free     = 17650144 (16.832489013671875MB)
           82.62032270912317% used
        Perm Generation:
           capacity = 85983232 (82.0MB)
           used     = 62866832 (59.95448303222656MB)
           free     = 23116400 (22.045516967773438MB)
           73.1152232100324% used

    Anyone know what "From Space" is? Any ideas for further fault finding? I don't have much experience with this type of fault finding.
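
    When the freeze is happening, a thread dump is usually more telling than a heap summary, because it shows whether the request threads are all blocked waiting on something (a database connection pool, a lock, an external call). A sketch, assuming the Tomcat process ID is 12345:

        # print the stack of every JVM thread to stdout
        jstack 12345
        # or ask the JVM itself to dump all threads into catalina.out
        kill -3 12345

    If every dump taken during a freeze shows the HTTP worker threads parked in JDBC or connection-pool code, the PostgreSQL connection handling (pool size, idle connections dropped by the network, missing validation query) becomes the prime suspect. As for the question itself: "From Space" is just one of the two small survivor spaces the young-generation collector copies live objects between, and its being nearly 100% used is normal.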

    Read the article

  • exim4 redirect mail sent to *@domain1.example.com to *@domain2.example.com

    - by nightcoder
    Current situation: we have a VPS that hosts the website example.org. Exim is configured to work as a smarthost: all emails sent through Exim are successfully relayed to another mail server (which serves example.com).

    Goal: to forward mail sent to *@example.org to *@example.com, i.e. change the recipient's address from *@example.org to *@example.com.

    Problem: if I send email to an *@example.org address, Exim does not seem to change the address; it still relays the message to the other mail server, but the recipient is still *@example.org. Maybe the redirect is not applied for some reason.

    Configuration and logs:

    /etc/exim4/update-exim4.conf.conf:

        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames=''
        dc_local_interfaces=''
        dc_readhost='example.org'
        dc_relay_domains='example.org'
        dc_minimaldns='false'
        dc_relay_nets='0.0.0.0/32'
        dc_smarthost='example.com::26'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='maildir_home'

    /etc/exim4/conf.d/router/999_exim4-config_redirect (created by me):

        domain_redirect:
          debug_print = "R: forward for $local_part@$domain"
          driver = redirect
          domains = example.org
          data = [email protected]

    (For now, data is set to a specific address for simplicity and testing.)

    Exim log when sending email to [email protected] (it should be redirected to [email protected]):

        2012-03-20 19:40:07 1SA4ud-0005Dw-7k <= [email protected] U=www-data P=local S=657
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k => [email protected] R=smarthost T=remote_smtp_smarthost H=domain2.com [184.172.146.66] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,2.5.4.17=#13053737303932,ST=TX,L=Houston,STREET=Suite 400,STREET=11251 Northwest Freeway,O=HostGator.com,OU=HostGator.com,OU=Comodo PremiumSSL Wildcard,CN=*.hostgator.com"
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k Completed

    So, the address is not changed :( Please help! I have been trying to make it work for half a day already :(
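
    A quick, non-destructive way to see which router Exim actually picks for a given address, sketched with Debian's exim4 wrapper and a placeholder test address ([email protected] stands in for any real *@example.org recipient):

        # show how Exim would route the address, including which router claims it
        sudo exim4 -bt [email protected]
        # after changing the configuration, regenerate it and reload Exim
        sudo update-exim4.conf && sudo service exim4 reload

    If -bt shows the address being handled by the smarthost router instead of domain_redirect, the usual suspects are router ordering (routers are tried in the order they appear in the generated configuration) and, with dc_use_split_config='false', the fact that files added under /etc/exim4/conf.d/ may not be included at all; in unsplit mode the configuration is generated from /etc/exim4/exim4.conf.template, so the new router would need to be added there instead.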

    Read the article

  • About Load average in htop, how to decide if it's still doing ok?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has been quite loaded recently, and the load average shows something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found an article about it: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How can it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? The performance seems totally fine, though, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime the load average always shows these high numbers... here are two more snapshots taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 minutes later

    So I am wondering how much room is left for this site to grow with the current configuration, and what kind of proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.
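
    As a worked example of what those numbers mean on a 2-CPU box: the load average counts processes that are runnable (or stuck in uninterruptible I/O wait), so 2.0 means the two CPUs are exactly saturated, and 3.10 means roughly one extra process-worth of work was queued on top of full utilisation over the last minute. A one-liner sketch to watch the per-CPU figure:

        # 1-minute load divided by CPU count; sustained values above ~1.0 mean work is queueing
        awk -v ncpu="$(nproc)" '{printf "%.2f load per CPU\n", $1 / ncpu}' /proc/loadavg

    Short spikes like 3.1 that fall back well below the CPU count are usually harmless; it is the 15-minute figure creeping toward and past the number of CPUs that signals it is time to optimise or upgrade.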

    Read the article

  • Problem importing pictures from a digital camera with Windows 7

    - by snark
    Hi. Since I've been using Windows 7 (Beta, then RC, now RTM), I have had issues when I download pictures from my digital cameras. It happens with both of my cameras: a Canon PowerShot S2 IS and a Canon Ixus 80 IS. When I plug a camera (either of them) into a USB port and switch it on in Play mode, the AutoPlay function of Windows 7 starts (screenshot omitted). I select "Import pictures and videos" to call the native Windows 7 tool. It searches a bit for pictures to download from the camera and starts the transfer. However, during the transfer I often get errors (screenshot omitted). If I use "Try again", it works fine the second time and the picture is retrieved correctly. It's very annoying when it happens 20 or 30 times in a 500-picture download, and I cannot leave it running unattended, as I have to watch for the errors and click "Try again". Any idea what is causing these errors? I tried changing the USB port (normally the cameras are connected via a USB hub, but it also happens when I connect them directly to a motherboard USB port) and the USB cable, but with no success. I also checked the SD cards by connecting them with a card reader and running chkdsk on them, but it found no errors on the cards. Update: there is no problem when I copy the pictures manually with Windows Explorer, and no problem either when I access the card with a reader. The built-in import tool of Windows is convenient as it sorts the pictures automatically by date (1 folder per day), and this is the way I sort my pictures.

    Read the article

  • What LTO 4 drive to buy

    - by pplrppl
    Evan Anderson mentioned in another answer that you could buy "a LTO-4 (autoloader, 1 tape / day) - $4,566.00" (the discussion included the total cost of tapes for a specific rotation), but I don't know the specifics of what he, or you, would recommend for the actual drive and, if necessary, the controller. Show me a Newegg, CDW, Dell, or HP URL (or whatever your favorite vendor would be) for your solution if you don't mind looking it up, or just give me a brand and a model number and I'll be glad to do the legwork myself. I currently have on hand an external LTO 3 drive that uses an LVD SCSI interface (and thus a controller card with an external LVD SCSI connector). If that card isn't sufficient to interface with an LTO 4 drive, let me know. http://www.fujifilmusa.com/shared/bin/LTO_Overview.pdf shows minimum tape speeds for LTO4 and other LTO formats. It looks like the IBM LTO4 actually has a lower minimum speed than the IBM LTO3. Either way, my average server is too slow to feed LTO3/4 without shoe-shining, so I'm looking for a drive with a low minimum write speed. If you trust the PDF from 2008, that makes my choices:
    - IBM LTO 4 full height
    - IBM LTO 4 half height
    - HP LTO 4 half height
    but presumably there are other options out there that weren't mentioned in the Fuji PDF. Again, I'm looking for a specific recommendation on a drive to buy (and the controller if needed).

    Read the article

  • If spambots are filling out forms on my website, should I mark the mail as spam?

    - by rob
    Apparently spambots have discovered the contact forms on my website. I'm considering adding a CAPTCHA, but in the meantime I've been getting dozens of completed website forms in my Gmail account each day. I know I can configure a filter to always allow e-mails from my website to be delivered (i.e., "never send to spam"), but I'm wondering whether there are also larger implications. At first, I was marking the spam forms as spam because I figured it would block the sender's e-mail address from getting through again. But since then, I've given it some more thought. I know many spam filters use Bayesian filtering and other techniques to identify spam based on its content. If I mark all these e-mails as spam, will legitimate e-mails from my website be marked as spam too? Even if I stop marking the messages as spam, I can see another potential problem: my form is pretty standard (in fact, it's probably very similar to other online forms). If I mark my spambot-submitted website forms as spam, will Gmail begin to mark other people's similar-looking messages as spam?

    Read the article

  • Is there a way to force spam-filter to change their policy or remove them as recognized spam service?

    - by Alvin Caseria
    According to MxToolbox, I have 1 blacklist still active, and it has been for quite some time now. UCEPROTECT Level 1 runs a 7-day policy counted from the last spam mail. This is too strict compared to the 98 other spam filters out there listed by MxToolbox (or at least compared to the other 4 that detected the problem). I have no problem with our e-mail, since it is hosted locally, but our domain is hosted outside the country and runs on a different IP. I contacted them, but since it is the spam filter's rule, there's nothing to be done but wait. I do believe services like spam filters should at least be bound by guidelines and standards in this matter; otherwise, problems delivering valid e-mails (after the fix) will be disastrous. Is there a way to force UCEPROTECT to change their policy or to have them removed as a recognized spam service, apart from contacting them in case they do not answer? Currently they charge for fast removal if you pay by PayPal. I'm still looking for a guideline or standard on how they should operate regarding this matter. I appreciate the help.

    Read the article

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs they can show to their managing boards. Is there a small business router that displays traffic graphs in the router's administration web interface? The router needs to support DHCP and basic firewalling; no other features are required. Further, the reports just need to show overall trends. It is not necessary to show traffic by IP, by protocol/application, or by time of day; they just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home-office routers, but if there's a commercial product that can be purchased, that would be significantly simpler. Also, the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

    Read the article

  • POST Fail via AJAX Request?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost (submitted to Stack Overflow, but maybe it's a server issue?). I am running a JavaScript logout function called logOut() that makes a jQuery AJAX call to a PHP script:

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while the POST data will not trigger my query (this lasts about an hour or so). I figured out that the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my logout function is being skipped, because my database contains the data shown in the first screenshot (omitted); if it were NOT being skipped, my data would show up as in the second screenshot (omitted). I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success, a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J
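
    A sketch of how to gather more evidence the next time the failure window is open (the URL is the one from the question; -v prints the status code and any redirects the browser might be hiding):

        # exercise the endpoint directly, bypassing the browser and JavaScript entirely
        curl -v -d "log_out=1" http://www.mydomain.com/functions.php

    If the direct POST writes "logOutSuccess" while the browser-initiated one still logs "POST FAIL", the problem sits in front of PHP (mod_security rules, a caching/proxy layer, or cross-domain requests losing their body) rather than in the script itself. Adding an error: callback next to success: in the $.ajax options and logging the HTTP status it reports would show the same thing from the client side.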

    Read the article

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that 'just works' without any manual intervention. The goal is to have an off-site, synchronized backup, so that when we check in files, create builds, etc. on the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) at some off-site location; if our office blows up, we don't lose all of our data. I have looked into some online backup services but have not yet had any success: some are quirky or buggy, others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need a folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system for backing up code assets that I just haven't found yet? Thanks
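
    Not a product recommendation, but a minimal sketch of the do-it-yourself baseline many shops fall back on, assuming SSH access to an off-site machine (the host name backuphost and the paths are placeholders): a nightly rsync of the code share, driven by cron.

        # crontab entry: mirror the code share off-site every night at 02:00
        0 2 * * * rsync -az --delete /srv/code/ backup@backuphost:/backups/code/

    The --delete flag keeps the remote copy an exact mirror; dropping it, or adding --backup-dir with a dated directory, turns the same job into a simple versioned archive, which matters if the worry includes accidental deletions and not just the office blowing up.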

    Read the article

  • Windows 7 "freezes" (chills?), and then "unfreezes" after about 1 minute.

    - by gbc001
    Hi, I have an Acer Timeline 1810T netbook (4GB RAM) with Windows 7 x64. About once or twice a day it "freezes" - the reason I put this in quotation marks is that it does not really freeze, as in you can't move the mouse, etc. I can move my mouse and jump between different applications, but I can't use the applications for anything. So I can jump between Notepad and Firefox, but I can't browse to a new web page. I have been trying to determine the source of this misery for a while now, and I suspect it has something to do with the hard drive - indirectly if not directly. Here are some screenshots of Resource Monitor during a "freeze" and during normal operation:

        Freeze: http://imgur.com/Gcgq1.jpg
        Normal operation: imgur.com/mlHaI.jpg

    As you can see, the CPU is fine during a freeze, but the disk is going bananas. Does anyone have an idea of what these readings mean, or about the problem in general? There seems to be no specific activity that sets this off - it can be during browsing, or during media playback with nothing else open. The fact that I can navigate to applications but not use them might suggest a hard drive problem as well? Maybe I can access the stuff that is in RAM, but not anything that would require interaction with the drive. Very appreciative of any help!

    Read the article

  • Windows XP consuming drive letters

    - by billdehaan
    This one's a bit of a stumper. I'm running XP SP3, current with all fixes, etc. My problem is that I can assign a drive letter to a container file (explained below) and it works just fine, but once I close the container, the drive letter is no longer available until the next boot. I've got some confidential data that I've placed in a container volume. I've used TrueCrypt (www.truecrypt.com) and FreeOTFE (www.freeotfe.org), both the installed and portable versions of each, with the same result. I open the container file, assign it to a drive letter (say R:), and run some portable apps that are within the volume. When I'm done, I close the container, and the drive letter is released. Fine so far. However, when I attempt to re-open it, the previous drive letter (in this case R:) is no longer available. It's not mapped to anything; it's just unavailable. Even attempting something like "subst R: C:\" returns "Invalid Parameter - R:". I can use the S: drive with no problem, but the next day I have to use T:, then U:, etc. Eventually I have to reboot to reclaim all of the drive letters. Unfortunately, everything I've read about drive letters relates to USB assignments, which doesn't apply here. I've tried the "show hidden" command (set devmgr_show_nonpresent_devices=1) with no success, and the Disk Management tool doesn't apply either, since it's not a physical drive. Does anyone know where Windows keeps the list of drive letters? And is there anything short of a reboot that can be used to reset it?
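
    A couple of built-in commands worth trying the next time a letter goes missing, sketched for an elevated cmd.exe prompt (R: is the letter from the question):

        rem list every volume mount point Windows currently knows about
        mountvol
        rem remove a stale R: mapping so the letter becomes available again
        mountvol R: /D

    Persistent letter assignments are recorded under the HKLM\SYSTEM\MountedDevices registry key, so comparing that key before and after a mount/dismount cycle would also show whether the container software is leaving an entry behind.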

    Read the article

  • Mac Security - Which one?

    - by Bob Rivers
    Hi. Recently I had my credit card cloned. A few hours after shopping at an online store (one I trust and have bought from since 2006), I received a call from my bank asking if I recognized a $5,000 charge to a store (?!) called Church of Christ... I'm a Mac user (OS X 10.6.3). I have always kept my system updated and I have the firewall enabled (on my Mac and on my broadband router), but I decided to adopt some kind of protection. I don't want to start a passionate discussion: real threat or not, snake oil or not, I need to get my peace of mind back... I read this and this post, and I selected two products that I think could help me (both have more features than just an antivirus). Does anyone have feedback on Intego's VirusBarrier X6 or Trend Micro's Smart Surfing? Intego's solution seems to be better, but the Trend Micro brand is stronger in the corporate environment, so their solution should be good too. Both have a 30-day free trial, but I would like to hear something from you first. Is there any other solution I should look at? TIA, Bob

    Read the article
