Search Results

Search found 491 results on 20 pages for 'unusual'.

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for webhosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I misconfigured the firewall, so spammers were able to send mail from my server. My primary IP is x.y.29.218. I got blacklisted in several places; those blacklistings have been gone for a week or so, but Google still has my IP blacklisted, and I'm suffering serious damage because of it - many clients want to switch away from my hosting. I've since closed the hole with the CSF firewall's SMTP_BLOCK option and also enabled the WHM SMTP Tweak. Currently, all I see in WHM's Main Email View Mail Statistics (Errors section) is rows and rows of the following message: removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error from remote mail server after end of data: host aspmx.l.google.com [a.b.39.27]: 550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of\n550-5.7.1 unsolicited mail originating from your IP address. To protect our\n550-5.7.1 users from spam, mail sent from your IP address has been blocked.\n550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review\n550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171 What are my options? I have one spare IP. How can I configure Exim to send mail from that IP? If anyone has knowledge of how to deal with this situation, please offer some kind of help - suggestions, anything. I've tried everything I know, and I still don't know much: this is the first time I've dealt with real physical servers rather than a pre-setup VPS solution, as I've only just started webhosting. Many, many thanks to whoever has time to offer some help.
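
    One avenue worth exploring, assuming the spare IP is already bound to the server: Exim's smtp transport has an interface option that sets the source address for outgoing connections. A minimal sketch of the relevant exim.conf transport (the IP is a placeholder; note that WHM regenerates exim.conf, so the change belongs in its Advanced Editor):

        remote_smtp:
          driver = smtp
          interface = x.y.29.219   # placeholder: your unused IP

    Google will likely also check the reverse DNS and SPF records of the new IP, so those would need updating to match.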

  • How can I copy this quote from PDF?

    - by isme
    I'm reading a PDF copy of Jerome H. Friedman's paper "Data Mining and Statistics: What's the Connection?" using Google Chrome and the Adobe Reader plugin. It contains an amusing quote that I want to copy and paste to my blog. I selected the text of the quote with the mouse and pressed CTRL + C to copy it. When I paste the text into Notepad, Stack Overflow, or anywhere else, the result is Wingdings-like gibberish: ????????????????????????|?????????|????? ?????|??????????????????????????????????????????????????|?????????????? ??????????????P????? ?????????????????????P?|?????????|?????????????????????????????????????????????????????? ????????????Þ?????????????????????????????????|???|??????????????????????????? The text should instead read: A difference between statisticians and computer scientists in this field seems to be that when a statistician has an idea he or she writes a paper; a computer scientist starts a company. I had to type that text out manually. This is feasible for such a small quote, but how do I actually copy what I see? Is it something unusual about the PDF, the browser, the plugin, or some combination of the three?
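
    One workaround worth trying, assuming you can install Poppler's command-line tools: extract the text outside the browser entirely. This can fail for the same underlying reason (a font embedded without a usable ToUnicode mapping), but when it works it is the quickest route:

        pdftotext -layout friedman.pdf friedman.txt   # filename is a placeholder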

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. After noticing brute-force SSH login attempts from a certain IP, and after a little Googling, I issued the following two commands (IP changed): iptables -A INPUT -s 202.54.20.22 -j DROP iptables -A OUTPUT -d 202.54.20.22 -j DROP Either this, or perhaps some other action like a yum upgrade, caused the following fiasco: after rebooting, the server came up without its network interface! I can only connect to it through the AWS Management Console's Java SSH client, via the local 10.x.x.x address. The console's Attach Network Interface and Detach Network Interface options are greyed out for this instance, and the Network Interfaces item on the left offers no subnets to choose from when creating a new interface. Please advise: how can I recreate a network interface for the instance? Update: the instance is not accessible from outside; it cannot be pinged, SSH'ed, or reached over HTTP on port 80. Here's the ifconfig output: eth0 Link encap:Ethernet HWaddr 12:31:39:0A:5E:06 inet addr:10.211.93.240 Bcast:10.211.93.255 Mask:255.255.255.0 inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1426 errors:0 dropped:0 overruns:0 frame:0 TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:152085 (148.5 KiB) TX bytes:208852 (203.9 KiB) Interrupt:25 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.
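
    Since you still have console access over the local 10.x address, one sanity check worth doing before rebuilding anything is ruling out the persisted firewall rules (a sketch, using the example IP from above):

        iptables -L -n --line-numbers              # see which rules survived the reboot
        iptables -D INPUT  -s 202.54.20.22 -j DROP
        iptables -D OUTPUT -d 202.54.20.22 -j DROP
        iptables -F                                # or flush everything back to defaults

    Also, if this instance is in EC2-Classic rather than a VPC, the greyed-out Attach/Detach options are expected: attachable network interfaces are a VPC feature.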

  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers running Windows Server 2003. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter whether an administrator or a domain user logs in; as soon as somebody logs in, the computer connects to all the printers. The printers are installed either on local computers or on the server, and shared. There is no log-on script connecting the printers; I install them manually, and none of the other computers shows this behaviour. We have a printer which is installed on two computers, both of which share it (I'm moving it to the server from a small PC which has shared it up to now, but some computers still use the old connection), meaning this specific computer connects to that printer twice and can't use either connection. How can I prevent this self-connecting to all printers (none of the other computers has this problem)? If I delete them from the "Printers" folder, everything works fine until I log on again, and the folder is once again full of all the printers we have. I have solved the smaller problem: the computer is now capable of printing on all of the printers (there seem to have been some registry issues; after cleaning the registry and reinstalling the printer, it works just fine). But the second problem remains: the computer connects to all the printers on the network (when I remove one or more, they are reconnected right after the next log-in by any user).
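
    One place worth checking on the affected machine: per-user printer connections on XP live under a registry key, so you can see exactly what gets re-created at log-on and clear it for a test (a sketch; export a backup first):

        reg query  "HKCU\Printers\Connections"
        reg export "HKCU\Printers\Connections" printer-connections-backup.reg
        reg delete "HKCU\Printers\Connections" /f

    If the key repopulates at the next log-on for every user, something machine-wide (a policy or a service) is pushing the connections rather than the user profile carrying them.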

  • Want to install GDB on a Fedora 7 machine

    - by RBA
    Hi, I want to install GDB, the GNU debugger, on my Fedora machine so I can debug C programs. I downloaded the source tarball from the GNU website, but it gives an error during the make step, even though I am following the README and online tutorials exactly. Please guide me. I have also tried yum install gdb and sudo apt-get install gdb. yum was not found on my system; I installed it, but now it gives an unusual error about a missing file, so no success there. sudo apt-get runs, but it gives the following errors: [oracle@localhost Programs]$ sudo apt-get install gdb Password: Sorry, try again. Password: Sorry, try again. Password: oracle is not in the sudoers file. This incident will be reported. [oracle@localhost Programs]$ I am in real need of this gdb tool. How do I go about it? Please share your experiences. Thanks.
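
    Two quick ways around the sudoers error, assuming you know the root password: become root directly, or grant the oracle user sudo rights first. (Fedora's native package manager is yum; apt-get is not the usual tool there.) A sketch:

        su -                 # become root, then:
        yum install gdb
        # or, still as root, grant oracle sudo rights:
        visudo               # add the line: oracle ALL=(ALL) ALL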

  • Very slow connection to xserve via afp or smb

    - by Mhoffman13
    Help. File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server and runs 10.4.11. The problem only seems to happen on brand new iMacs running 10.6.3: when connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speeds. To try to rule out OS incompatibility, I connected a new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So it seems the problem lies with the Xserve (maybe). The AFP logs (both access and error) don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, while the other connected machines had the ID broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. I tried restarting the Xserve, but the problem persists. Thanks in advance for any advice.
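
    One client-side setting that has been reported to cause exactly this symptom between 10.6 clients and older AFP/SMB servers is TCP delayed ACK. As a test on one of the new iMacs (not a definitive fix):

        sudo sysctl -w net.inet.tcp.delayed_ack=0   # takes effect immediately; reverts on reboot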

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 megs!); the average transfer rate is 400 KB/s, which is very slow - my usual transfer rate is more like hundreds of MB/s. I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box I can get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 filesystem, running on the same SAN as all the Windows servers I've tested), it's nearly instantaneous. I'm sure the size and number of the files stress such a filesystem more than usual, but such differences!? Why is it so slow on the Windows boxes (more than 30 seconds for 7 MB) and so fast on the Linux ones (a second or so)? (This was a true copy, not a hardlink I had created.) Is this normal behaviour or something unusual?
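
    Part of the gap is per-file overhead in Explorer's copy engine. On the Windows 7 and 2008 R2 boxes you can at least parallelize the operation with robocopy's multithreaded mode (the /MT switch is not available in older robocopy builds for 2003); a sketch with placeholder paths:

        robocopy C:\images D:\images /E /MT:32 /R:1 /W:1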

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the task manager. There is a Firefox entry under the Processes tab. Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's applications tab. I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker. Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server. From what I can see in the "man interfaces" text, there is an option for "manual", and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server). Is there a trick that I can use to get this behavior? EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.
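
    If the avahi-autoipd package is installed, Debian's ifupdown gains an ipv4ll method that does exactly this: the interface always self-assigns a 169.254.x.y address and never queries DHCP. A sketch of /etc/network/interfaces:

        auto eth0
        iface eth0 inet ipv4ll   # requires the avahi-autoipd package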

  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts on an Nginx proxy connecting to an IIS machine. Watching a packet capture between the two servers, it seems the IIS machine receives a SYN packet but never responds with the SYN-ACK I would expect. Before the timeout occurs there seems to be a slower response from the IIS server. There is no unusual memory or processor usage on either the IIS or the Nginx machine. Some information on the servers and setup: Nginx machine: Ubuntu 10.04 64-bit, Nginx 0.7.65, Amazon EC2. Windows machine: Windows Server 2008, IIS 7, ASP.NET application in Integrated mode. Nginx error: 2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com" WireShark packets: 6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7 6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7 6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7
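
    This won't explain the lost SYN-ACK, but while chasing the root cause you can at least fail faster than the default 60-second connect timeout so clients aren't left hanging (directive names are real nginx configuration; values are illustrative):

        proxy_connect_timeout 5s;    # give up on a dead upstream quickly
        proxy_read_timeout    60s;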

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate two virtual machines, both domain controllers, between two datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration, to avoid USN rollback and other replication issues. These are the steps I was planning to perform: 1. Shut down both DCs. 2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters by IP address instead of hostname). 3. Power them up at the new datacentre. 4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre. I used Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested I use a backup-and-restore application like Veeam Backup & Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1 KB/s, as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold-clone both DCs, which resulted in both ESX hosts becoming disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences; the moment I powered up the DCs via the VMware console command line, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing this correctly? Any help would be greatly appreciated. Thank you.

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of express middleware. I need to narrow down the problem, though, to find out WHERE the traffic is being lost and it was suggested that I try a packet sniffer on my app servers. Here's my setup: 2x Load Balancers (m1.larges) 2x node.js servers (also m1.large) Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server: RewriteCond %{HTTP_HOST} ^api.streamified.me RewriteRule ^(.*) http://localhost:8888$1 [P] So, the request hits the load balancer = goes to an app server = gets routed to port 8888 if it's intended for the API = gets handled by node.js So, in the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit my api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a Gateway timeout. Here, you can see that it has happened -- and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it...) Image Link: http://i.stack.imgur.com/3OQxS.png
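
    To pin down whether the loopback hop is where requests die, a packet capture on the app server itself should show whether Apache's rewritten request ever reaches port 8888 (a sketch; run it while your PHP+CURL script hammers the API):

        sudo tcpdump -i lo -nn -A tcp port 8888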

  • Windows: disable remote access of local drive, even by domain admin

    - by Matt
    We have a network of Windows 7 PCs that are managed as part of a domain. What we want is for the domain admin to be unable to view the PC's local drive (C:) unless he is physically at the PC. In other words, no remote desktop and no ability to use UNC. In other words, the domain admin should not be allowed to put \\user_pc\c$ in Windows Explorer and see all the files on that computer, unless he is physically present at the PC itself. Edit: to clarify some of the questions/comments that have come up. Yes, I am an admin---but a complete Windows novice. And yes, for the sake of this and my similar questions, it is fair to assume that I am working for someone who is paranoid. I understand the arguments about this being a "social problem versus a technical problem", and "you should be able to trust your admins", etc. But this is the situation in which I find myself. I'm basically new to Windows system administration, but am tasked with creating an environment that is secure by the company owner's definition---and this definition is clearly very different from what most people expect. In short, I understand that this is an unusual request. But I'm hoping there is enough expertise in the ServerFault community to point me in the right direction.
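
    For the UNC half of the request: C$ is an administrative share that Windows re-creates automatically, but its auto-creation on workstation editions can be switched off per machine. This is policy rather than a hard guarantee - an admin with remote registry or GPO rights can switch it back on, which is the social-versus-technical caveat in action. A sketch:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks /t REG_DWORD /d 0 /f
        shutdown /r /t 0   # administrative shares are re-evaluated at boot, so reboot to apply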

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they appear to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently block based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to iptables. ipt_recent would be awesome for this, and it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that stops me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, while I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
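
    For reference, if you do eventually go the ipset route, the iptables rule count stays at one no matter how many addresses are in the set - the property you had with pf tables. A sketch (syntax differs between ipset versions; this is the newer form):

        ipset create scrapers hash:ip
        ipset add scrapers 192.0.2.15        # one per offending address, or script the adds
        iptables -I INPUT -m set --match-set scrapers src -j DROP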

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac had been compromised, but I can't find anything unusual on the system. The process usually dies before I can trace it (Little Snitch never allowed the connection; it just left the popup up). I finally caught it once and found this: [10:32]: sudo lsof -nlP | fgrep tftp Password: tftpd 1924 18446744 cwd DIR 1,3 1326 2 / tftpd 1924 18446744 txt REG 1,3 29856 163979456 /usr/libexec/tftpd tftpd 1924 18446744 txt REG 1,3 600576 163686622 /usr/lib/dyld tftpd 1924 18446744 txt REG 1,3 303300608 189014898 /private/var/db/dyld/dyld_shared_cache_x86_64 tftpd 1924 18446744 0u IPv4 0x34a76100fcbb06e3 0t0 UDP *:55818 tftpd 1924 18446744 2u IPv4 0x34a76100f1113c53 0t0 UDP *:69 [10:32]: ps ax | fgrep 1924 1924 ?? S 0:00.00 /usr/libexec/tftpd -i /private/tftpboot 1949 s000 S+ 0:00.00 fgrep 1924 For the life of me I can't figure out what is starting this. Nothing in cron, the launch daemons, etc. Google searches haven't yielded much either. The connecting IP is different each time. So my question is: has anyone seen anything like this before?
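
    On OS X, tftpd is normally started on demand by launchd, which holds UDP port 69 on its behalf, so the thing to check is whether the tftp job is loaded and what enabled it (a sketch):

        sudo launchctl list | grep -i tftp
        sudo launchctl unload -w /System/Library/LaunchDaemons/tftp.plist   # disable it persistently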

  • Windows Server 2012 licensing issue preventing RDP connections?

    - by QF_Developer
    I am witnessing unusual behaviour on 1 of 5 Windows Server 2012 R2 machines (clean install) that prevents any remote connections from being established via RDP. I have run through the prerequisites for RDP here, but I am finding that any remote connection attempt instantly stops the Software Protection service. When I check the event logs I see the following entry: The Software Protection Service has stopped. Event ID: 903. Source: Security-SPP. From what I have read, Security-SPP is tasked with enforcing activation and licensing, and it appears that RDP requires this service to be in the running state. Is it possible that I have inadvertently activated this instance of Windows with a key that has already been associated with another instance (we have 5 keys as part of an MSDN subscription)? Would this be sufficient to block RDP access? When I look under System Properties (Windows Activation) it states that Windows is activated, and there are no other obvious indicators of a licensing issue. EDIT 1: I ran a PowerShell script to display the product keys of all servers in order to check for any duplication. For the problematic server I get the message "The RPC server is unavailable".
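
    Two local checks that may help, assuming you can still log on at the console: dump the licensing state in detail (the output includes a partial product key you can compare across the five servers) and confirm the Software Protection service will start at all:

        cscript C:\Windows\System32\slmgr.vbs /dlv
        net start sppsvc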

  • How is the extra mSATA SSD disk used/configured in a Dell XPS laptop?

    - by Mark
    Some machines in the new XPS laptop range from Dell come with a regular, large (500GB+) HDD and an additional 32GB m-SATA SSD. The only detail I can find about this extra drive on the Dell site is this: Store your important files, multimedia and photos with XPS 15’s large hard drive options. To get instant access to your media, choose an optional mSATA solid-state drive (SSD) that can boot up to twice as fast as a regular hard drive and resumes in less than 1 second. I'd like to know more about how this extra drive is set up and used, specifically: Is anything installed on it (e.g. OS files or a boot loader) or is it just used as swap space? Is the m-SATA drive visible as a lettered drive in Windows? (I'd guess not if it's used for swap file only.) Is this unusual configuration likely to cause any problems later down the line - e.g. when upgrading to Windows 8? As usual, Dell's sales team haven't been able to help. If anyone's actually got a Dell machine with this or a similar hard drive set-up and can give a definitive answer rather than speculation I'll accept the answer.

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem. First off, I just tried opening it on my own PC; Windows prompted me to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as Disk /dev/sdc - 6144 B - USB Flash Drive. That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway, the first analysis came up with: Partition sector doesn't have the endmark 0xAA55. TestDisk's Quick Search gave no results, so I moved on to Deeper Search: No partition found or selected for recovery. This left me stumped, and I tried a couple of other programs without success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, which returned the message: Windows was unable to complete the format. Googling that, the suggestion was to delete the partition - but there is no partition to delete in this case. Most recently I tried formatting from cmd and got this result: Format D: /FS:FAT32 The type of the file system is RAW The new file system is FAT32 Verifying 0M 11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned The volume is too small for FAT32 Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running CLEAN from DISKPART; the result was: DiskPart has encountered an error: The request could not be performed because of an I/O device error.

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc., but the drawback that both connections max out at about 6 Mb/s. So each individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections well. However, single-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction and reconstruction? Or am I just barking mad?
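
    The mechanism itself is easy to prototype before building a proxy: issue parallel Range requests and stitch the pieces together, which only works when the server answers 206 Partial Content. A sketch with a placeholder URL:

        curl -r 0-4999999       -o part0 http://example.com/big.iso &
        curl -r 5000000-9999999 -o part1 http://example.com/big.iso &
        wait
        cat part0 part1 > big.iso

    Because your gateway balances per connection, the two transfers can ride different uplinks - essentially what download accelerators do.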

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows: 2x XP Pro 32-bit machines, 1x Vista Ultimate x64 machine, and 2x Windows 7 Ultimate x64 machines, all chained into a 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed, and a router (Netgear RangeMax) hangs off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put two 3 TB hardware RAID 5 arrays in it, plus assorted spare disks, and shared the roots of all of them. The unusual problems start when I get "Access denied, please contact administrator for permission" (blah blah blah) when trying to access either of the RAID 5 arrays, but not the standalone drives. I have checked the permission settings and added Everyone to the read permission for the root; I have tried moving things into subdirectories and then sharing those; I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, always with the same result. I have tried flushing caches all round, disabling and re-enabling shares, sharing again after a restart, and several other things, and the result is always the same: no problem on the individual drives, but access denied on both RAID arrays from the XP, Vista and Windows 7 machines alike. One interesting quirk that may lead to an answer: when you browse to the RAID 5 shares from a Windows 7 machine there is no "offline status" information for the folders, whereas the normal drives say they are online. It is as if the array is present but turned off or spun down - but as far as I was aware, Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. I have to admit this has me stumped. Any suggestions, anyone? Thanks in advance for any fellow geek assistance. K.A.I.N

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions: # Look for and purge old sessions every 30 minutes 09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \ && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \ -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \ fuser -s {} 2> /dev/null \; -delete My problem is that this process is taking a very long time to run, with lots of disk IO. Here's my CPU usage graph: The cleanup running is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minutes times. At 15:00 I removed the 39 minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). Here are the corresponding graphs for IO time: And disk operations: At the peak where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
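
    That suspicion is easy to test: fuser forks once per candidate session file and scans /proc each time, which dwarfs the cost of the deletion itself. A sketch of the job with the fuser guard removed (the trade-off, presumably why Ubuntu added the guard, is that a session could be deleted at the moment PHP is writing it):

        find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) -delete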

  • Server down at 23:26 every night

    - by miccet
    We have been having a big problem with our site's stability over the last couple of weeks, and after endless hours of troubleshooting I'm getting nowhere, so I turn to you, dear community. Setup: 2 x VPS servers - front end: 8 cores, 8 GB RAM; database: 5 cores, 3 GB RAM. Both running Ubuntu. Ruby on Rails EE with Passenger 3 and Rails 2.3.11. MySQL 5.1.67. The problem is that each night, at exactly the same time (23:26), the SQL server suddenly shows a process list full of COMMIT entries with increasing Time values. After 30-40 seconds (sometimes longer) a wave gets processed and the site responds for a few seconds before it repeats. During this hiccup the database server load spikes while the front end is relaxing. I have looked at the slow queries but am not finding any locks or other unusual queries running at this time. I looked at iotop at the time of the halt and there is no activity from mysql. I also tried turning off query_cache and messed around with the MySQL configuration file without much change. Any ideas?
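
    A timestamp that exact usually points to a scheduled job. It is worth grepping every crontab on both VPSes for anything firing at or just before 23:26 (a sketch; cron writes the minute field first, so 23:26 appears as "26 23"):

        grep -rn "26 23" /etc/crontab /etc/cron.d/ /var/spool/cron/ 2>/dev/null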

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration where I have: nginx, php-fastcgi, mysql, and 1000 separate WordPress installs (with WP Super Cache), each corresponding to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget) which retrieves data from an API and posts it to the respective blog. This is all running on a virtual server with 1024 MB of RAM, 4 shared processors, etc. The server is not doing well, especially while the cron jobs are executing: nginx constantly throws 504 errors and the site lags significantly. 1) Am I crazy for having 1000 individual WP installs? Should I be using WPMU, and would it help significantly? (I have certain plugin restrictions that make me prefer separate installs, but I could switch if need be.) 2) Instead of having 1000 unique cron jobs, should I be calling, say, a bash script that then processes the 1000 HTTP requests I need? Could this be done one request after another instead of all at once? (See the sketch below.) 3) Do you have any other optimization suggestions? Should I be proxying to Apache instead of just using nginx, etc.? Any kind of advice would be appreciated. Thanks in advance.
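
    On question 2: a single cron entry driving a loop lets you cap concurrency, which 1000 independent jobs firing at the top of the hour cannot. A minimal sketch, assuming a blogs.txt with one subdomain per line and a hypothetical plugin endpoint:

        #!/bin/bash
        # fetch-all.sh - run hourly from one cron entry
        while read -r blog; do
            wget -q -O /dev/null "http://${blog}/wp-content/plugins/fetcher/run.php"   # hypothetical URL
            sleep 2   # stagger requests so the PHP workers aren't all busy at once
        done < blogs.txt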

  • Loading not-so-well-formed XML into XDocument (multiple DTD)

    - by Gart
    I have a problem handling data which is almost a well-formed XHTML document, except that it has multiple DTD declarations at the beginning: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> ... </head> <body> ... </body> </html> I need to load this data into an XDocument object using only the first DTD and ignoring the remaining declarations. It is not possible to completely ignore DTD processing because the document may contain unusual character entities like &acirc; or &euro;. The text is retrieved from an external source and I have no idea why it comes like this. Obviously my naive attempt to load this document fails with System.Xml.XmlException: Cannot have multiple DTDs: var xmlReaderSettings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Parse, XmlResolver = new XmlPreloadedResolver(), ConformanceLevel = ConformanceLevel.Document, }; using (var xmlReader = XmlReader.Create(stream, xmlReaderSettings)) { return XDocument.Load(xmlReader); } What would be the best way to handle this kind of data?

  • Crystal Reports: cannot determine the queries necessary to get data for this report

    - by phill
    I've recently come across the following error in one of my Crystal Reports following an accounting system update: Group #1: ? - A : This group section cannot be printed because its condition field is nonexistent or invalid. Format the section to choose another condition field. I've verified every database field being used to make sure it still exists, and checked the formulas for that section. No dice. So, hoping to fix the problem, I removed the section using the Section Expert and ran the same database checks. It then complained with the same error for Group #5, so I removed that as well. Now I get a new and unusual error when I attempt to run 'Show Query': "Cannot determine the queries necessary to get data for this report". I have tried logging off and on to the database and verifying the database; there are no complaints until I try Show Query. When I attempt to run the report, it throws the same error. Any ideas? Am I approaching this incorrectly? This is done in Crystal Reports 10. Note: this report is run with the SQL sa user to eliminate any permission issues.
