Search Results

Search found 18422 results on 737 pages for 'down town'.

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, but I've been lucky enough to just have it work. Unfortunately, I was trying to install Thread::Pool today, and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:

        Test Summary Report
        -------------------
        t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
          Non-zero exit status: 255
          Parse errors: Tests out of sequence.  Found (2) but expected (4)
                        Tests out of sequence.  Found (4) but expected (5)
                        Tests out of sequence.  Found (5) but expected (6)
                        Tests out of sequence.  Found (3) but expected (7)
                        Tests out of sequence.  Found (6) but expected (8)
        Displayed the first 5 of 86 TAP syntax errors.
        Re-run prove with the -p option to see them all.
        Files=3, Tests=258, 6 wallclock secs ( 0.07 usr 0.03 sys + 4.04 cusr 1.25 csys = 5.39 CPU)
        Result: FAIL
        Failed 1/3 test programs. 0/258 subtests failed.
        make: *** [test_dynamic] Error 255
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        /usr/bin/make test -- NOT OK
        //hint// to see the cpan-testers results for installing this module, try:
          reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        Running make install
          make test had returned bad status, won't install without force
        Failed during this command:
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz: make_test NO

    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
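
    For what it's worth, a minimal debugging sketch, assuming the cpan shell and its default build area (module and file names are taken from the output above):

        # see other testers' results, as the //hint// in the output suggests
        cpan> reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        # drop into the unpacked build directory of the failing dependency
        cpan> look Thread::Conveyor::Monitored
        # re-run the build and tests by hand for the full output
        perl Makefile.PL && make && make test
        # run the single failing test file verbosely
        prove -bv t/Conveyor-Monitored02.t
        # if the failures look harmless, force past the tests
        cpan> force install Thread::Pool

    The general pattern: reproduce the failure by hand in the build directory, read the verbose test output, and only then decide between fixing the cause and forcing the install.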

  • Massive Memory Leaks?

    - by Mads
    Hi, I seem to have huge memory leaks, which are confusing me. I'm running Fusion 3.1 / Windows 7 on Snow Leopard. It's a clean install with all upgrades applied. I've given Fusion 8 GB on a 14 GB machine, and I've installed VS2008 and Eclipse in Windows 7. Nothing unusual.

    Inside Task Manager in Windows 7, my memory footprint stays reasonable, at under 2 GB. But in OS X, Activity Monitor shows the footprint of vmware-vmx to be much larger. It starts at 2 GB, which seems fine, but whenever I'm actually doing anything in Windows, vmware-vmx's footprint grows at a few MB per second. After 20 minutes or so it's using ~10 GB and everything grinds to a halt. Throughout this, Task Manager still says I'm only using 2 GB, and whatever I do in Windows seems to increase vmware-vmx's footprint; even closing an application makes it go up.

    So is this par for the course in Fusion? I was previously running Parallels 3 / Vista under Leopard, and it worked fine. I'd assumed my new Fusion config would work better, but this makes it completely unusable. (And apparently I can't even ask tech support unless I buy a support package...) Any advice much appreciated. Thanks

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data

    Here is a picture, finally. The large strip in the center of the top band is the largest chunk; the grey areas elsewhere are the various clusters that go with it. On the right, the big long grey line is $LogFile (not paging), and it is 63 MB. Paging (500 MB) is the dark cyan chunk, next to the yellow MFT-reserved zone in the inner rings. The disk was defragged so these could be seen more easily. Not all clusters of this file type are tagged, but the idea is there. The disk uses 4 KB clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile; interesting, when they keep telling us it doesn't take much disk space.

    Wikipedia suggests that in previous NT systems "USN journaling" would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk in clusters tagged $Jrnl$, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of that?

    On a Windows XP NTFS system, I do not recall seeing this quantity of disk clusters with $Jrnl$ names, so I do not recall this being necessary in this quantity for the NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is is fine, if that information will help track down which parts of the system create and use it.

    The "Change Journals" documentation states: "Change journals are also needed to recover file system indexing." Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?
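
    For reference, a sketch of inspecting and removing the NTFS change journal with the built-in fsutil tool (run from an elevated prompt; deleting the journal can force consumers such as the indexing service to rescan the volume from scratch):

        fsutil usn queryjournal C:
        fsutil usn deletejournal /D C:

    Note that Windows, or any journal consumer, may simply recreate the journal later, so this is more an experiment than a permanent switch.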

  • Mysqld increases the CPU load, which drops after flush-tables

    - by mirage
    Please advise on this issue. The normal CPU load is 20-30% (us + sy). After restoring the database files from the slave server (same version), a periodic problem began: mysql starts loading the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. After a mysqladmin flush-tables, things return to normal for a few hours.

    This is a dedicated Linux server running MySQL: 2 x E5506, 24 GB RAM, database size 50 GB, handling 4000-5500 requests per second.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 33G (Tables: 1474)
        [--] Data in InnoDB tables: 1G (Tables: 4)
        [--] Data in MEMORY tables: 120K (Tables: 3)
        [--] Reads / Writes: 91% / 9%
        [--] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    my.cnf settings:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How can I solve this problem?

  • HP ProLiant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform at a high level in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way.

    I am in need of a good web and database server. The load and demand for a while will be fairly low, so I am not looking for (nor do I have the money for) a brand-new DL380 G7 series box at $6K. While searching around today, I found a company in ATL that buys servers off business leases and then strips them down for parts. They clean, check and test each part, and then custom "rebuild" the server based on whatever specs you request. The interesting thing is that they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:

        HP ProLiant DL380 G4
        Dual (2) Intel Xeon 3.6 GHz / 800 MHz / 1 MB cache processors
        8 GB PC3200R ECC memory
        6 x 73 GB U320 15K rpm SCSI drives
        Smart Array 6i card
        Dual power supplies

    Plus the usual CD-ROM, dual NIC, etc. All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps on the next model up, the G5 series: from $750 to around $2000 for a comparable server. I just do not have $4000 to buy two servers right now.

    So, back to my original question: if I load Windows 2008 R2 Server and IIS 7 on one of the machines, and Windows 2008 R2 Server and MS SQL 2008 R2 Server on the other, what kind of performance might I expect to see? The fact is, this series is now three versions behind the G7s, and it was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows 2008 R2 Server on a G4 with similar or lesser specs, I would love to hear what your performance is like.

  • Windows 7 restarts while being idle

    - by Ondrej Slinták
    My Windows 7 machine almost always restarts when I leave it idle for ~20-30 minutes. It happened randomly before, but lately, if I leave the computer, I can be sure it's going to restart after those 30 minutes. It never happens when I play games or work, though; just when it's idle.

    It's a fresh install of Windows 7 64-bit. I also had problems while installing it: it always crashed while finalizing the install, and I had to reinstall again. Eventually it installed on the 3rd or 4th try, after I deleted all of my partitions and added them again. I thought it might be a hardware problem, but temperatures seem to be okay, and I have no idea how to track down what might be causing it. Any ideas? I'm running Windows 7 64-bit on:

        Gigabyte EX58-UD4P
        Intel(R) Core(TM) i7 CPU 920 @ 2.67 GHz
        NVIDIA GeForce GTX 260
        6 GB of DDR3 1066 MHz RAM
        WDC WD1001FALS-00J7B0 1 TB SATA II

    I have a very bad feeling it might be something to do with the HDD and its compatibility with Windows 7, as I didn't have these problems during the year I ran Vista.

    Edit: I checked the Event Viewer critical errors from last night. The PC restarted first at 11:12pm, then at 3:06am, and since then every ~20 minutes until I came back to it. The error message is:

        The system has rebooted without cleanly shutting down first. This error could be
        caused if the system stopped responding, crashed, or lost power unexpectedly.
        Source: Kernel-Power

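
    For what it's worth, a sketch of the usual first diagnostic steps, using standard Windows tooling from an elevated prompt:

        :: stop the box from rebooting automatically on a bugcheck, so a blue
        :: screen stays visible instead of turning into a silent restart
        wmic recoveros set AutoReboot = False

        :: generate a power/driver diagnostic report (Windows 7)
        powercfg -energy

    If it is a bugcheck rather than a power event, any minidumps under C:\Windows\Minidump can then be opened in WinDbg to identify the failing driver.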

  • Setting up a Network Bridge on Linux VM (Windows 7 Host)

    - by GrandAdmiral
    I would like to use NetEm to simulate a low-bandwidth environment while testing an Internet-connected device. My plan is to set up a bridge in a Linux VM (Linux Mint 13) on a Windows 7 host, then use NetEm in the Linux VM to limit the bandwidth to an external device. Unfortunately, I'm having trouble setting up the bridge. I went with the following script:

        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up

    Then create the bridge and bring it up:

        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        dhclient br0
        ifconfig br0 up

    When I run that script, I see the following warning:

        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service smbd reload

        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the reload(8) utility, e.g. reload smbd

    The device connecting to the bridge is able to obtain an IP address, but it can only ping the IP address of the bridge (both are 10.2.32.xx). Then, after a few minutes, other parts of our network go down. I'm not sure why, but once I kill the bridge, the network is fine.

    Is it possible to set up a network bridge in a Linux VM? Do I need to do something else with the dhclient br0 part of the script? By the way, I'm using VirtualBox. The wired connection is eth0 and the wireless connection is eth1; the wired connection goes to the device and the wireless connection goes to the network. Both adapters are set up as bridged adapters with promiscuous mode set to "allow all".
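
    One detail worth checking, as a suggestion rather than a fix: the script asks for a DHCP lease on br0 before bringing the interface up. A commonly suggested ordering is:

        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ifconfig br0 up     # bring the bridge up first
        dhclient br0        # then request a lease on it

    Separately, many wireless drivers will not forward frames for foreign MAC addresses while in station mode, so bridging a wlan interface (eth1 here) often fails no matter how the script is ordered.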

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1 MB usually). What I want to get is:

    - two servers which have the fs mounted themselves and mirror the data
    - locking support (among reachable nodes)
    - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while; that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2.

    Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot, from what we can tell). We let it try to work itself out multiple times for 5-10 minutes, and it never progressed. I've never seen this happen before.

    The last thing displayed on the console is that syslogd was sent signal 15. Before we disabled snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). In Solaris days past, I'd have had a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statements in the /etc/*.d scripts. With SMF in the picture, I'm not really sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up, because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use.

    How would you debug this situation? An mdb ::ps of the savecore shows the following processes were running at hang time (afsd is the OpenAFS client, and that many instances are expected):

        > ::ps
        S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
        R      0      0      0      0      0 0x00000001 00000000018387c0 sched
        R    108      0      0      0      0 0x00020001 00000600110fe010 zpool-silmaril-p
        R      3      0      0      0      0 0x00020001 0000060010b29848 fsflush
        R      2      0      0      0      0 0x00020001 0000060010b2a468 pageout
        R      1      0      0      0      0 0x4a024000 0000060010b2b088 init
        R   1327      1   1327    329      0 0x4a024002 00000600176ab0c0 reboot
        R    747      1      7      7      0 0x42020001 0000060017f9d0e0 afsd
        R    749      1      7      7      0 0x42020001 00000600180104d0 afsd
        R    752      1      7      7      0 0x42020001 0000060017cb44b8 afsd
        R    754      1      7      7      0 0x42020001 0000060017fc8068 afsd
        R    756      1      7      7      0 0x42020001 0000060017fcb0e8 afsd
        R    760      1      7      7      0 0x42020001 00000600177f4048 afsd
        R    762      1      7      7      0 0x42020001 000006001800f8b0 afsd
        R    764      1      7      7      0 0x42020001 000006001800ec90 afsd
        R    378      1    378    378      0 0x42020000 0000060013aee480 inetd
        R      7      1      7      7      0 0x42020000 0000060010b28008 svc.startd
        R    329      7    329    329      0 0x4a024000 00000600110ff850 sh
        Z    317      7    317    317      0 0x4a014002 0000060013b3a490 sac
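
    As a starting point, a sketch of pulling more out of that crash dump with mdb, assuming savecore wrote the usual unix.N/vmcore.N pair under /var/crash/<hostname> and that the ::stacks dcmd is available on this kernel:

        cd /var/crash/myhost
        mdb -k unix.0 vmcore.0
        > ::ps                                             # process list, as above
        > ::stacks                                         # kernel thread stacks, grouped
        > 00000600110ff850::walk thread | ::findstack -v   # e.g. the stack of that sh

    On the SMF side, comparing `svcs -a` timestamps, or booting with `boot -m milestone=none` and raising milestones by hand, can help narrow down which service's stop method is hanging.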

  • pfSense, Active Directory, local domain

    - by Dalton Conley
    First things first: I have no idea what I'm doing. Certainly not afraid to admit that... but here is my network setup. I have two servers, one of which is a domain controller. Both are running Windows Server 2008 and they have replicated directories. Each server is at a different location, and each location's network has its own firewall; both firewalls are running pfSense. Recently one firewall went down, and my coworker reinstalled pfSense; everything seems set up correctly. Again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings match those records, but the records could be extremely old.

    Now, I have a domain name for my network; we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the servers, which is what makes me think it's something to do with the firewall.

    I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
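
    A first debugging step worth trying: \\mydomain.net is resolved through DNS (the domain's A records should point at the domain controllers), so compare what the clients' configured DNS server returns against what the DC itself returns. The DC address below is hypothetical:

        nslookup mydomain.net                  # via whatever DNS the client got from DHCP
        nslookup mydomain.net 192.168.1.10     # directly against the domain controller

    If the first query fails or returns the firewall's address while the second returns the DCs, the usual fix is to point DHCP clients at the DC for DNS, or to add a domain override for mydomain.net in pfSense's DNS forwarder.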

  • How do I secure SQL Server 2008 R2?

    - by Mark Tait
    I have both a dedicated server and a VPS (from Fasthosts); the web sites/applications I run on these access SQL Server instances stored on the same servers. Until now, I have logged into SQL Server on both the dedicated and the VPS server from SQL Server Management Studio. Then I noticed, in my server application logs, multiple attempts to log on to SQL Server using the 'sa' username but a failed password. So someone (or some bot) is trying hard to get in (repeatedly, every couple of hours, with approximately 20 attempts each time), and obviously I have to lock down remote access to SQL Server.

    What I have done is gone into Configuration Manager and, under SQL Server Network Configuration > Protocols for Sql2008 and also under SQL Native Client 10.0 Configuration > Client Protocols, disabled Named Pipes and TCP/IP (VIA is disabled by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser under SQL Server Services.

    Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm whether this is the correct way of stopping malicious logons to SQL Server? (I'm not a DBA or security expert, and there are hundreds of articles advising all different ways, but I was hoping for the experts here to confirm, or otherwise, what I've done.) Thank you, Mark
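
    As a supplement to closing the network protocols, the login itself can be hardened with standard T-SQL; a sketch (the replacement name below is hypothetical):

        -- disable the well-known 'sa' target, or rename it
        ALTER LOGIN sa DISABLE;
        ALTER LOGIN sa WITH NAME = [obscure_admin_name];

    Brute-force attempts overwhelmingly target the literal name 'sa', so disabling or renaming it cheaply blunts them even if a protocol is ever re-enabled.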

  • Asus K55VM USB 3.0 issue

    - by user2141481
    Good day, superusers! I own the above laptop, and I have found some unknown and unusual issues with its USB 3.0 ports. I hadn't noticed anything strange until now. I got a new Toshiba USB 3.0 external HDD, and when I try to copy a larger amount of data from my disk to the external drive, the OS (Windows 7) randomly starts ignoring it. It doesn't shut the drive down; it just stops responding, though the light on the drive is still lit, and I get an error that the files cannot be copied. I have reinstalled Windows 7 and installed all drivers (including the Intel chipset drivers, of course), and the issue is still present. It acts normally when copying small amounts of data.

    Also, I heard that some Intel chipsets have an issue with USB: something about the connectors not transferring power when the USB device enters some kind of "low power mode", causing the device to stop responding until you plug it out and in again. But the thing is, my Intel Chief River HM76 chipset is not on the list of affected hardware (not ENTIRELY sure, though). If anyone has any idea what the problem might be, I'd be grateful.

    Edit: The HDD works perfectly fine, even for large amounts of data, if plugged into a USB 2.0 port!

  • Is there a way to exclude a specific drive vdi from "snapshots" in VirtualBox?

    - by Graza
    ...or is there another space-efficient way of dealing with the page/swap file of the guest OS?

    I've realised that one of the things which quite possibly "bloats" the snapshot/diff VDIs when a snapshot is taken is the guest operating system's pagefile. For example, say I have a 2 GB swap file in a Windows guest OS, and over the course of a few weeks the usage of the swap file has gone over 1 GB a couple of times. When I next create a snapshot, it seems likely that I'd be almost guaranteed around 1 GB taken up in the new differencing disk just because of changes in the swap file.

    Obviously (provided I never did "live" snapshots on running or paused machines, and only ever did them when the machine was shut down), I would not need any of the information in the swap file to be saved, so this would simply be a waste of 1 GB.

    I'm wondering if there's a way to attach a VDI to a VM and flag it as "exclude from snapshots", which would mean I could put the swap file on a different VDI that would never be included in a snapshot. Or if anyone has any other suggestions. Or an explanation of why it might not be an issue. I could obviously delete and recreate a swap-drive VDI every time I did a snapshot to achieve the same effect, but this is a little more effort than simply clicking "create snapshot"...
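
    For what it's worth, VirtualBox does have a disk mode with this behavior: a medium attached as "write-through" is seen by the guest but excluded from snapshots. A sketch (the VM name, controller name, and port are hypothetical):

        VBoxManage storageattach "MyVM" --storagectl "SATA Controller" \
            --port 1 --device 0 --type hdd --medium swap.vdi --mtype writethrough

    Snapshots then ignore swap.vdi entirely, which suits a guest pagefile; the caveat is that restoring a snapshot will not roll that disk back either.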

  • I want my logs sent to me by mail with logrotate

    - by lericson
    Not strictly a question about programming as such; more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they be highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate machinery to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it runs the command for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
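
    One pointer worth trying: logrotate's sharedscripts directive makes prerotate/postrotate run once for the whole pattern instead of once per matched log file, which sounds like exactly the complaint here. A sketch:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            sharedscripts
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    With sharedscripts, the mail goes out once per rotation run rather than once per log file.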

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:

    1. High availability
    2. Load balancing

    First configuration:

    1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as the VIP, with 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:

    1. I found an active-active configuration for keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are accessible (HA). But all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they're not acceptable in our case.

    Question: is there any way to have one virtual IP assigned to both servers, so that both handle part of the workload (like what we do in web server load balancing)? Using either keepalived or some other tool? Thanks in advance.
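
    For reference, a minimal sketch of the two-instance active-active keepalived configuration described above (the instance names and interface are hypothetical; server #2 mirrors this with the MASTER/BACKUP roles and priorities swapped):

        vrrp_instance VI_1 {        # VIP 10.17.243.10, normally on server #1
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }

        vrrp_instance VI_2 {        # VIP 10.17.243.20, normally on server #2
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }

    A single VIP actively served by both nodes at once is beyond what plain VRRP does; that usually means a real load-balancing layer behind the VIP, for example keepalived's own virtual_server (IPVS) support, rather than DNS tricks.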

  • Why is Apache htdigest authentication failing in IE10 on Windows 8?

    - by Kevin Fodness
    One of our developers reported that for the past week or two, the htdigest authentication we have set up on our test sites in Apache has not been working in IE10 on Windows 8. It's fine in IE10 on Windows 7, and it's fine in Chrome on Windows 8. The specific behavior is: navigate to a site with htdigest authentication enabled, the username and password form pops up, enter the correct username and password, and the username and password box pops up again.

    Potentially useful information:

    - All patches applied on the Windows 8 box
    - No additional software on the Windows 8 box other than Outlook 2013 and a browser test suite (Chrome, Firefox, Opera, Chrome Canary, Opera Next)
    - Win8 running in a virtual machine on Xen
    - The same behavior can be replicated on Win8/IE10 at Browserstack.com
    - Server running Ubuntu 10.10 with Apache 2.2.16

    This feels like a patch was applied to the Windows box that broke digest authentication for IE10 on Win8 (the box is configured for automatic updates). However, without knowing a specific date, I can't necessarily nail this down. Has anyone else experienced this problem?

    EDIT: This problem only happens in the "Metro" interface, not when running IE10 in desktop mode. As of a few weeks ago, it worked fine even in the "Metro" interface.

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the liveusb-creator found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image, with the persistent storage space set to 4096 MB. (The USB stick I have available has an 8 GB capacity, so there should be plenty of space.)

    Fedora appears to boot correctly; however, it seems that the persistent storage is not working. To verify this, I opened a terminal, did su -, then ran yum update yum. As expected, I was informed that a new version was available (the live CD contains version 3.2.29-4; at the time of typing, 3.2.29-6 is the current version). After installing, I verified that the new version was in place by typing yum --version. I then shut the system down with shutdown now.

    After the system had shut down, I rebooted and returned to a terminal. On typing yum --version, I was informed that the version was 3.2.29-4 (i.e. the original version). Why might the persistent storage not be working? Is there anything I can do to fix it?

  • Numbering grouped data in Excel

    - by Jeff
    I have an Excel spreadsheet (2010) with data similar to this:

        Dogs  Brown  Nice
        Dogs  White  Nice
        Dogs  White  Moody
        Cats  Black  Nice
        Cats  Black  Mean
        Cats  White  Nice
        Cats  White  Mean

    I want to group these animals, but I only care about species and color; I don't care about disposition. I want to assign group numbers to the set as shown here:

        1  Dogs  Brown  Nice
        2  Dogs  White  Nice
        2  Dogs  White  Moody
        3  Cats  Black  Nice
        3  Cats  Black  Mean
        4  Cats  White  Nice
        4  Cats  White  Mean

    I was able to select the species and color columns, choose 'Advanced' from the Data tab, and check 'Unique records only'. This collapsed the data so that I could number the visible rows. Then, when I cleared the filter, I could easily fill the blank areas under the numbers with the number above. The problem is that my real data has far too many rows for this to be practical. Also, the trick of entering 1 in the first cell, 2 in the cell below, selecting both, and dragging the corner down to auto-number doesn't seem to work when you're viewing filtered rows. Any way to do this?
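
    A formula-based sketch that avoids filtering entirely, assuming a new column A holds the group numbers, species is in column B, color in column C, and the data starts in row 1 already sorted so identical species/color pairs are adjacent:

        A1: 1
        A2: =IF(AND(B2=B1,C2=C1),A1,A1+1)

    Fill A2 down the whole column; the number increments exactly when the species/color pair changes from the row above, and disposition in column D is ignored.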

  • Intermittent 5.7.1 email bounce to Exchange 2007

    - by Steve Kennaird
    My knowledge of Exchange isn't particularly great, so excuse me if some of the terminology I use isn't quite right. I'm primarily a web developer who's now responsible for a small business's network. We have a server running SBS 2008 and Exchange 2007. Generally, everything works well: emails can be sent to both internal and external domains without issue. We've only got ~20 users, and Exchange is sitting on a single server.

    I use SendGrid to send emails generated by our externally hosted website to users in the office. Primarily, order notifications are sent to [email protected]. Without any pattern, and less than once per week on average, an email to [email protected] will bounce back, and the logs on SendGrid detail the following error:

        550 5.7.1 Unable to relay for [email protected]

    Either side of that failed delivery attempt, I'm able to send and receive emails to/from [email protected].

    Having done some research, incorrect reverse DNS seems like it could be a cause of intermittent bounces like this. Using nslookup, I have found that the reverse DNS doesn't map like it should, e.g.:

        Office IP:   135.325.351.123 (made up IP, for example only)
        Domain:      office.somedomain.com (made up, for example only)
        Reverse DNS: somedomain.gotadsl.co.uk (half made up)

    Could this be a cause? I'm sure that the IP address and the domain should map to each other. Also, it has been suggested to me that as the Exchange server is on a network with an ADSL connection, that could be a potential cause, as the connection "goes up and down all day long". I don't have an opinion on this, as I don't have enough knowledge of Exchange/ADSL to form a reliable one. Can anyone offer any insight as to whether one or both are actually potential causes, or if there is another possible cause?

  • Webserver maxes CPU when Apache and MySQL are run together

    - by Tim
    This website had been running fine without issues; recently it went down. After some investigation, it looks like the combination of MySQL and Apache brings the box to its knees. Apache runs fine serving static web pages, and MySQL runs fine when the website isn't enabled. As soon as the website is enabled with SQL running, the CPU on the box stays at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png

    I've checked the SQL database for errors and tried tuning nearly every parameter in the Apache and SQL conf files for performance. The server is a Red Hat-based box running the latest software packages. Any help/suggestions are welcome.

    Doing an strace on a high-CPU Apache process, I see the following:

        read(14, "", 8192)                      = 0
        close(14)                               = 0
        socket(PF_FILE, SOCK_STREAM, 0)         = 14
        fcntl64(14, F_SETFL, O_RDONLY)          = 0
        fcntl64(14, F_GETFL)                    = 0x2 (flags O_RDWR)
        connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4)  = -1 EOPNOTSUPP (Operation not supported)
        setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

    Here is what I see from a MySQL process:

        futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
        futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
        gettimeofday({1301465020, 141613}, NULL) = 0
        clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
        futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
        futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
        exit_group(0) = ?
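
    Since the straces show Apache talking to MySQL and MySQL waiting on locks, a reasonable next step is to catch the offending queries while the CPU is pegged, using standard MySQL tooling (the my.cnf lines below use the 5.0-era variable names):

        mysqladmin -u root -p processlist    # what is running or locked right now

        # in my.cnf, then restart mysqld:
        log_slow_queries = /var/log/mysql/slow.log
        long_query_time  = 2

    Running EXPLAIN on whatever the slow log catches will often point at a missing index causing full table scans, which fits "fine until the dynamic site is enabled".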

  • How to publish internal data to the internet, as simply as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding.

    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important to us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. Best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

  • Access Denied on Some Subfolders/Files Within a Share

    - by Tim
    First thing this morning, I found that users on one of our share drives were all getting "access denied". I tried the same drive and also received "access denied" as a Domain Admin. Previous to this, all specified users and admins could get access.

    - I checked share permissions
    - I checked NTFS permissions
    - I temporarily made both types of permissions read/write for "Everyone" (this worked for one user)

    It turns out that this is occurring for only some files/folders. When I try to manually alter the sharing of that single share, it can't be shared: access denied. xcacls also gets access denied. I rebooted the server (not a big deal; this is a smallish company). Does anybody have any insight? My google-fu is coming up blank. Thanks.

    EDIT: More info. I just ran AccessEnum. There were a lot of "access denied" entries, but I noticed the pattern that all of them had a parent with an owner of "???". When I look at the properties, the "Unable to display owner" message is in the box, and I can only make my own user account the owner. I can then share the individual file/folder, but it doesn't seem to propagate down to subfolders/files.
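
    Given the unreadable "???" owners, a repair sketch using Server 2008-era built-ins is to retake ownership and reset the ACLs recursively (the path is hypothetical; try it on a single affected folder first):

        takeown /f D:\ShareRoot\BrokenFolder /r /d y
        icacls D:\ShareRoot\BrokenFolder /reset /t /c

    takeown reassigns ownership down the tree, and icacls /reset re-enables inheritance so the share's normal NTFS permissions propagate again. On a 2003-era server without those tools, Microsoft's subinacl utility can set owners recursively instead.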

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right Stack Exchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' from appearing in GitLab. The problem looks like:

        $ git push
        #...
        error: cannot run hooks/post-receive: No such file or directory

    Hook exists. The post-receive hook/symlink exists and is executable:

        -rwxr-xr-x 1 git git 470 Oct  3  2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct  3  2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab. The gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        $ sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it. GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
          hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment. The env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
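
    One lead worth ruling out: "No such file or directory" for a script that plainly exists is the classic symptom of a shebang line pointing at a missing interpreter, because the kernel reports the error against the script rather than the interpreter. A quick check:

        head -1 /home/git/.gitolite/hooks/common/post-receive
        # then confirm that interpreter is findable in the environment git uses:
        sudo su - git -c 'command -v ruby'    # or whatever the shebang names

    If the shebang is something like #!/usr/bin/env ruby and the git user's PATH during a push (not an interactive login) can't find ruby, the hook fails with exactly this message.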

  • Can I use a Windows Server 2003 Domain Controller but my home router for DHCP?

    - by NetworkingWannabie
    Hi all. It's probably easiest to start with a description of my current setup, which works (oh, and this is a home setup, not an office or anything):

    - I have an ADSL modem with a static IP address (192.168.128.1), and its DHCP capability is disabled.
    - I have a permanently powered-up Windows Server 2003 machine with a fixed IP (192.168.128.2) which provides my domain controller, DHCP, and DNS.
    - The default gateway for everything is the ADSL modem.
    - Everything is set up to use the WS2003 machine as the primary DNS, with the ADSL modem as secondary DNS just in case the server goes down ("everything" includes the server itself).
    - Lastly, just in case it's relevant, I have my DHCP leases set to infinite (or whatever the right term is).

    Everything is pretty hunky-dory, except for the fact that my server is ALWAYS on, and it isn't always used, so I'm burning juice that I don't need to. My server draws around 120 W, which isn't immense, but it isn't irrelevant either, so I'd like to put it into a standby state when it isn't being used (the more standby the better) and then have the clients wake it up.

    Am I correct in assuming that this won't work at the moment? A given client would need an IP address to wake the machine up, but it needs the machine to be awake to get an IP. Catch-22?

    Assuming I'm correct, can I move to using my router (which is always on) for DHCP? What impact will this have on the DC and DNS? Alternatively, does anyone have a better way to achieve this? Can I get the server to wake up when it sees clients looking for a DHCP server, etc.? Wow, that came out longer than expected! Thanks for your help.
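
    One observation on the catch-22: wake-on-LAN magic packets are addressed to the NIC's MAC address and sent as broadcast, not to an IP the server currently holds, so a sleeping server needs no DHCP lease to be woken (WOL does need to be enabled in the BIOS and NIC driver). A sketch from a Linux/Unix client, with a hypothetical MAC:

        wakeonlan 00:1A:2B:3C:4D:5E
        # or: etherwake -i eth0 00:1A:2B:3C:4D:5E

    Free Windows equivalents exist as well. Moving DHCP to the router is a separate question: DHCP can live on the router while the Windows box remains the DC and DNS server, as long as the router's DHCP hands out the server's address as the primary DNS.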

  • Can current backflow from a powered hub's adapter cause PC damage?

    - by SuperUserMan
    Keeping this short: can current flow from a powered USB hub's power adapter (lying 10 meters away) back into the computer via the USB port and cause damage to computer components like the motherboard? What should my concerns be?

    Using a 2 A, 5 V power adapter to power a 10 m long active-repeater USB extension cable with a 4-port hub, plugged into the PC's front port, causes:

    - the PC chassis fan to keep running (though slower than its regular speed)
    - the front chassis HDD and power LEDs to stay lit (though a bit dim)
    - possibly other things I can't detect or see at chip level, on the motherboard??

    All this even after the PC is shut down (a bit scary).

    More detail (in case you still want to read): to run four high-power (450 mA) wifi adapters far away from the PC, I bought an active-repeater USB extension cable with a 4-port hub and a power port at the far end: http://www.ebay.com/itm/33FT-USB-2-0-Male-to-Female-Extension-Cable-Hub-Splitter-Adapter-with-4-USB-Port-/390846115254

    I then added a locally bought 2 A, 240 V AC to 5 V DC power adapter and plugged it into the USB hub, which is part of (and situated at the far end of) the 10-meter active-repeater extension cable. All four wifi adapters run fine (or appear to) with this setup, but the running chassis fan and dimly lit power and HDD LEDs, even when the PC is switched off, are a bit scary and surely mean 5 V and some current are flowing all through that 10-meter extension cable into my USB port and powering things.

    Can this cause damage, and what should my concerns be? Of course, I can't switch off the power adapter (lying 10 meters away from the PC) every time I switch off my PC to prevent this.
