Search Results

Search found 28682 results on 1148 pages for 'drop down menu'.

Page 628/1148

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS …

    Question: Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes?

      • 640 GB
      • 2 TB

    Thoughts: A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the most commonly used files (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?)

    Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate.

    Background: On a 640 GB StoreJet Transcend (product ID 0x2329) I probably went beyond an advisable threshold in the past. Currently the largest file is around 17 GB, and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses; those are bundles of 8 MB band files.) Without HFS Plus, the thresholds of twenty, ten and five percent that I associate with the Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, in chronological order:

      • ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)
      • Space Maps (Jeff Bonwick's Blog) (2007-09-13)
      • Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11), which explains: "… So to solve this problem, what went into the 2010.Q1 software release is multifold. The most important thing is: we increased the threshold at which we switch from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5 GB and 4% is still 200 MB, plenty of space, and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until an allocation failed, we decided to stop giving the primary slab this preferential treatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spreads to more slabs, where each one will have a larger amount of free space, leading to more write aggregation. Finally, we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce the downtime associated with that operation. The conjunction of all these changes led to 50% improved OLTP and 70% reduced variability from run to run …"
      • OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)
      • Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18), where commentary includes: "… OpenSolaris has changed this in onnv revision 11146 …"
      • [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)
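
    A quick way to keep an eye on how close a pool is to any such threshold is to ask ZFS itself. A minimal sketch; the pool name "tank" is a placeholder, and while ZEVO should accept these standard commands, I haven't confirmed every column name there:

        # Overall pool size, allocated space and capacity percentage
        zpool list tank

        # Per-dataset breakdown of used vs. available space
        zfs list -r -o name,used,avail,refer tank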


  • Arch Linux drops me on my school network.

    - by Kravlin
    I'm running a Lenovo X61 which I carry around my college for getting on the internet at various points in the day. The network has always been finicky, but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd and log in using vpnc to their system. Sometimes I'll stay connected for hours, but most of the time my network traffic will drop to zero within 30 seconds and I'll be unable to do anything. My computer still believes it's connected; however, to try again I need to bring my wireless interface down, bring it back up and try again.

    It's gotten so bad that I've got a window on my computer pinging Yahoo or Google constantly in order to know if I'm still able to get online. I know other people who have used Arch Linux that don't have the same problems, as well as people who use Ubuntu who haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary, and I don't know where else to look for errors or other things to try.
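
    Since the failure mode is "link looks up but traffic is dead", one stopgap is to automate the down/up cycle the asker is already doing by hand. A minimal watchdog sketch, assuming the interface is wlan0 and that dhcpcd/vpnc are invoked the same way as in the question:

        #!/bin/bash
        # Bounce the wireless interface whenever pings stop getting through.
        while true; do
            if ! ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1; then
                ip link set wlan0 down
                sleep 2
                ip link set wlan0 up
                dhcpcd wlan0    # re-acquire a lease; re-run vpnc here if needed
            fi
            sleep 30
        done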


  • Apache log rotation: logrotate vs rotatelogs vs chronolog

    - by Enrico
    I have been researching log rotation for my server, which hosts ~5 fairly high-traffic sites. From what I can tell, my options are to use logrotate or to use piped logging with either rotatelogs or chronolog.

    logrotate requires a restart of Apache, and both SIGHUP and SIGUSR1 restarts are less than ideal on high-traffic sites, because either you drop a bunch of connections or you need to delay compressing the old log until all child processes have died naturally. Also, downtime can be quite significant if compression is enabled. Would using logrotate (without compression and with graceful restart) and compressing old logs after the fact be the best way to minimize downtime?

    chronolog and rotatelogs sound promising, but are not well documented. I couldn't find examples of using either in combination with vhost-specific logs. The chronolog website says, "when the expanded filename changes, the current file is closed and a new one opened". Is this global? Or is that per AccessLog, CustomLog or ErrorLog directive? Is there a significant difference between chronolog and rotatelogs?
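
    For reference, piped logging with rotatelogs is configured per CustomLog/ErrorLog directive, so each vhost can rotate independently with no Apache restart at all. A minimal sketch (paths and the 86400-second/daily interval are illustrative):

        <VirtualHost *:80>
            ServerName example.com
            # Each directive gets its own rotatelogs pipe; files roll daily.
            CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/example.com-access.%Y-%m-%d.log 86400" combined
            ErrorLog  "|/usr/sbin/rotatelogs /var/log/apache2/example.com-error.%Y-%m-%d.log 86400"
        </VirtualHost>

    Compression can then be handled out of band (e.g. a cron job gzipping yesterday's files), which avoids the restart/compression downtime entirely.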


  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because:

      • Blazing insert performance
      • Document oriented
      • Ability to let the engine drop inserts when needed for performance

    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80–150 GB of raw data (we currently run on a system with 16 GB RAM). Sooooo, for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM.

    Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

    Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!
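
    To quantify how close the working set is to the RAM ceiling, the index size is reported by the server itself. A minimal check run from the shell (the database name "logs" is a placeholder):

        # Data and index sizes in MB for one database
        mongo logs --eval 'printjson(db.stats(1024 * 1024))'

    The indexSize field in the output is the number that needs to stay within physical RAM for insert-heavy workloads to keep their speed.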


  • Make Thunderbird use and sync with Gmail Spam facility

    - by Senthil
    I am using Thunderbird 3.1.7 with Gmail. When I mark a message as spam in Thunderbird, I want it to go to Gmail's spam folder, but that is not happening: the mail is only marked as spam and stays in Gmail's inbox. How can I set up Thunderbird so that when I mark a message as spam in Thunderbird, it goes to the spam folder in both Thunderbird and Gmail?

    FYI: I have disabled Thunderbird's own adaptive spam controls so that they don't interfere. I am perfectly fine with Gmail's spam facility and don't want Thunderbird to add another layer of spam filtering. Still, Thunderbird doesn't send emails to Gmail's spam.

    Update: I figured out that dragging and dropping the email to the spam folder in Thunderbird does the job. But is it the same as marking the email as spam in Gmail's interface? I.e., Gmail will do stuff when you mark an email as spam so that it can filter out similar messages in the future. Does that happen when I do this drag-and-drop thing too? Are they the same?


  • Mediawiki extension error

    - by vinylguitar
    I'm running the latest version of MediaWiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki: http://www.mediawiki.org/wiki/Extension:MsUpload

    It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it, when I try to edit an article I get the following error:

        Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65

    Also, here is what I posted in the LocalSettings.php file (I put it in at the end of LocalSettings.php, if it makes a difference):

        # Start --------------------------------------- MsUpload
        $wgMSU_ShowAutoKat = false;     #autocategorisation
        $wgMSU_CheckedAutoKat = false;  #checkbox for autocategorisation checked
        $wgMSU_debug = false;           #debug mode
        $wgMSU_ImgParams = '400px';     #default max-size for inserted image
        $wgMSU_UseDragDrop = true;      #show drag&drop area
        require_once "$IP/extensions/MsUpload/msupload.php";
        # End ----------------------------------------- MsUpload

        require_once "$IP/extensions/msupload/msupload.php";

    At line 65 in the LocalSettings.php file there is the following:

        line 64  ## Database settings
        line 65  $wgDBtype = "mysql";
        line 66  $wgDBserver = "localhost";
        line 67  $wgDBname = "mediawiki";
        line 68  $wgDBuser = "root";
        line 69  $wgDBpassword = "";

    Any idea what I'm doing wrong?


  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP, which I have been told good things about, has data limits (which, when breached, will drop your speed to dial-up speed) but multiple memberships which, apart from the cheapest membership, have the same data limits (the cheapest has a 10 GB data limit). In their fine print, they say that each different membership has different port speeds. One particular part jumps out at me:

        These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc.

    I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U), games from Steam (if I find any good ones other than this one JRPG), and development resources from free sites like Deposit Files and Mediafire. Since one membership with a 1000 GB data limit is $145 with a port speed of 12 Mbps/1 Mbps (cheapest), while another with the same limit is $190 with a port speed of 100 Mbps/40 Mbps (expensive), I am wondering how I can tell what the speed coming from a site is, since I don't want to be wasting money on speed that makes no difference (unlike memory, which I'd rather have to spare).

    NOTE: the speeds are for a fibre-optic network; where my new place is, I can only connect via fixed wireless, which I may not be able to get with this ISP, but if I can get this network then good.

    NOTE 2: most of the resources I get from Deposit Files are about 200 MB or less; if a resource pack is greater, then it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150 MB.

    NOTE 3: one update patch for a PS3 game is close to 4 GB (Disgaea 4), which I need in order to get access to the DLC, and on the weekend I downloaded 5 GB for the Final Fantasy XIV open beta for the PS3, which took almost 5 hours.
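
    One way to measure what a given server actually delivers, independent of the advertised port speed, is to time a real download. A minimal sketch using curl (the URL is a placeholder; point it at a large file on the site in question):

        # Download to nowhere and report the average transfer rate in bytes/sec
        curl -o /dev/null -s -w 'average speed: %{speed_download} bytes/sec\n' \
            http://example.com/large-file.zip

    If the reported rate tops out well below 12 Mbps (roughly 1.5 MB/s) from the servers that matter, the more expensive 100 Mbps plan would make no difference for those downloads.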


  • fail2ban Error Gentoo

    - by Mark Davidson
    Hi all. I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban I get these lines in the error log:

        2009-11-13 18:02:01,290 fail2ban.jail   : INFO   Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100

    If I try to force a ban, these errors show up in the log and the host is not banned:

        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR  Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment

    My versions are as follows:

        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2

    If anyone could shed some light on these errors, that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP if I do:

        iptables -I INPUT -s 25.55.55.55 -j DROP

    so that makes me think it's something a bit more unusual. Thanks a lot in advance.
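
    Since fail2ban only reports the exit status (100) and not the underlying iptables error, a useful diagnostic is to run the failing action commands by hand and read the real message. A minimal sketch, mirroring the commands from the log:

        # Run exactly what the ssh-iptables action runs at startup
        iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH
        echo "exit status: $?"

    Whichever command fails will print the kernel's own complaint (for example, a missing match or target module on the Xen kernel), which narrows things down far faster than the fail2ban log alone.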


  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted.

    Three questions:

      1. What is the 90-second grace period? What's it there for?
      2. How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node?
      3. Is it actually possible to migrate the NFS server without having large file uploads drop?
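
    For what it's worth, the grace period is tied to the NFSv4 lease time, which on Linux is exposed through the nfsd filesystem. A hedged sketch for shortening it; this assumes /proc/fs/nfsd is mounted and that nfsd is stopped while the value is changed, and whether a shorter grace period is safe depends on how many clients must reclaim locks after failover:

        # Show the current lease time in seconds (the grace period is derived from it)
        cat /proc/fs/nfsd/nfsv4leasetime

        # Set a 10-second lease before starting nfsd, shrinking the grace window
        echo 10 > /proc/fs/nfsd/nfsv4leasetime

    This only shortens the window; it does not by itself make the failover transparent to clients.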


  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (I'm unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
        tcp        0      0 *:ssh                       *:*                         LISTEN      1894/sshd
        tcp        0      0 *:6007                      *:*                         LISTEN      -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689 ESTABLISHED 2981/0
        udp        0      0 *:bootpc                    *:*                                     1817/dhclient
        udp        0      0 *:bootpc                    *:*                                     1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path
        unix  2      [ ]         DGRAM                    871    668/udevd           @/org/kernel/udev/udevd
        unix  2      [ ACC ]     STREAM     LISTENING     5385   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  6      [ ]         DGRAM                    5353   1867/rsyslogd       /dev/log
        unix  2      [ ]         DGRAM                    11861  2981/0
        unix  2      [ ]         DGRAM                    5461   1974/crond
        unix  2      [ ]         DGRAM                    5451   1904/console-kit-da
        unix  3      [ ]         STREAM     CONNECTED     5438   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  3      [ ]         STREAM     CONNECTED     5437   1904/console-kit-da
        unix  3      [ ]         STREAM     CONNECTED     5396   1880/dbus-daemon
        unix  3      [ ]         STREAM     CONNECTED     5395   1880/dbus-daemon
        unix  2      [ ]         DGRAM                    5361   1871/rklogd

        # lsof -i
        COMMAND   PID USER  FD  TYPE DEVICE SIZE NODE NAME
        dhclient 1463 root   3u IPv4   4704      UDP *:bootpc
        dhclient 1817 root   4u IPv4   5173      UDP *:bootpc
        sshd     1894 root   3u IPv4   5414      TCP *:ssh (LISTEN)
        sshd     2981 root   3u IPv4  11825      TCP ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
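
    When netstat shows "-" in the PID column, the usual trick is to map the socket's inode back to a process through /proc. A minimal sketch (run as root; the inode number comes from the extended netstat output for port 6007):

        # Show the inode of the listening socket
        netstat -ane | grep ':6007'

        # Search every process's file descriptors for that socket inode
        ls -l /proc/[0-9]*/fd/* 2>/dev/null | grep 'socket:\[INODE\]'

    If nothing at all holds the inode, the socket is owned by the kernel rather than any process, and no userland kill will release it; in that case, starting Xvfb on a different display number (e.g. :8, which uses port 6008) sidesteps the stuck port entirely.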


  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a University network; I have an account there). Then I get a window with the following fields:

        Wireless Security:   [WPA & WPA2 Enterprise]
        Authentication:      [Tunneled TLS]
        Anonymous Identity:  []
        CA Certificate:      [(None)]
        Inner Authentication: [some letters]
        User Name:           []
        Password:            []

    I put in my user name and password, do not change the default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem?

    I think it is important to note that our system administrator tried to find some files (which are probably needed to be used as the "CA Certificate"). He said that he does not know where this file is located on Ubuntu (he supports only Windows). So probably this is the direction I need to go: I need to find this file. But maybe I am wrong; maybe something else needs to be done. Could you please help me with that?


  • Windows 2003 DNS or IIS6 Problem?

    - by Mario
    Weird DNS problem... We have an intranet located internally on a Windows 2003 / IIS6 server, with DNS handled internally on another Windows 2003 server. The intranet, amongst other functions, hosts an ecommerce store I wrote that sells Nike apparel embroidered with our company logo. Up until recently, it would send an email to payroll and the cost would be deducted from the employee's paycheck. Let's say this store is located at http://mydomain.com (only available internally).

    Now we've been told by the accountants that we can no longer auto-deduct from payroll, and the employee needs to pay with a credit card or cash. So I went to thawte.com and ordered an SSL cert to be on the safe side (even though the CC gateway is secure), and they told me I need to drop the .com from the domain name. Not wanting to mess with a system that's perfectly functional, I created another DNS entry that just points to mydomain (no .com) and left the old one in there, so they would go to http://mydomain.

    On my Mac (OS X 10.6) I can hit either one just fine. On Windows XP / Windows XP Embedded or Windows 7 (the vast majority of the PCs on our network):

      • http://mydomain – returns nothing
      • http://mydomain.com – still works
      • https://mydomain.com – works but says the cert is invalid (as it should; it was issued to mydomain, not mydomain.com)

    My question is: why does it work on my Mac and not on a Windows PC (I get DHCP and DNS just like any other PC on the network), and will removing the .com entry from the DNS server resolve this? I've done all the usual attempts: ipconfig /flushdns, ipconfig /renew and release, even going so far as to stop and restart the DNS Client service on my Windows 7 box, rebooting and shutting down, and adding a regedit entry something along the lines of SecureResponses and rebooting. Nothing works. I think it's the .com and the non-.com entries conflicting in DNS, but I'm not sure; and why not on OS X? We're closed on Sunday and I'm going to remote in and see what happens if I remove the .com from DNS, but any other ideas?

    -Mario
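
    A quick way to see whether this is a DNS answer problem or a Windows name-resolution problem is to compare what the resolver returns on a failing PC. A minimal sketch (run in cmd.exe on one of the XP/7 machines; "mydomain" as in the question):

        rem Ask the DNS server directly for the single-label name
        nslookup mydomain

        rem Check which DNS suffixes Windows appends to single-label names
        ipconfig /all | findstr /i "Suffix"

    If nslookup resolves the name but the browser doesn't, the issue is how Windows treats single-label hostnames (suffix search lists, and on XP-era systems possibly WINS/NetBIOS) rather than the DNS zone itself.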


  • Persistent static route stops working after VPN drops and reconnects

    - by user76157
    I've got a VPN between two networks, one home and one office (A and B). Their subnets are: (A) 192.168.1.0 and (B) 192.168.0.0.

    The two networks have identical ADSL routers. Unfortunately these can only do dial-out VPN, so I've got a Windows 2008 server on Network B acting as a VPN server (ServerB). Network A's router (RouterA) passes through Network B's router and connects via PPTP to ServerB. RouterA is assigned the static IP 192.168.0.40 on Network B. There's a persistent static route on ServerB telling it to use .40 for all requests to Network A's subnet, 192.168.1.0:

        route -p add 192.168.1.0 mask 255.255.255.0 192.168.0.40

    This enables ServerB to ping all machines on A (and those machines to ping ServerB).

    The VPN connection occasionally drops (I'm not sure why; it's set to remain always on and seems to drop randomly). This wouldn't be too much of a problem, as it reconnects automatically and quickly, except that when it does reconnect, the static route on ServerB no longer works. route print (on ServerB) shows that the persistent static route still exists. However, a tracert to a machine on Network A doesn't use the static route; it tries instead to use ServerB's default gateway (which is RouterB), and fails to find the machine. Deleting and re-adding the static route fixes the problem, and a tracert then uses the static route.

    At the moment, a batch file to delete and re-add the static route is scheduled to run every day. But this is clearly far from an ideal solution! I hope that's not too confusing. Any help would be very much appreciated.
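
    For anyone in the same position, the workaround batch file is just a pair of route commands; a minimal sketch with the same addresses as above (scheduling it on the RAS reconnect event, rather than daily, would narrow the broken window considerably):

        @echo off
        rem Re-install the route to Network A after the PPTP tunnel re-establishes
        route delete 192.168.1.0
        route -p add 192.168.1.0 mask 255.255.255.0 192.168.0.40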


  • Having problems with high CPU usage and apparent memory leak of Exim

    - by Dancrumb
    I'm having problems with my server and am hoping you can help. The culprit appears to be exim. The CPU usage is consistently high and the memory usage trends up and up and up for no apparent reason (this is not a heavily used server). To demonstrate the issue, I ran the following:

        root@server [/var/log]# service exim restart; for iter in `seq 0 9`; do date; top -n1 | grep exim; sleep 10; done
        Shutting down exim:  [  OK  ]
        Shutting down spamd: [  OK  ]
        Starting exim:       [  OK  ]
        Sun Jun  6 18:12:07 CDT 2010
        62592 root      25   0 11400 6572 2356 R 51.5  1.3  0:00.92 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        Sun Jun  6 18:12:18 CDT 2010
        62592 root      25   0 28768  23m 2356 R 57.4  4.6  0:06.75 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:12:28 CDT 2010
        62592 root      25   0 36408  30m 2356 R 55.5  6.0  0:12.59 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:12:39 CDT 2010
        62592 root      25   0 41396  35m 2356 R 53.5  7.0  0:18.35 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:12:49 CDT 2010
        62592 root      25   0 45868  40m 2356 R 47.5  7.8  0:24.06 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:13:00 CDT 2010
        62592 root      25   0 50056  44m 2356 R 55.3  8.6  0:29.84 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:13:10 CDT 2010
        62592 root      25   0 53888  47m 2356 R 55.2  9.4  0:35.63 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:13:21 CDT 2010
        62592 root      20   0 56920  50m 2356 R 55.3  9.9  0:41.15 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:13:31 CDT 2010
        62592 root      25   0 60380  54m 2356 R 53.4 10.6  0:46.98 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim
        Sun Jun  6 18:13:42 CDT 2010
        62592 root      22   0 63400  57m 2356 R 49.5 11.2  0:52.74 exim
        62587 mailnull  18   0  7548 1212  792 S  0.0  0.2  0:00.00 exim
        62588 root      18   0  7536 2052 1648 S  0.0  0.4  0:00.00 exim

    After some time, it gets to a rate of picking up an extra MB every 10s. I've checked the exim logs and there are no messages coming in there. exim -bV shows:

        Exim version 4.69 #1 built 16-Mar-2009 14:44:43
        Copyright (c) University of Cambridge 2006
        Berkeley DB: Sleepycat Software: Berkeley DB 4.2.52: (February 22, 2005)
        Support for: crypteq iconv() IPv6 PAM Perl OpenSSL Content_Scanning Old_Demime Experimental_SPF Experimental_SRS Experimental_DomainKeys
        Lookups: lsearch wildlsearch nwildlsearch iplsearch dbm dbmnz passwd
        Authenticators: cram_md5 dovecot plaintext spa
        Routers: accept dnslookup ipliteral manualroute queryprogram redirect
        Transports: appendfile/maildir autoreply pipe smtp
        Size of off_t: 8
        Configuration file is /etc/exim.conf

    I'm at something of a loss as to how to proceed. Any recommendations would be well received!
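
    When a single exim process chews CPU and grows like this while nothing arrives in the logs, it's worth asking exim itself what that process is doing. A minimal sketch using the stock exim utilities (paths assume a standard install; the PID is the hungry one from the top output above):

        # One-line summary of what every running exim process is doing
        exiwhat

        # How many messages are sitting in the queue
        exim -bpc

        # Attach to the busy PID and watch its system-call loop
        strace -p 62592

    exiwhat in particular will say whether that PID is a queue runner stuck on one message, which would fit this grow-and-spin pattern.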


  • MySQL startup, shutdown and logging on OS X

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I have OS X 10.5.8, and I don't remember how/if I installed MySQL. Here is what I know:

      • it automatically starts on boot
      • the process looks like this:

        /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
        _mysql    96   0.0  0.0    75884    684   ??  Ss   Sat06PM   0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

      • when I run /usr/local/mysql/libexec/mysqld --verbose --help it says:

        /usr/local/mysql/libexec/mysqld  Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

      • it seems to use my.cnf from /etc/my.cnf

    Now here are my questions. I don't see anything in the startup items that remotely looks like MySQL:

        ls /Library/StartupItems/
        BRESINKx86Monitoring ChmodBPF HP IO HP Trap Monitor Parallels ParallelsTransporter

    1.) So how does it start up automatically?
    2.) How do I start and stop this type of installation?

    Also, looking at the config, the logs have no values:

        /usr/local/mysql/libexec/mysqld --verbose --help | grep '^log'
        log                             (No default value)
        log-bin                         (No default value)
        log-bin-index                   (No default value)
        log-bin-trust-function-creators FALSE
        log-bin-trust-routine-creators  FALSE
        log-error
        log-isam                        myisam.log
        log-queries-not-using-indexes   FALSE
        log-short-format                FALSE
        log-slave-updates               FALSE
        log-slow-admin-statements       FALSE
        log-slow-queries                (No default value)
        log-tc                          tc.log
        log-tc-size                     24576
        log-update                      (No default value)
        log-warnings                    1

    3.) Does that mean there is no logging enabled in my setup?

    Thanks in advance! Joel
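
    For a source-built MySQL like this one, start/stop is typically done with the bundled scripts rather than an OS X StartupItem. A minimal sketch (paths match the install shown above; boot-time startup on 10.5 may instead live in /Library/LaunchDaemons, which is worth checking):

        # Stop the running server cleanly (prompts for the MySQL root password)
        /usr/local/mysql/bin/mysqladmin -u root -p shutdown

        # Start it again in the background
        sudo /usr/local/mysql/bin/mysqld_safe &

        # Look for a launchd job, since /Library/StartupItems has no MySQL entry
        sudo launchctl list | grep -i mysql

    Enabling the slow/general logs just needs lines like log-slow-queries and log in the [mysqld] section of /etc/my.cnf, which would also help diagnose the hanging DROP TABLE.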


  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've had several interactions in the past couple of months with people who still open Internet Explorer, try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client.

    I tried setting up a sub-domain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error:

        ftp.mydomain.com domainadmin-domainexistsglobal

    I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow. Thanks!
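
    Where there is access to the Apache config (rather than only cPanel), the idea reduces to a name-based vhost that answers for ftp.mydomain.com over HTTP and serves instructions. A minimal sketch, assuming DNS for ftp.mydomain.com already points at the web server's IP (which it must, for IE to reach HostGator's error page at all):

        <VirtualHost *:80>
            ServerName ftp.mydomain.com
            DocumentRoot /home/user/ftp-help
            # /home/user/ftp-help/index.html holds the "use an FTP client" instructions
            ErrorDocument 404 /index.html
        </VirtualHost>

    The FTP service itself is untouched, since port 21 traffic never reaches Apache; this only changes what browsers see on port 80.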


  • laptop motherboard "shorts" when connected to adapter

    - by Bash
    Disclaimer: I'm sort of a noob, and this is a long post. Thank you all in advance!

    Summary: completely dead laptop with no signs of life whatsoever (suddenly, for no apparent reason).

    Here's the deal: Lenovo Y470 (only a few months old, with no water or shock damage). It stopped working suddenly (no lights, no sound, even when connecting the adapter, with or without the battery). I tried a different adapter (same electrical rating), but no luck. I disassembled the thing completely and tried plugging in the adapter and looking for signs of life with all different combinations of components installed (tried all combinations of RAM, CPU, USB power cords, screen, etc. plugged in). No luck.

    Then I noticed (as I was plugging in the adapter to try for the millionth time) that there was a "spark" for an instant when I first connect the adapter to the power jack. The adapter's LED would then flash (indicating it isn't working or charging). So, I thought the power jack had a short of some sort (due to bad soldering or something). I scanned virtually every single component on the motherboard, and tested the power jack connections with a multimeter. No shorts or damage to anything on the entire motherboard. Now I'm thinking I need to replace the motherboard.

    But, my actual question: what does this "shorting" when connecting the adapter signify? (BTW, the voltage across the power connections and the current through them drop to virtually zero when the adapter is connected and "sparks", and they stay that way.) The bewildering thing is that there are no damaged components, and the voltage across the adapter terminals returns to normal after I disconnect it (so it's not damaged). Please take a look at the pictures (of the motherboard's power connection and nearby components) and see if I'm missing something completely obvious...

    Links to pictures and laptop and motherboard model:

      • Pictures on DropBox
      • Motherboard model: LA-6881P
      • Laptop model: Lenovo IdeaPad Y470


  • Sharing large (multi-GB) files with clients

    - by Tim Long
    I wasn't sure if this was the best place for this question, but I think it is squarely in the realm of the IT admin, so that's the reason I put it here.

    We need to share large files (several gigabytes) with external clients. We need a simple way of reliably and automatically publishing these files so that clients can then download them. Our organization has Windows desktops and a Windows SBS 2011 server.

    Sharing from our server is probably suboptimal from the client's perspective, because of the low upstream bandwidth of typical ADSL (around 1 Mbps); it would take all day (9 hours for a 4 GB file) for the client to download the file. Uploading to a 3rd party server is good for the client but painful for us, because we then have to deal with a multi-hour upload. Uploading to a third-party server would be less problematic if it could be made reliable and automatic, e.g. something like a Groove/SharePoint Workspace: simply drop the file in and wait for it to synchronize. But Groove has a 2 GB limit, which is not big enough.

    So ideally I'd like a service with the following attributes:

      • Must work for files of at least 5 GB, preferably 10 GB
      • Once the transfer is started, it must be reliable (i.e. not sensitive to disconnections and service outages) and completely automatic
      • Ideally, the sender would get a notification when the transfer completes
      • Has to work with Windows-based systems

    Any suggestions?
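
    As a point of comparison for the "reliable and automatic" requirement: where a plain server (or rented VPS) is acceptable as the third-party drop point, rsync gives resumable, hands-off uploads out of the box. A minimal sketch (rsync on Windows would come via cwRsync or Cygwin; the host and paths are placeholders):

        # Retry until the whole file is across; --partial keeps progress between runs
        until rsync --partial --progress -e ssh bigfile.iso user@drop.example.com:/srv/outgoing/; do
            sleep 60    # wait a minute after a disconnection, then resume
        done

    Each retry continues from where the previous attempt stopped, so a multi-hour upload over flaky ADSL eventually completes without supervision.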


  • Error applying iptables rules using iptables-restore

    - by John Franic
    Hi, I'm using Ubuntu 9.04 on a VPS. I'm getting an error if I apply an iptables rule. Here is what I have done:

    1. Saved the existing rules:

        iptables-save > /etc/iptables.up.rules

    2. Created iptables.test.rules and added some rules to it:

        nano /etc/iptables.test.rules

    These are the rules I added:

        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    3. After editing, when I try to apply the rules with:

        iptables-restore < /etc/iptables.test.rules

    I get the following error:

        iptables-restore: line 42 failed

    Line 42 is COMMIT, and if I comment that out I get:

        iptables-restore: COMMIT expected at line 43

    I'm not sure what the problem is; it is expecting COMMIT, but if COMMIT is there it gives an error. Could it be due to the fact that I'm using a VPS? My provider uses OpenVZ for virtualization.
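
    Since iptables-restore only applies the ruleset at COMMIT, a failure on that line often means one of the rules above it needs a kernel module the VPS kernel doesn't provide (OpenVZ containers frequently lack some netfilter modules). A quick way to narrow it down is to try each suspect match by hand and watch which one the kernel rejects; a minimal sketch:

        # Does the state match work on this kernel?
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT && echo "state OK"

        # Does the limit/LOG combination work?
        iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "test: " && echo "limit/LOG OK"

        # Remove the test rules again (flushes INPUT; fine while no other rules are loaded)
        iptables -F INPUT

    Whichever command fails stand-alone is the rule to remove or rework in iptables.test.rules.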


  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got into Server 2008 R2 via Dreamspark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian.

    Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its own Internet traffic through its other NIC, which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance.

    This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks. However, I don't trust Windows Server because I don't know the OS well enough yet.

    So, three questions: How do I do this? Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)? And should I virtualize the Server 2008 R2 rig instead, and if so, in what? (Core 2 Duo E6600 with 5 GB usable RAM.)


  • How to determine which ports are open/closed on a FIREWALL?

    - by Rahl
    It seems no one has asked this question before (most questions regard host-based firewalls). Anyone familiar with port-scanning tools (e.g. nmap) knows all about SYN scanning, FIN scanning, and the like to determine open ports on a host machine. The question, though, is how do you determine the open ports on a firewall itself (disregarding whether the host you're trying to connect to behind the firewall has those particular ports open or closed)? This is assuming the firewall is blocking your IP connection.

    Example: we all communicate with serverfault.com through port 80 (web traffic). A scan on a host would reveal port 80 is open. If serverfault.com is behind a firewall and still allows this traffic through, then we can assume the firewall has port 80 open also.

    Now let's assume the firewall is blocking you (e.g. your IP address is on the deny list or is missing from the allowed list). You know port 80 has to be open (it works for appropriate IP addresses), but when you (the disallowed IP) attempt any scanning, the firewall drops the packets of all port scan attempts (including port 80, which we know to be open). So, how might we accomplish a direct firewall scan to reveal open/closed ports on the firewall itself, while still using the disallowed IP?
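
    For context on the tooling side: nmap's ACK scan is the mode designed to map firewall rules rather than host services. It cannot say a port is "open", only whether the firewall filters it. A minimal sketch (the target is a placeholder, and against a firewall that silently drops everything from a denied source IP, even this will report all ports as filtered):

        # ACK scan: classifies ports as filtered vs. unfiltered at the firewall
        nmap -sA -p 1-1024 firewall.example.com

        # Show why each port was classified the way it was
        nmap -sA --reason -p 80,443 firewall.example.com

    The honest answer implicit here: if the firewall drops all packets from the disallowed IP, no scan from that IP can distinguish its rules; the probing has to come from a permitted address.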


  • Software for handling camera RAW files

    - by Eikern
    I use a digital SLR, as most other photographers do today, and have quickly realised that capturing images using camera RAW files is the way to go. Personally I use Adobe Lightroom to handle my photo library, but I know there is other software available, like Apple Aperture. These applications are quite hard for a novice to use, and are quite expensive too.

    I've often recommended that other photographers switch to camera RAW, but they won't do it because Windows can't handle it natively. Are there any free or cheaper applications out there that can do simple file handling and adjustments? Preferably so simple that my mom could use them.

    I know Nikon offers a codec that allows you to view NEF files natively inside Windows, but it still limits the uses of the file and slows the system down if the file is big. Does anybody know of a drag-and-drop application that converts camera RAW to JPG on the fly? In case I or someone else would need to upload an image to the web or use it inside a Word document. Thanks.
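
    On the "convert to JPG on the fly" point: the open-source decoder behind many free viewers is dcraw, and it can be driven as a batch converter. A minimal sketch (assumes dcraw and ImageMagick are installed; .NEF is the example RAW format):

        # Decode each NEF with camera white balance and hand the result to ImageMagick
        for f in *.NEF; do
            dcraw -c -w "$f" | convert - "${f%.NEF}.jpg"
        done

    Wrapping a loop like this in a droplet or launcher would give the drag-and-drop behaviour; a Windows build of dcraw exists too, and the loop translates to a small batch file.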


  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing.

    We have two CheckPoint edge devices at sites A & B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well.

    Between Sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation.

    Between Sites A & B and the Amazon VPC in Oregon, we have no issues. Both tunnels are up and fail over properly.

    The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work. Yet, in Oregon, it does. We've pursued this with Amazon and, despite the fact it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.


  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.), so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash ). If they don't have netcat, I can just ask them to grab a copy of a statically compiled binary, which isn't that hard or time-consuming to download and run.

    Though this works well enough for me to enter simple commands, I can't run any apps that require a TTY (vi, for example) and can't use any job-control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell, which would assign me a terminal and drop me to a shell. Unfortunately, in.telnetd isn't usually installed by default on most systems.

    What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well; I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice. :))
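
    Two widely used tricks allocate a pseudo-terminal using only tools that ship on most distributions; both are run inside the bare connect-back shell. A minimal sketch:

        # Option 1: script(1) allocates a pty around the shell it spawns
        script -qc /bin/bash /dev/null

        # Option 2: Python's pty module does the same openpty/fork dance
        python -c 'import pty; pty.spawn("/bin/bash")'

    Under the hood, both do what the question asks about: open /dev/ptmx (or call openpty()), fork, make the slave side the child's controlling terminal via setsid() and ioctl(TIOCSCTTY), and splice data between the socket and the pty master. That is also the skeleton a small C program would follow.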


  • Apache mod_jk Setting for Tomcat - workers.properties

    - by sissonb
    I am trying to direct files with .jsp extensions to Tomcat; otherwise I want Apache to serve the file directly (no Tomcat). Currently I have a test.jsp which is supposed to create an HTML page with the current date in the body. Instead, when I go to that .jsp I see the JK Status Manager. The mod_jk.log only shows:

        init_jk::mod_jk.c (3365): mod_jk/1.2.35 initialized

    I have Tomcat and Apache set up on my server. Apache runs on 80 and Tomcat runs on 8080. localhost:8080 shows the Tomcat welcome page. I downloaded tomcat-connectors-1.2.35-windows-i386-httpd-2.2.x and copied the mod_jk.so to C:\apache\modules. Then I added the following to my httpd.conf:

        LoadModule jk_module modules/mod_jk.so

    I restart Apache and the module loads just fine. Next I downloaded the mod_jk source to get the workers.properties file and copied workers.properties to C:\apache\conf. Then I added this worker configuration:

        workers.tomcat_home="C:/Program Files/Apache Software Foundation/Tomcat 7.0"
        workers.java_home="C:/Program Files/Java/jdk1.7.0_03"
        worker.list=ajp13
        worker.ajp13.port=8080
        worker.ajp13.host=localhost
        worker.ajp13.type=ajp13
        worker.ajp13.socket_timeout=10

    When I try to use the ajp13 worker in my httpd.conf, I get the following errors in my mod_jk.log:

        [Wed Mar 28 13:08:51 2012] [2196:4100] [info] ajp_connection_tcp_get_message::jk_ajp_common.c (1258): (ajp13) can't receive the response header message from tomcat, network problems or tomcat (127.0.0.1:8080) is down (errno=60)
        [Wed Mar 28 13:08:51 2012] [2196:4100] [error] ajp_get_reply::jk_ajp_common.c (2117): (ajp13) Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Wed Mar 28 13:08:51 2012] [2196:4100] [info] ajp_service::jk_ajp_common.c (2614): (ajp13) sending request to tomcat failed (recoverable), (attempt=1)

    Next I updated my httpd.conf with:

        JkWorkersFile C:/apache/conf/workers.properties
        JkLogFile C:/apache/logs/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    I also added a JkMount to my virtual host, like this:

        <VirtualHost 192.168.5.250:80>
            JkMount /*.jsp jk-status
            #JkMount /*.jsp ajp13
            ServerName bgsisson.com
            ServerAlias www.bgsisson.com
            DocumentRoot C:/www/resume
        </VirtualHost>

    I think I need to include a uriworkermap.properties file, but this is where I am getting stuck. I have put up a test .jsp at bgsisson.com/test.jsp. It shows the JK Status Manager when I use JkMount /*.jsp jk-status, and 502 Bad Gateway when I use JkMount /*.jsp ajp13.

    test.jsp:

        <%-- use the 'taglib' directive to make the JSTL 1.0 core tags available;
             use the uri "http://java.sun.com/jsp/jstl/core" for JSTL 1.1 --%>
        <%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>

        <%-- use the 'jsp:useBean' standard action to create the Date object;
             the object is set as an attribute in page scope --%>
        <jsp:useBean id="date" class="java.util.Date" />

        <html>
        <head><title>First JSP</title></head>
        <body>
            <h2>Here is today's date</h2>
            <c:out value="${date}" />
        </body>
        </html>
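
    One detail worth flagging in the config above: worker.ajp13.port points at 8080, which is Tomcat's HTTP connector, while mod_jk speaks AJP; in a default Tomcat 7 server.xml the AJP connector listens on 8009. A minimal corrected worker plus the matching mount, as a sketch (the JkMount goes back to the ajp13 worker rather than jk-status, which is a status display, not a request-serving worker):

        # workers.properties
        worker.list=ajp13
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009

        # httpd.conf (inside the VirtualHost)
        JkMount /*.jsp ajp13

    With the port matched to the AJP connector, the "can't receive the response header message" errors, which are what tends to happen when AJP frames are sent to an HTTP port, would be expected to go away.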

