Search Results

Search found 6136 results on 246 pages for 'quick and dirty'.


  • Ubuntu 9.10 Only Sees 244 MB RAM, while BIOS and Windows See 1.5 GB

    - by nicorellius
    I have 1.5 GB of RAM installed on an older Dell Pentium 4. I just installed Ubuntu 9.10 and the system is only seeing 244 MB of RAM, even though there is 1.5 GB on the system; the BIOS sees all of it. I ran a Knoppix disc and it only saw 25 MB upon booting. I made no particular changes to the installation that would affect this.

    I looked through the BIOS and the only setting I could find was the AGP aperture (not even sure what that is). Anyone know where I went wrong? I also tried moving the memory modules around on the board; booted with the 1 GB stick alone, it still saw 244 MB.

    NOTE - This same system, except for the hard drive, had Windows XP running on it. The user who ran it said that the RAM was good and always showed 1.5 GB.

    Here is sudo cat /proc/meminfo:

        MemTotal:        250064 kB
        MemFree:           3832 kB
        Buffers:          13356 kB
        Cached:           52216 kB
        SwapCached:       19676 kB
        Active:           91504 kB
        Inactive:        113884 kB
        Active(anon):     60572 kB
        Inactive(anon):   82156 kB
        Active(file):     30932 kB
        Inactive(file):   31728 kB
        Unevictable:          0 kB
        Mlocked:              0 kB
        HighTotal:            0 kB
        HighFree:             0 kB
        LowTotal:        250064 kB
        LowFree:           3832 kB
        SwapTotal:      4883720 kB
        SwapFree:       4781204 kB
        Dirty:              496 kB
        Writeback:          720 kB
        AnonPages:       123796 kB
        Mapped:           23368 kB
        Slab:             17248 kB
        SReclaimable:      7932 kB
        SUnreclaim:        9316 kB
        PageTables:        5304 kB
        NFS_Unstable:         0 kB
        Bounce:               0 kB
        WritebackTmp:         0 kB
        CommitLimit:    5008752 kB
        Committed_AS:    740372 kB
        VmallocTotal:    770600 kB
        VmallocUsed:      26008 kB
        VmallocChunk:    662544 kB
        HugePages_Total:      0
        HugePages_Free:       0
        HugePages_Rsvd:       0
        HugePages_Surp:       0
        Hugepagesize:      4096 kB
        DirectMap4k:     114128 kB
        DirectMap4M:     147456 kB
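
    A hedged first check (a suggestion, not a confirmed fix): compare the memory map the BIOS hands the kernel against what the kernel keeps, and try overriding the detected size with a mem= boot parameter. The commands below are a sketch; adjust to your GRUB setup.

        # What did the BIOS e820 map report at boot?
        dmesg | grep -i e820

        # What does the DMI table say is physically installed?
        sudo dmidecode -t memory | grep -i size

        # One-off test: at the GRUB menu press 'e' and append to the
        # kernel line, then boot:
        #   mem=1536M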


  • Debian's Wordpress with broken plugin path?

    - by Vinícius Ferrão
    I've installed WordPress from the Debian Wheezy package system and the plugins folder appears to be broken. As stated in the Apache2 error log:

        [error] File does not exist: /var/lib/wordpress/wp-content/plugins/var

    The plugins are looking for a URL based on the full filesystem path, not on the relative path. I can "temporarily fix" the problem by making a symbolic link to /var in the plugins folder, but I know that this is wrong and dirty. I don't know where to start debugging this, so any help is welcome.

    Additional information: /etc/wordpress/htaccess:

        # Multisites generated htaccess
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Apache2 configuration file:

        <VirtualHost *:80>
            Alias /wp-content /var/lib/wordpress/wp-content
            DocumentRoot /usr/share/wordpress
            ServerAdmin [email protected]

            <Directory /usr/share/wordpress>
                Options FollowSymLinks
                AllowOverride Limit Options FileInfo
                DirectoryIndex index.php
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /var/lib/wordpress/wp-content>
                Options FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Thanks in advance,
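
    A hedged place to start: Debian splits WordPress between /usr/share/wordpress (code) and /var/lib/wordpress/wp-content (data), and plugins that build URLs from the filesystem path misbehave when WP_CONTENT_URL is not defined. Something along these lines in the per-site config may fix the generated URLs (Debian reads /etc/wordpress/config-<hostname>.php; the exact filename and URL below are assumptions to adjust for your site):

        <?php
        // assumed Debian-style split install
        define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');
        // make plugins build URLs from the /wp-content alias,
        // not from the filesystem path
        define('WP_CONTENT_URL', 'http://your.site/wp-content');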


  • What used the Linux memory? Low cache, low buffer, not a VM

    - by Jason
    First of all, yes, I have read LinuxAteMyRAM, which doesn't explain my situation.

        # free -tm
                     total       used       free     shared    buffers     cached
        Mem:         48149      43948       4200          0          4         75
        -/+ buffers/cache:      43868       4280
        Swap:        38287          0      38287
        Total:       86436      43948      42488
        #

    As shown above, the -/+ buffers/cache: line indicates that used memory is very high. However, from the output of top, I don't see any process using more than 100 MB of memory. So what used the memory?

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        28078 root      18   0  327m  92m  10m S    0  0.2   0:25.06 java
        31416 root      16   0  250m  28m  20m S    0  0.1  25:54.59 ResourceMonitor
        21598 root     -98   0 26552  25m 8316 S    0  0.1  80:49.54 had
        24580 root      16   0 24152  10m  760 S    0  0.0   1:25.87 rsyncd
         4956 root      16   0 62588  10m 3132 S    0  0.0  12:36.54 vxconfigd
        26703 root      16   0  139m 7120 2900 S    1  0.0   4359:39 hrmonitor
        21873 root      15   0 18764 4684 2152 S    0  0.0  30:07.56 MountAgent
        21883 root      15   0 13736 4280 2172 S    0  0.0  25:25.09 SybaseAgent
        21878 root      15   0 18548 4172 2000 S    0  0.0  52:33.46 NICAgent
        21887 root      15   0 12660 4056 2168 S    0  0.0  25:07.80 SybaseBkAgent
        17798 root      25   0 10652 4048 1160 S    0  0.0   0:00.04 vxconfigbackupd

    This is an x86_64 machine (not a common-brand server) running x86_64 Linux, not a container in a virtual machine. Kernel (uname -a):

        Linux 2.6.16.60-0.99.1-smp #1 SMP Fri Oct 12 14:24:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Content of /proc/meminfo:

        MemTotal:     49304856 kB
        MemFree:       4066708 kB
        Buffers:         35688 kB
        Cached:         132588 kB
        SwapCached:          0 kB
        Active:       26536644 kB
        Inactive:     17296272 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     49304856 kB
        LowFree:       4066708 kB
        SwapTotal:    39206624 kB
        SwapFree:     39206528 kB
        Dirty:             200 kB
        Writeback:           0 kB
        AnonPages:      249592 kB
        Mapped:          52712 kB
        Slab:          1049464 kB
        CommitLimit:  63859052 kB
        Committed_AS:   659384 kB
        PageTables:       3412 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    478420 kB
        VmallocChunk: 34359259695 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    df reports no large consumption of memory from tmpfs filesystems.
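
    Since top only accounts for userspace, one hedged place to look is kernel-side memory, which never shows up against any PID. On this kernel the quickest views are slab and vmalloc usage (Slab is already about 1 GB here, and kernel modules such as the Veritas drivers suggested by vxconfigd can pin more):

        # one-shot slab report, largest caches first
        slabtop -o -s c | head -20

        # raw counters worth watching
        grep -E 'Slab|VmallocUsed|PageTables|Mapped' /proc/meminfo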


  • Windows 2008 Unknown Disks

    - by Ailbe
    I have a BL460c G7 blade server running Windows 2008 R2 SP1. This is a brand new C7000 enclosure with FlexFabric interconnects. I got my FC switches set up and zoned properly to our Clariion CX4, and can see all the hosts that are assigned FCoE HBAs on both paths in both Navisphere and in HP Virtual Connect Manager. So I went ahead and created a storage group for a test server, assigned the appropriate host, and assigned the LUN to the server. So far so good; I log onto the server and I can see 4 unknown disks.

    No problem, I install MS MPIO. No luck: I can't initialize the disks, and the multiple disks don't go away. Still no problem, I install PowerPath version 5.5 and reboot. Now I see 3 disks. One is initialized and ready to go, but I still have 2 disks that I can't initialize, can't offline, can't delete. If I right-click in storage manager and go to properties I can see the MS MPIO tab, but I can't make a path active.

    I want to get rid of these phantom disks, but so far nothing is working and Google searches are showing up some odd results, so obviously I'm not framing my question right. I thought I'd ask here real quick: does anyone know a quick way to get rid of these unknown disks? Another question: do I need the MPIO feature installed if I have PowerPath installed? This is my first time installing Windows 2008 R2 in this fashion and I'm not sure if that feature is needed or not right now.

    Some more information to add to this: it seems I'm dealing with more of a Windows issue than anything else. I removed the LUN from the server, uninstalled PowerPath completely, removed the MPIO feature from the server, and rebooted twice. Now I am back to the original 4 unknown disks (plus the local Disk 0 containing the OS partition, of course, which is working fine). I went to diskpart, where I could see all 4 unknown disks; I selected each disk and ran clean (just in case I'd somehow brought them online previously as GPT and didn't realize it). After a few minutes I was no longer able to see the disks when I ran list disk. However, the disks are still in Disk Management. When I try to offline the disks from Disk Management I get an error:

        Virtual Disk Manager - The system cannot find the file specified.

    Accompanied by an error in the System event log:

        Log Name:      System
        Source:        Virtual Disk Service
        Date:          6/25/2012 4:02:01 PM
        Event ID:      1
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      hostname.local
        Description:   Unexpected failure. Error code: 2@02000018
        Event Xml:     1 2 0 0x80000000000000 4239 System hostname.local 2@02000018

    I feel sure there is a place I can go in the registry to get rid of these, I just can't recall where and I am loath to experiment. So to recap: there are currently no LUNs attached at all, I still have the phantom disks, and I'm getting "The system cannot find the file specified" from Virtual Disk Manager when I try to take them offline. Thanks!
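
    One hedged, registry-free way to clear out devices Windows remembers but can no longer reach is to surface non-present devices in Device Manager and uninstall the phantom entries there; this removes stale device records, not data:

        :: elevated command prompt (assumption: the phantom disks are
        :: stale device entries left behind by the removed LUNs)
        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc
        :: then View -> Show hidden devices, and uninstall the
        :: greyed-out entries under "Disk drives"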


  • Where is memory gone? (No, not buffers or cache)

    - by Marki
    Can anyone tell me where the memory is gone? (No, this time it's neither buffers nor cache.)

        # free
                     total       used       free     shared    buffers     cached
        Mem:       3928200    3868560      59640          0       2888      92924
        -/+ buffers/cache:    3772748     155452
        Swap:      4192956     226352    3966604

    top, sorted by memory, descending:

        top - 13:42:06 up 1 day,  3:47,  2 users,  load average: 0.08, 0.12, 0.36
        Tasks: 228 total,   1 running, 227 sleeping,   0 stopped,   0 zombie
        Cpu0  :  2.0%us,  4.0%sy,  0.0%ni, 90.1%id,  0.0%wa,  0.0%hi,  4.0%si,  0.0%st
        Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   3928200k total,  3868020k used,    60180k free,     2896k buffers
        Swap:  4192956k total,   226048k used,  3966908k free,    82068k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         3863 root      20   0  902m 199m 3296 S    7  5.2  99:08.77 ndsd
        21906 root      20   0  138m 9076 2988 S    0  0.2   0:00.02 sfcbd
         2332 root      20   0  126m 4660 1332 S    0  0.1   0:17.72 mono
         4243 wwwrun    20   0  683m 4468  668 S    0  0.1   0:07.38 java
         2994 root      20   0  202m 2288 1660 S    0  0.1   6:10.02 httpstkd
         4338 root      20   0  184m 2240 1112 S    0  0.1   0:00.52 namcd
        21898 root      20   0 32368 1832 1256 R    1  0.0   0:00.08 top

    In fact, some time ago the OOM killer kicked in and crashed the system (kernel panic), and I'm afraid we're again not far from that point...

    UPDATE:

        # cat /proc/meminfo
        MemTotal:        3928200 kB
        MemFree:           51336 kB
        Buffers:            2964 kB
        Cached:            72876 kB
        SwapCached:        29128 kB
        Active:           233440 kB
        Inactive:          88040 kB
        Active(anon):     188920 kB
        Inactive(anon):    56752 kB
        Active(file):      44520 kB
        Inactive(file):    31288 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:       4192956 kB
        SwapFree:        3966824 kB
        Dirty:                32 kB
        Writeback:             0 kB
        AnonPages:        225112 kB
        Mapped:            11356 kB
        Shmem:                32 kB
        Slab:            1624080 kB
        SReclaimable:      13740 kB
        SUnreclaim:      1610340 kB
        KernelStack:        4176 kB
        PageTables:        10500 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     6157056 kB
        Committed_AS:    2397684 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      441372 kB
        VmallocChunk:   34359246755 kB
        HardwareCorrupted:     0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       10240 kB
        DirectMap2M:     4184064 kB

    slabtop:

         Active / Total Objects (% used)    : 9041019 / 9207548 (98.2%)
         Active / Total Slabs (% used)      : 401132 / 401156 (100.0%)
         Active / Total Caches (% used)     : 91 / 159 (57.2%)
         Active / Total Size (% used)       : 1491537.88K / 1519791.56K (98.1%)
         Minimum / Average / Maximum Object : 0.02K / 0.17K / 4096.00K

          OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
        4240470 4240319  99%    0.12K 141349       30   565396K pid
        2245140 2219675  98%    0.25K 149676       15   598704K size-256
        2238090 2210087  98%    0.12K  74603       30   298412K size-128
        ...
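
    The slabtop output already points into the kernel: SUnreclaim is about 1.6 GB, and the pid and size-256/size-128 caches alone hold well over 1 GB, with 4.2 million live pid objects. A hedged way to narrow down what allocates them is to snapshot the slab counters and watch which cache keeps growing:

        # take two snapshots of object counts and diff them (sketch;
        # pick an interval that matches how fast the leak moves)
        awk 'NR>2 {print $1, $2}' /proc/slabinfo > /tmp/slab.1
        sleep 600
        awk 'NR>2 {print $1, $2}' /proc/slabinfo > /tmp/slab.2
        diff /tmp/slab.1 /tmp/slab.2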


  • Automatically moving spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder, and I'm not sure how.

    I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin is rewriting subjects as [--- SPAM 14.3 ---] Viagra... It is also adding headers:

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
                DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
                RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
                autolearn=no version=3.2.5
        X-Spam-Report:
                *  0.0 URIBL_RED Contains an URL listed in the URIBL redlist
                *      [URIs: myimg.de]
                *  3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
                *      [score: 1.0000]
                *  0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
                *      [113.170.131.234 listed in zen.spamhaus.org]
                *  3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
                *  0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server
                *      [113.170.131.234 listed in dnsbl.sorbs.net]
                *  3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
                *  0.0 HTML_MESSAGE BODY: HTML included in message
                *  1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
                *  1.5 URIBL_SBL Contains an URL listed in the SBL blocklist
                *      [URIs: myimg.de]
                *  0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if possible) I'd only move messages scoring 5.0 or more to the folder; spam scoring between 2.0 and 5.0 should stay in the Inbox. (I plan to switch autolearn on later.)

    After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is the best or whether there is another one:

    1. Put a rule in SquirrelMail (dirty solution?)
    2. Use procmail

    Which is the best option? Do you have any updated howto about it? Thanks
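
    For the score threshold, a minimal procmail sketch: X-Spam-Level carries one asterisk per point, so five asterisks match scores of 5.0 and up. This assumes Postfix hands mail to procmail (e.g. mailbox_command = /usr/bin/procmail in main.cf) and Maildir-style folders as Courier expects; the .Spam/ folder name is an assumption to adjust:

        # ~/.procmailrc (sketch)
        MAILDIR=$HOME/Maildir
        DEFAULT=$MAILDIR/

        # score >= 5.0 goes to the Spam folder
        :0:
        * ^X-Spam-Level: \*\*\*\*\*
        .Spam/

        # everything else, including mail tagged between 2.0 and 5.0,
        # falls through to the inbox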


  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950, dual quad-core Xeons, 8 GB of RAM, two 2 TB Seagate SATA drives (supposed to be RAID 1) on a Perc 5i RAID card. They are hot-swappable with a backplane. I can build the RAID fine, and after a little while an install of Server 08 R2 will blue-screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config" and the OS will boot up fine, until it blue-screens again after a little while. The issue is OS independent.

    I have tried swapping RAID cards, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, yet it sees the disks fine when it boots back up. The RAID card uses a SCSI SAS cable to connect to the backplane, so I guess the next step is to replace that, but... then I might as well replace the backplane with a SCSI SAS-to-SATA breakout cable, but... then I need a way to power the disks.

    Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with Perc RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!


  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2 GB of "cached" memory:

        free:
                     total       used       free     shared    buffers     cached
        Mem:          3945       3893         51          0         28       1216
        -/+ buffers/cache:       2648       1296
        Swap:         3895        857       3038

    I've learned that cached memory is memory the kernel puts to use for caching while nothing else needs it. Is the cached value an indicator that I need more RAM?

    cat /proc/meminfo, 1 day after flushing the cache:

        MemTotal:      4040048 kB
        MemFree:         32844 kB
        Buffers:         18956 kB
        Cached:        1249092 kB
        SwapCached:     161576 kB
        Active:        3611328 kB
        Inactive:       189104 kB
        SwapTotal:     3989496 kB
        SwapFree:      2894200 kB
        Dirty:           20520 kB
        Writeback:           0 kB
        AnonPages:     2523496 kB
        Mapped:         217744 kB
        Slab:            70940 kB
        SReclaimable:    36756 kB
        SUnreclaim:      34184 kB
        PageTables:      99648 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:   6009520 kB
        Committed_AS:  6401716 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:     18852 kB
        VmallocChunk: 34359719439 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        HugePages_Surp:      0
        Hugepagesize:     2048 kB

    top:

        top - 17:20:10 up 112 days,  3:06,  1 user,  load average: 1.01, 1.62, 1.48
        Tasks: 208 total,   1 running, 207 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.6%us,  0.6%sy,  0.0%ni, 97.5%id,  1.3%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   4040048k total,  3953108k used,    86940k free,    16348k buffers
        Swap:  3989496k total,  1095712k used,  2893784k free,  1235436k cached
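
    A hedged way to read these numbers: cache itself is reclaimable (the -/+ buffers/cache line shows about 1.3 GB available), but the gigabyte of used swap and the fact that Committed_AS (6401716 kB) exceeds CommitLimit (6009520 kB) suggest the box is under some memory pressure despite the cache. A few quick checks:

        # application memory vs. what is free or reclaimable
        free -m | awk '/buffers\/cache/ {print "apps: "$3" MB, free+reclaimable: "$4" MB"}'

        # commit pressure
        grep -i commit /proc/meminfo

        # is swap actively churning right now? watch the si/so columns
        vmstat 5 5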


  • Rsyslog stops sending data to remote server after log rotation

    - by Vincent B.
    In my configuration, rsyslog is in charge of following changes to /home/user/my_app/shared/log/unicorn.stderr.log using imfile. The content is sent to a remote logging server using TCP. When the log file rotates, rsyslog stops sending data to the remote server. I tried reloading rsyslog, sending a HUP signal, and restarting it altogether, but nothing worked. The only ways I could find that actually worked were dirty:

    1. Stop the service, delete the rsyslog stat files, and start rsyslog again, all in a postrotate hook in my logrotate file.
    2. kill -9 rsyslog and start it over.

    Is there a proper way for me to do this without touching rsyslog internals?

    Rsyslog file:

        $ModLoad immark
        $ModLoad imudp
        $ModLoad imtcp
        $ModLoad imuxsock
        $ModLoad imklog
        $ModLoad imfile

        $template WithoutTimeFormat,"[environment] [%syslogtag%] -- %msg%"

        $WorkDirectory /var/spool/rsyslog

        $InputFileName /home/user/my_app/shared/log/unicorn.stderr.log
        $InputFileTag unicorn-stderr
        $InputFileStateFile stat-unicorn-stderr
        $InputFileSeverity info
        $InputFileFacility local8
        $InputFilePollInterval 1
        $InputFilePersistStateInterval 1
        $InputRunFileMonitor

        # Forward to remote server
        if $syslogtag contains 'apache-' then @@my_server:5000;WithoutTimeFormat
        :syslogtag, contains, "apache-" ~
        *.* @@my_server:5000;SyslFormat

    Logrotate file:

        /home/user/shared/log/*.log {
            daily
            missingok
            dateext
            rotate 30
            compress
            notifempty
            extension gz
            copytruncate
            create 640 user user
            sharedscripts
            postrotate
                (stop rsyslog && rm /var/spool/rsyslog/stat-* && start rsyslog 2>&1) || true
            endscript
        }

    FYI, the file is readable by the rsyslog user, my server is reachable, and other log files which do not rotate on the same cycle continue to be tracked properly. I'm running Ubuntu 12.04.
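
    A hedged reading of what is happening: copytruncate shrinks the file in place, so imfile's persisted offset (in the stat file) points past the new end-of-file, and rsyslog reads nothing until that state is cleared. Newer rsyslog releases grew an imfile reopenOnTruncate option for exactly this case; on the rsyslog shipped with 12.04, a slightly less brutal variant of the working fix is to scope the state reset to this one file and use the service commands instead of kill -9:

        postrotate
            # surgical version of the dirty fix: reset only this
            # file's imfile state, then restart rsyslog once
            stop rsyslog >/dev/null 2>&1 || true
            rm -f /var/spool/rsyslog/stat-unicorn-stderr
            start rsyslog >/dev/null 2>&1 || true
        endscript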


  • Overheating Toshiba Satellite L300

    - by ldigas
    A colleague of mine is having trouble with his new Toshiba Satellite L300. I took a look at it, and indeed it's hot as hell: I couldn't hold my hand on it for too long. He says it also has a tendency to turn itself off (running WinXP 32-bit) with no forewarning. That hasn't happened to me while I was using it, but that wasn't long anyway.

    The first guess was that it was too dirty... the problem is it's new; it came out of the package a quarter of a year ago, has been kept in a clean environment (an office), looks clean, and there's no dust in sight. The second guess was that the fan wasn't working properly, because it does run only in intervals; but when I listen to it, it sounds like normal usage. I took a SpeedFan measurement, and it reports temperatures up to 85 °C, which is definitely too high.

    Does anyone know what else I could do with it? It is under warranty and it will go to service, but I thought if there is something we can do ourselves, we could avoid carrying it there and being without it for a week...


  • htaccess on remote server issues - password prompt not accepting input

    - by pying saucepan
    EDIT: I will contact the university about my problem after Labor Day weekend, but I thought that if someone knew a quick fix I haven't tried, or if the problem has an obvious solution, I could try my luck here first. Thanks!

    TL;DR: Sorry, it's a long post; I thought I should be... thorough. I am having a common issue (I found a dead thread through Google with no solution to the same problem): the username/password prompt produced by htaccess protection keeps popping up when I try to access my home directory on my university's server, which holds the .htaccess and .htpasswd files. It does not matter whether I enter correct or incorrect credentials; the prompt keeps asking for input without ever displaying my home directory. Ever since I added these ht files I have never once been able to get past the prompt, no matter what I tried, save for removing them from the directory I am trying to access (the top-level directory that I own). That kind of served my original goal of making the top-level directory inaccessible to casual users, but if I want to use this method in other places, I'd want it to work as intended. And I also like it when computers do what I wish they would, so any help is appreciated.

    Some things I have tried: changing the file/directory access rights. They told me to try these commands if people can't access my files:

        cd ~/public_html
        find ./ -type d -exec chmod 755 {} \;
        find ./ -type f -exec chmod 644 {} \;

    I entered the single-character name/pw at least twenty times in a row: no cheddar. So I changed directory with cd ~, in hopes that this would be my home directory; since my home directory contains the "public_html" directory, logic tells me that the ~ tilde is the top-level directory that I have ownership of. Then I ran those two commands to change the rights on the files inside. I am still having no luck.

    How I got to this point: I have been following the instructions given on my university's website ("Protect Web Directories"). I have everything in order except for one small detail that I feel probably does not matter: I am on Windows, so I am managing my allocated server space with WinSCP. The instructions indicate (in step 3) that I should use the command htpasswd -c .htpasswd {username}, where {username} is my folder that holds my allocated server space. But this command requires further input through the terminal, and unfortunately WinSCP does not offer that kind of functionality. So I looked up some basic instructions on htaccess, and the .htaccess file is formatted as follows:

        AuthType Basic
        AuthName "Verify"
        AuthUserFile /correctpath/.htpasswd
        require valid-user

    This file is in the root directory of my server space, next to the .htpasswd file, which has only this data inside:

        username:password

    I know these two files must be formatted correctly, at least according to the tutorial, because earlier my path was incorrectly formatted (it included some curly { braces } before I learned the correct way), and the prompt then responded by loading an error page saying to contact the OSU admin or something unimportant. But now that I have everything like it 'should' be, the prompt simply loops: when I enter my credentials, it pops up again and again, whether or not I enter correct information. The only exception is that if I click cancel, it directs me to a page saying that I need to enter a username and password.

    Note that I am very inexperienced at server-related business; two days ago I couldn't have told you what a website actually consists of. So if you use some technical jargon I may need to look it up and get back to you before I understand what you mean, but I am a quick learner and it probably won't matter.
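
    One detail that stands out (a guess, but a likely one): htpasswd stores a hash of the password, not the password itself, and Basic auth will never match a plaintext username:password line. Since WinSCP can't drive the interactive prompt, a hashed entry can be generated non-interactively and pasted into .htpasswd:

        # -n prints the entry to stdout, -b takes the password as an argument
        htpasswd -nb username password

        # no htpasswd binary handy? openssl can produce a compatible entry:
        echo "username:$(openssl passwd -apr1 password)"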


  • Why does my Xcode default font start to look ugly after some time, until I restart?

    - by mystify
    I plugged in an external monitor. All resolutions match perfectly, and the MacBook Pro's LCD is closed. After restarting, Xcode's editor fonts look very bad, and only in Xcode. When I restart the Mac and DON'T use an external monitor, the fonts look all right again. When I attach the monitor and close the MacBook Pro's LCD, the fonts look nice; then I close Xcode and reopen it, and the fonts suck.

    The only way to get the fonts looking good again is to disconnect the external monitor and reboot, then reconnect the external monitor, close the LCD, wait, hit any key, and let the external monitor be the only one. The fonts look nice, until I restart Xcode. I think it happens any time Xcode is launched with the external monitor attached, and the ugly fonts survive until reboot; unplugging the external monitor and restarting Xcode doesn't help. It seems like Xcode isn't antialiasing them properly after something happens. Is there a fix for this problem?

    EDIT: After trying a few more times, it seems it is possible to get the fonts to look nice by disconnecting the external monitor and reopening Xcode. Here are some little snapshots (screenshots not reproduced here): one labeled GOOD FONT, one labeled UGLY FONT. You can see how dirty the ugly font looks. It's very hard to read and it hurts the eyes, believe me. Sometimes the little "i" is almost invisible. I use the very eye-friendly Dusk style of Xcode (go to preferences and choose that, if you haven't already; a real pleasure for your eyes).
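
    Not a confirmed fix, but since the symptom looks like font smoothing being disabled when the external display becomes primary, forcing the smoothing level by hand is worth a try. AppleFontSmoothing is a real macOS preference; whether Xcode honors it on your OS version is an assumption:

        # 1 = light, 2 = medium, 3 = strong; log out and back in afterwards
        defaults write NSGlobalDomain AppleFontSmoothing -int 2

        # to revert to the default behavior:
        defaults delete NSGlobalDomain AppleFontSmoothing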


  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using Linux. I want a process (an IRC bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that?

    Additional information:

    - The bot creates a pid file, called bot.pid.
    - The bot reconnects itself, but only a few times. The network is too bad, so the bot sometimes kills itself because it gets no response.

    What I do currently (aka my approach ;) ): I have a cron job executing startbot.rb every 5 minutes. (The script itself is in the same directory as the bot.) The script:

        #!/usr/bin/ruby
        require 'fileutils'

        if File.exists?(File.expand_path('tmp/bot.pid'))
          @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
          begin
            raise "ouch" if Process.kill(0, @pid) != 1
          rescue
            puts "Removing abandoned pid file"
            FileUtils.rm(File.expand_path('tmp/bot.pid'))
            puts "Starting the bot!"
            Kernel.exec(File.expand_path('./bot.rb'))
          else
            puts "Bot up and running!"
          end
        else
          puts "Starting the bot!"
          Kernel.exec(File.expand_path('./bot.rb'))
        end

    What this does: it checks whether the pid file exists; if so, it checks whether kill -s 0 BOT_PID == 1 (i.e. whether the bot is running), and it starts the bot if either of the two checks fails. My approach seems quite dirty, so how do I do it better?
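
    Since the goal is something less dirty: the classic answer is to let a supervisor respawn the bot instead of polling its pid file. A minimal sketch, assuming the bot runs in the foreground and exits when it finally gives up on the connection:

        #!/bin/sh
        # run-bot.sh - restart the bot whenever it dies; start this once
        # at boot (e.g. from an @reboot cron entry or an init script)
        while true; do
            /path/to/bot.rb
            echo "bot exited (status $?), restarting in 30s" >&2
            sleep 30
        done

    Supervisors like runit, daemontools, or Upstart's respawn stanza do the same job with proper logging and no pid-file races.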


  • Monitoring memcached with plink

    - by kojiro
    I need a telnet client that can take commands from a file or stdin so I can do some quick-and-dirty automatic monitoring of memcached. I thought plink would be good for this, but it seems to be doing something beyond what I need.

    If I telnet into localhost 11211 and type stats, I get the memcached stats, like so:

        $ telnet localhost 11211
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        stats
        STAT pid 25099
        STAT uptime 91182
        STAT time 1349191864
        STAT version 1.4.5
        STAT pointer_size 64
        STAT rusage_user 3.570000
        STAT rusage_system 2.740000
        STAT curr_connections 5
        STAT total_connections 23
        STAT connection_structures 11
        STAT cmd_get 0
        STAT cmd_set 0
        STAT cmd_flush 0
        STAT get_hits 0
        STAT get_misses 0
        STAT delete_misses 0
        STAT delete_hits 0
        STAT incr_misses 0
        STAT incr_hits 0
        STAT decr_misses 0
        STAT decr_hits 0
        STAT cas_misses 0
        STAT cas_hits 0
        STAT cas_badval 0
        STAT auth_cmds 0
        STAT auth_errors 0
        STAT bytes_read 82184
        STAT bytes_written 7210
        STAT limit_maxbytes 67108864
        STAT accepting_conns 1
        STAT listen_disabled_num 0
        STAT threads 4
        STAT conn_yields 0
        STAT bytes 0
        STAT curr_items 0
        STAT total_items 0
        STAT evictions 0
        STAT reclaimed 0
        END

    But with plink, I get an odd error. I'm using this command:

        watch -n 30 plink -v -telnet -P 11211 127.0.0.1 <<< $'\nstats'

    The first time through I get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        ERROR
        STAT pid 25099
        STAT uptime 91245
        STAT time 1349191927
        STAT version 1.4.5
        …
        END

    But when watch repeats the command I just get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        Failed to connect to 127.0.0.1: Connection reset by peer
        Connection reset by peer
        FATAL ERROR: Connection reset by peer

    What is plink doing here that is different from normal telnet? How should I be going about this? (I'm not married to plink, but I need a way to continuously send simple telnet commands to memcached without writing a full-fledged Perl script.)
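
    Those WILL/DO lines are telnet option negotiation, which plink emits in -telnet mode and which memcached does not speak; the stray negotiation bytes are what produces that first ERROR line. A plain TCP client sidesteps the whole problem. A netcat sketch:

        # 'quit' closes the session cleanly so nc exits on its own
        watch -n 30 'printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211'

    plink also has a -raw mode that should skip the negotiation, if staying with plink is preferable.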


  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically following the pattern:

        File does not exist: /var/www/file, referer: http://my.server.com/file

    In human terms: the file was not found, though it referenced itself as the referer. That's hardly possible, so it's a clear hacking attempt (and the REQUEST_URIs often enough suggest the same). In my eyes a clear case for fail2ban, if I could get backreferences to work here:

        failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$

    (Justin Case: the above example assumes the DocumentRoot of that webserver is /var/www.)

    I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement on backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that, which I can elaborate on in an answer if anyone is interested, or doing something similar using mod_security)?

    As an entire log line was requested:

        [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;

    (Sorry, the logs were just rotated, so this long candidate was the only one left; minor adjustments were made for privacy reasons.)
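
    fail2ban failregexes are Python regular expressions, so the (.+) ... \1 syntax is at least legal; whether it survives fail2ban's template handling is the open question. A hedged way to find out without touching the jail is fail2ban-regex, which tests a candidate regex against a real log file (the %(...)s interpolation only works inside filter files, so the client prefix is inlined here):

        fail2ban-regex /var/log/apache2/error.log \
            '\[error\] \[client <HOST>\] File does not exist: /var/www(.+), referer: \S+\1$'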


  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    i have issue of web-brower response ( especially on ie ) very slow, some time time out, and sometime hang out up to 20 seconds for one file redirect 301 when test with "f12 derverloper tool of ie" .. it report wait/start time very long. but after got connected the elements on web weill be dowload and show out fast ( test at xaluan.com ) It most happen when active user on web more than 2100 ( use google real time live analytic ). server running cenos 5 with ngix, apache, 32core cpu, 96G ram, raid 10 sas hdd.. == flowing is my config == user nobody; # no need for more workers in the proxy mode worker_processes 28; #old 32 #good at 24 error_log /var/log/nginx/error.log; #old add in end: info worker_rlimit_nofile 22528; events { worker_connections 22528; use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks off; sendfile on; tcp_nopush on; tcp_nodelay on; server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 25; #old 5 gzip on; #old on gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; ignore_invalid_headers on; client_header_timeout 1m; #3m client_body_timeout 1m; #3m send_timeout 1m; #3m reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 100M; client_body_buffer_size 256k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m; limit_conn limit_per_ip 20; limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s; limit_req zone=allips burst=200 nodelay; include "/etc/nginx/vhosts/*"; } =========== I have play around with worker config 1- tried increase as some one suggess: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768 2- tried to set low: worker_processes = 28 and other worker at 22582 and other solution too .. but not work cause some time it make server load hight very quick 3- tried to comment out the # worker_rlimit_nofile . so it will be unlimited. it look like solved a bit about issue response time. but it also make server high load quick in peak time... Please help thanks PS: other apache you may have look for help me out thanks Listen 0.0.0.0:8081 User nobody Group nobody ExtendedStatus On ServerAdmin [email protected] ServerName server.xaluan.com LogLevel warn # These can be set in WHM under 'Apache Global Configuration' Timeout 100 TraceEnable Off ServerSignature Off ServerTokens ProductOnly FileETag None StartServers 15 <IfModule prefork.c> MinSpareServers 20 MaxSpareServers 50 #MaxSpareServers 40 </IfModule> ServerLimit 1572 MaxClients 1572 MaxRequestsPerChild 4000 # MaxRequestsPerChild 3000 KeepAlive On KeepAliveTimeout 3 MaxKeepAliveRequests 300 #MaxKeepAliveRequests 130


  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS + hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for convenient access to my files, hosting my personal websites, private git repositories, etc.

    I've got a main web server with DNS, and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I'm intending to use my VPSes to provide myself with a dynamic DNS system, so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi.

    HOWEVER... I don't have access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward ports on the router to my own machines. As such, I'm wondering if it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes. My plan is to have the Raspberry Pi provide a VPN into my home network: the Pi uses SSH to connect to the VPS, and the VPS forwards any traffic for home.mydomain.com through the tunnel to the Pi.

    Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low-level tools, I'm just not sure where to start or what the best way to go about it is.
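
    This is exactly what SSH remote port forwarding (ssh -R) is for. A minimal sketch, with hostnames and ports as placeholders, and assuming the Pi can SSH out to the VPS (outbound SSH is rarely blocked on managed internet):

        # on the raspberry pi: publish the home web server (localhost:80)
        # as port 8080 on the VPS's loopback interface
        ssh -N -R 8080:localhost:80 user@vps.mydomain.com

        # in practice, autossh keeps the tunnel alive across drops:
        autossh -M 0 -f -N -R 8080:localhost:80 user@vps.mydomain.com

    On the VPS, a reverse proxy (or GatewayPorts yes in sshd_config) then serves home.mydomain.com by forwarding to localhost:8080. The same pattern works for any TCP service, one -R per port.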


  • Do hard drive enclosures fail/is it the HDD or enclosure?

    - by x0a
    I'm having a whole host of problems with an external hard drive that was working just fine a couple of hours ago. I've had this problem once before, about 3 months ago; here's what I documented then:

        So a couple of hours ago I turned off all my computers and shut off the power to all my devices in my room, then went and turned the power off at the main switch so I could change an outlet. A couple hours later, after I've slowly turned everything back on, I go to my Xbox to try and watch a movie and it can't seem to list any of the movies I've got. So I go to my desktop to find that my external hard drive isn't there... even though it's on and connected. It's also stationary and hidden behind something, so there's not a whole lot of tampering/physical wear on that external.

        I plug it into my laptop to try and see what's going on. It starts making an endless loud screeching noise, none of that clicking that's usually associated with HD damage. It's not listed in My Computer, and it shows up in Disk Management as "uninitialized", asking me to choose between two different partition types. After carefully disconnecting it and connecting it back, it asks me to format it, which I cancel. I start googling about my issue, starting to accept the situation, torn as hell and helpless and just about ready to toss the thing. Suddenly the screeching stops, after almost 45 minutes, and Disk Management lists the drive as "Online" and "Healthy". Explorer pops up with all my files!

        I'm still being really careful with it and treating it as though it's in fragile shape. I've downloaded some S.M.A.R.T. software and every value is listed as "OK": no reallocated sectors, no read errors, no seek errors. I also ran a quick self-test, which completed without error. Everything seems fine; it looks to be a perfectly healthy external hard drive. So what the hell was that about? Was it doing some sort of maintenance or self-test? How am I supposed to tell the difference? I would've undoubtedly killed the drive for sure had it gone on a bit longer.

    I've got the same problem now, with one exception: it doesn't magically reappear after the screeching stops. Occasionally I manage to get some S.M.A.R.T. diagnostics information, which basically reads everything as fine. The only problem is that my HD isn't initializing (so I can't access anything on it). I'm able to run a quick SMART self-test successfully but not an extended one (I've only tried it once, but got conflicting indications as to whether it was actually making any progress; it was stuck on the random-read test).

    So, final question (if all else fails): could the hard drive enclosure be failing rather than the HDD? Is this a likely possibility at all? How would I know?
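
    To separate "drive is dying" from "enclosure bridge is flaky", one hedged test: read SMART through the enclosure, then (if at all possible) with the drive cabled directly to a SATA port. Results that only appear when directly attached point at the enclosure's USB bridge rather than the disk:

        # -d sat tunnels ATA commands through the USB bridge; some
        # enclosures need it, some refuse SMART passthrough entirely
        smartctl -a -d sat /dev/sdX

        # the long self-test exercises the disk surface itself,
        # independent of which bus it is on
        smartctl -t long /dev/sdX
        smartctl -l selftest /dev/sdX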


  • Mount a VHD on Mac OS X

    - by janm
    Is it possible (and how) to mount a VHD file created by Windows 7 in OS X?

    I found some information about how to do this on Linux. There is a FUSE filesystem, "vdfuse", which uses the VirtualBox libraries to mount filesystems supported by VirtualBox. However, I was unable to compile the package on OS X because nearly all the headers were missing, and I doubted it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at the example filesystems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"

        gcc -arch i386 "${infile}" \
            "${INSTALL_DIR}"/VBoxDD.dylib \
            "${INSTALL_DIR}"/VBoxDDU.dylib \
            "${INSTALL_DIR}"/VBoxVMM.dylib \
            "${INSTALL_DIR}"/VBoxRT.dylib \
            "${INSTALL_DIR}"/VBoxDD2.dylib \
            "${INSTALL_DIR}"/VBoxREM.dylib \
            -o "${outfile}" \
            -I"${incdir}" -I"/usr/local/include/fuse" \
            -Wl,-rpath,"${INSTALL_DIR}" \
            -lfuse_ino64 \
            -Wall ${CFLAGS}

    You don't actually need to compile VirtualBox on your machine, just install a recent version of it.

    So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback filesystem, and MacFUSE's loopback fs does not work with block files, so we still need a loopback fs to mount the block files as actual partitions.
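
    For reference, a hedged usage sketch of the resulting binary (the option names follow the vdfuse thread linked above; treat them as assumptions for your build):

        mkdir /Volumes/vhd
        ./vdfuse -f /path/to/disk.vhd /Volumes/vhd
        ls /Volumes/vhd
        # the partitions appear here as block files: Partition1, Partition2, ...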


  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me.

    When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc., so I can keep my sanity. I guess the actual IP address is something set in Linux itself, and Xen can't touch that; but then what's the best practice for getting it done?

    If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you then have to hardcode a large table of MAC-to-IP mappings, and always provision new guests with the correct MAC address on the virtual ethernet adapter?

    What we currently do is boot a new instance from an "app server" image and then finalize it with a script that (among other things) modifies /etc/network/interfaces to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
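
    One common pattern (a sketch of usual practice, not the only answer): assign each VIF an explicit MAC at provision time, and keep the MAC-to-IP table in exactly one place, the DHCP server. The UUIDs and addresses below are hypothetical:

        # XenServer CLI: create the guest's VIF with a fixed MAC
        xe vif-create vm-uuid=<VM-UUID> network-uuid=<NETWORK-UUID> \
            device=0 mac=aa:bb:cc:00:00:01

        # dhcpd.conf: reserve the matching address
        host appserver1 {
            hardware ethernet aa:bb:cc:00:00:01;
            fixed-address 10.20.0.1;
        }

    That way the golden image stays generic (plain DHCP inside the guest) and the numbering plan lives in one file instead of in every guest.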


  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of an important farm of Unix servers. So far, I have come up with a way to assess the basic configuration, but not the installed updates.

    The problem is that I just can't trust the package management tools on those machines. Some of them have not synced with a repository for a long time (so I can't just run "yum check-update" on Red Hat, for example), and some of the servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems, AIX, Debian, CentOS/Red Hat, etc., so the versions may differ (AIX) and the available tools will differ. And, last but not least, I can't install anything on the target systems.

    So I need to use a script to retrieve the information and either process it directly or save it to be processed later on a server (which may happen to run a different distribution than the one the information was retrieved from). The best ideas I could come up with were:

    - Retrieve the list of installed packages on the machine (dpkg -l on Debian, for example) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories). Still, the problem remains the same for AIX and Red Hat...
    - Use Nessus scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty.

    Does anyone know a better or more efficient way of doing this?

    P.S.: I have already taken time to review answers to similar problems. Unfortunately Chef, Puppet, etc. don't meet the requirements I have to work under.

    Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems, as they're not mine. All I have are scripting languages. Thanks.
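
    For the collection side, a hedged sketch of a single inventory script that needs nothing beyond the stock tools (the flags used are standard on each platform, but verify them against the versions in your farm):

        #!/bin/sh
        # dump installed packages, one "name version" line per package
        case "$(uname -s)" in
          Linux)
            if command -v dpkg-query >/dev/null 2>&1; then
              dpkg-query -W -f '${Package} ${Version}\n'
            elif command -v rpm >/dev/null 2>&1; then
              rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'
            fi ;;
          AIX)
            lslpp -Lc ;;    # colon-separated fileset inventory
        esac

    The output can then be diffed offline against each distribution's security metadata (Debian Packages files, Red Hat updateinfo, AIX fix bulletins) on the processing server.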


  • Wireless card overheating?

    - by Sidney
    OK, so I've had my laptop for several years (I want to say 4, but possibly more); it's a Toshiba Satellite. I'm running Linux Mint 15, and am having a strange new issue: after several hours of running, my wireless stops working. It can SEE wireless networks, but refuses to connect to any of them. (As a side note, connecting to a router with a cable at this point works fine.)

    The fact that it can SEE the networks makes me think the card is in good condition and the problem is software-related. The fact that it works for several hours before booting me makes me think perhaps the transmitter is getting too hot. I don't use my laptop in dusty environments, and I keep it on an elevated surface (alternatively, I actively try not to let it sit on soft surfaces where the vents get covered). I spray out the CPU fan with compressed air about once a year, so I really don't think the insides should be too dirty. Finally, sensors unfortunately only gives me CPU temps, which run about 40-50 °C; from my understanding that is perfectly normal for an i3.

    Does anyone have any suggestions on how I can determine the root cause of this?
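
    A hedged first step for pinning down software vs. hardware: when the Wi-Fi next drops, check whether the driver logged a reset, a firmware crash, or a thermal event at that moment; heat problems usually leave a trace there:

        # kernel messages from the wireless driver around the failure
        dmesg | grep -iE 'wlan|firmware|iwl|ath|rtl' | tail -20

        # NetworkManager's view of the disconnect (Mint logs to syslog)
        grep NetworkManager /var/log/syslog | tail -20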


  • Is there a clean way to obtain exclusive access to a physical partition under Windows?

    - by zneak
    Hey guys, I'm trying, under Windows 7, to run a VMware Player virtual machine from an OS installed on a physical partition. However, when I boot the virtual machine, VMware Player says that it couldn't access the physical drive for writing. This seems to be a generally acknowledged problem in the VMware community, as Windows Vista introduced a compelling new security feature that makes it impossible to write to a raw drive without obtaining exclusive access to it first.

    I have googled the issue and found a few workarounds. However, the clean ones seem to work only on whole physical disks, not on individual partitions, which leaves me with the dirty solution: in short, it meddles with the MBR to erase any trace of the partitions in use, makes Windows forget about them, then restores the MBR so the VM can launch. I'm not sure I want to do that.

    Is there a way to let VMware acquire exclusive access to the partition without requiring me to nuke it away? What I'd be looking for, I suppose, is a way to take just partitions offline instead of whole physical drives.
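
    For reference, the whole-disk variant of the clean workaround is a couple of diskpart commands (this detaches the entire disk, which is precisely why it doesn't solve the per-partition case):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> offline disk

    A per-volume half-measure that is sometimes enough (an assumption worth testing against your VMware version): remove the volume's mount point so no Windows component holds it open, e.g. mountvol D:\ /D, or delete the drive letter in Disk Management.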


  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for a smartphone app, serving XML-RPC methods over HTTPS and using Basic auth. All XML-RPC methods require a username and password. I would now like to implement an XML-RPC method that provides registration to the system; obviously, this method should not require a username and password.

    The following is the Apache conf section responsible for Basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim with the current setup:

    1. Have a dummy username/password to be used when trying to register with the system.
    2. Have a separate Django/XML-RPC application on another URL (e.g. /register) that is not protected by Basic auth.

    Both of them are very ugly, especially as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to leave a single XML-RPC call (i.e. a defined POST request) unprotected, even though all XML-RPC calls over /RPC2 are protected?
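
    The awkward part is that every XML-RPC method arrives as a POST to the same /RPC2 URL, so Apache has no path on which to distinguish methods: auth happens before the XML body is ever parsed. A hedged, tidier version of option 2 is to mount the same Django app a second time on an open path that exposes only the registration method (/RPC2-open is an invented name):

        # /RPC2 stays exactly as above; the open endpoint skips auth
        <Location /RPC2-open>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>

    On the Django side, /RPC2-open would route to a dispatcher that registers only the registration method. From the protocol's point of view it is still one service, just with an authenticated and an unauthenticated mount point.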


  • NTFS frequent corruption when writing many small files, index $I30 error

    - by david sedai
    I'm running Windows 7 Ultimate on a laptop with a 500 GB HDD, with all partitions formatted as NTFS. I do a lot of programming and LaTeX typesetting, both of which involve a large amount of reading, writing, and deleting of many small files, such as C++ library headers or LaTeX packages.

    The problem is that frequently, when there is a large number of writes to small files, the partition being written to becomes corrupted: chkntfs e: returns "dirty", where e: is the partition being written to. I've reformatted the drive, and I've contacted the laptop manufacturer and had the HDD checked: the HDD is not faulty and there are no bad sectors. I've also tried a brand-new HDD, to no avail, and the other partition on the same physical drive doesn't have this issue, so I'm pretty sure it's not hardware-related.

    I've searched the Microsoft support pages; one page (http://support.microsoft.com/kb/982018) provides an update for Advanced Format disks, which I've already installed. The chkdsk log shows $I30 index errors. I'm at a loss here. Can anyone help? Thanks in advance.
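
    A hedged way to catch the corruption closer to when it happens, rather than after the dirty bit is already set:

        :: query the dirty bit without running a scan
        fsutil dirty query e:

        :: full repair pass on the affected volume
        chkdsk e: /f

        :: NTFS self-healing state and repair history (Vista and later)
        fsutil repair query e: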

