Search Results

Search found 9982 results on 400 pages for 'state monad'.

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc: # .bashrc if [[ "$(uname)" != "Darwin" ]]; then # non mac os x # source global bashrc if [[ -f "/etc/bashrc" ]]; then . /etc/bashrc fi export TERM='xterm-256color' # probably shouldn't do this fi # bash prompt with colors # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ] # ==> PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> " # execute only in Mac OS X if [[ "$(uname)" == 'Darwin' ]]; then # if OS X has a $HOME/bin folder, then add it to PATH if [[ -d "$HOME/bin" ]]; then export PATH="$PATH:$HOME/bin" fi alias ls='ls -G' # ls with colors fi alias ll='ls -lah' # long listing of all files with human readable file sizes alias tree='tree -C' # turns on coloring for tree command alias mkdir='mkdir -p' # create parent directories as needed alias vim='vim -p' # if more than one file, open files in tabs export EDITOR='vim' # super-secret work stuff if [[ -f "$HOME/.workbashrc" ]]; then . $HOME/.workbashrc fi # Add RVM to PATH for scripting if [[ -d "$HOME/.rvm/bin" ]]; then # if installed PATH=$PATH:$HOME/.rvm/bin fi and my Xdefaults: ! URxvt config ! colors! URxvt.background: #101010 URxvt.foreground: #ededed URxvt.cursorColor: #666666 URxvt.color0: #2E3436 URxvt.color8: #555753 URxvt.color1: #993C3C URxvt.color9: #BF4141 URxvt.color2: #3C993C URxvt.color10: #41BF41 URxvt.color3: #99993C URxvt.color11: #BFBF41 URxvt.color4: #3C6199 URxvt.color12: #4174FB URxvt.color5: #993C99 URxvt.color13: #BF41BF URxvt.color6: #3C9999 URxvt.color14: #41BFBF URxvt.color7: #D3D7CF URxvt.color15: #E3E3E3 ! 
options URxvt*loginShell: true URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12 URxvt*saveLines: 8192 URxvt*scrollstyle: plain URxvt*scrollBar_right: true URxvt*scrollTtyOutput: true URxvt*scrollTtyKeypress: true URxvt*urlLauncher: google-chrome and finally my vimrc set nocompatible set dir=~/.vim/ " set one place for vim swap files " vundler for vim plugins ---- filetype off set rtp+=~/.vim/bundle/vundle call vundle#rc() Bundle 'gmarik/vundle' Bundle 'tpope/vim-surround' Bundle 'greyblake/vim-preview' Bundle 'Lokaltog/vim-powerline' Bundle 'tpope/vim-endwise' Bundle 'kien/ctrlp.vim' " ---------------------------- syntax enable filetype plugin indent on " Powerline ------------------ set noshowmode set laststatus=2 let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font) set encoding=utf-8 " ---------------------------- " ctrlp ---------------------- let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing let g:ctrlp_prompt_mappings = { \ 'AcceptSelection("e")': ['<c-t>'], \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'], \ } " remap <cr> to open file in a new tab " ---------------------------- set showcmd set tabpagemax=100 set hlsearch set incsearch set nowrapscan set ignorecase set smartcase set ruler set tabstop=4 set shiftwidth=4 set expandtab set wildmode=list:longest autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace " :REV to "revert" file to state of the most recent save command REV earlier 1f " disable netrw -------------- let g:loaded_netrw = 1 let g:loaded_netrwPlugin = 1 " ---------------------------- Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
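
    A minimal sketch of one thing worth ruling out (not from the original post): vim may simply not be told that this terminal supports 256 colors. Instead of exporting TERM from .bashrc, urxvt can advertise it itself via an X resource, and vim can be forced into 256-color mode. This assumes the rxvt-unicode-256color terminfo entry is installed; the resource and option names are standard but untested against this exact setup.

        ! in ~/.Xdefaults -- let urxvt set TERM itself instead of .bashrc
        URxvt.termName: rxvt-unicode-256color

        " in ~/.vimrc -- force 256-color mode before the Powerline settings load
        set t_Co=256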

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results, and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. This workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly. Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. *** Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50) Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed Dec 15 19:30:47 localhost configd[15]: network configuration changed. Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0 Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2 Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed. 
Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local" Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000 Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed. Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20 Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available! Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000 Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log ``You can see the following line in /var/log/kernel.log that shows the en0 interface coming up: Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
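
    A sketch of one possible workaround (an assumption, not something from the post): instead of a fixed sleep, have the launchd-invoked script poll until a default route exists, then add the static route from the question. Paths and route syntax follow stock Mac OS X 10.6 but are untested here.

        #!/bin/sh
        # wait (up to ~60s) for DHCP to install a default route, then add ours
        n=0
        until /usr/sbin/netstat -rn | grep -q '^default'; do
            n=$((n+1)); [ "$n" -ge 60 ] && exit 1
            sleep 1
        done
        /sbin/route -n add -net 192.168.0.0/23 192.168.2.2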

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008) - but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work as I get this from lspci: 00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03) 00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03) Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle that a controller will revert to IDE mode due to how Apple support Windows. They recommend disabling sleep. From author of patchedcode.bin I think Enabling AHCI for Windows on MacBooks NB: I do not have bootcamp installed and I do not have Windows installed. Is there a way to prove that my controller is in IDE or AHCI mode? Background Data Using patchedcode.bin MBR I get this in syslog: Jun 12 22:33:22 max kernel: [ 1.860955] ahci 0000:00:1f.2: version 3.0 Jun 12 22:33:22 max kernel: [ 1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X Jun 12 22:33:22 max kernel: [ 1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode Jun 12 22:33:22 max kernel: [ 1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems Jun 12 22:33:22 max kernel: [ 1.861130] ahci 0000:00:1f.2: setting latency timer to 64 Jun 12 22:33:22 max kernel: [ 1.880880] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) Jun 12 22:33:22 max kernel: [ 1.880983] scsi2 : ahci Jun 12 22:33:22 max kernel: [ 1.884552] scsi3 : ahci Jun 12 22:33:22 max kernel: [ 1.886932] scsi4 : ahci Jun 12 22:33:22 max kernel: [ 1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45 Jun 12 22:33:22 max kernel: [ 1.887000] ata4: DUMMY Jun 12 22:33:22 max kernel: [ 1.887002] ata5: DUMMY Jun 12 22:33:22 max kernel: [ 2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 12 22:33:22 max kernel: [ 2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 12 22:33:22 max kernel: [ 2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA Jun 12 22:33:22 max kernel: [ 2.205324] ata3.00: configured for UDMA/100 Jun 12 22:33:22 max kernel: [ 2.205554] scsi 2:0:0:0: Direct-Access ATA FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5 Using my original MBR I get this from syslog: Jun 13 18:07:13 max kernel: [ 0.622861] ata_piix 0000:00:1f.1: version 2.13 Jun 13 18:07:13 max kernel: [ 0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0 Jun 13 18:07:13 max kernel: [ 0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.623339] scsi0 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623730] scsi1 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21 Jun 13 18:07:13 max kernel: [ 0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21 Jun 13 18:07:13 max kernel: [ 0.623810] ata_piix 0000:00:1f.2: MAP [ Jun 13 18:07:13 max kernel: [ 0.623811] P0 -- -- -- ] Jun 13 18:07:13 max kernel: [ 0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.624241] scsi2 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624558] scsi3 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 
18 Jun 13 18:07:13 max kernel: [ 0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18 Jun 13 18:07:13 max kernel: [ 1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 13 18:07:13 max kernel: [ 1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32) Jun 13 18:07:13 max kernel: [ 1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33 Jun 13 18:07:13 max kernel: [ 1.216186] ata3.00: configured for UDMA/100 Jun 13 18:07:13 max kernel: [ 1.224396] ata1.01: configured for UDMA/33
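
    The syslog excerpts above already answer this indirectly (ahci vs ata_piix, and NCQ "depth 31/32" vs "depth 0/32"); the commands below confirm it on the running system. This is only a sketch: the PCI address 00:1f.2 is taken from the lspci output in the question.

        # which kernel driver claimed the SATA controller? ahci => AHCI mode, ata_piix => legacy IDE
        lspci -k -s 00:1f.2

        # the same check via sysfs
        readlink /sys/bus/pci/devices/0000:00:1f.2/driver

        # NCQ is another strong hint: AHCI reports e.g. "NCQ (depth 31/32)", IDE reports "depth 0/32"
        dmesg | grep -i ncq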

  • How do you handle authentication across domains?

    - by William Ratcliff
    I'm trying to save users of our services from having to have multiple accounts/passwords. I'm in a large organization and there's one group that handles part of user authentication for users who are from outside the facility (primarily for administrative functions). They store a secure cookie to establish a session and communicate only via HTTPS via the browser. Sessions expire either through: 1) explicit logout of the user 2) Inactivity 3) Browser closes My team is trying to write a web application to help users analyze data that they've taken (or are currently taking) while at our facility. We need to determine if a user is 1) authenticated 2) Some identifier for that user so we can store state for them (what analysis they are working on, etc.) So, the problem is how do you authenticate across domains (the authentication server for the other application lives in a border region between public and private--we will live in the public region). We have come up with some scenarios and I'd like advice about what is best practice, or if there is one we haven't considered. Let's start with the case where the user is authenticated with the authentication server. 1) The authentication server leaves a public cookie in the browser with their primary key for a user. If this is deemed sensitive, they encrypt it on their server and we have the key to decrypt it on our server. When the user visits our site, we check for this public cookie. We extract the user_id and use a public api for the authentication server to request if the user is logged in. If they are, they send us a response with: response={ userid :we can then map this to our own user ids. If necessary, we can request additional information such as email-address/display name once (to notify them if long running jobs are done, or to share results with other people, like with google_docs). account_is_active:Make sure that the account is still valid session_is_active: Is their session still active? If we query this for a valid user, this will have a side effect that we will reset the last_time_session_activated value and thus prolong their session with the authentication server last_time_session_activated: let us know how much time they have left ip_address_session_started_from:make sure the person at our site is coming from the same ip as they started the session at } Given this response, we either accept them as authenticated and move on with our app, or redirect them to the login page for the authentication server (question: if we give an encrypted portion of the response (signed by us) with the page to redirect them to, do we open any gaping security holes in the authentication server)? The flaw that we've found with this is that if the user visits evilsite.com and they look at the session cookie and send a query to the public api of the authentication server, they can keep the session alive and if our original user leaves the machine without logging out, then the next user will be able to access their session (this was possible before, but having the session alive eternally makes this worse). 2) The authentication server redirects all requests made to our domain to us and we send responses back through them to the user. Essentially, they act as a proxy. The advantage of this is that we can handshake with the authentication server, so it's safe to be trusted with the email address/name of the user and they don't have to reenter it So, if the user tries to go to: authentication_site/mysite_page1 they are redirected to mysite. 
Which would you choose, or is there a better way? The goal is to minimize the "Yet Another Username / Yet Another Password" problem... Thanks!!!!
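
    One hedged sketch of how scenario 1 could avoid the keep-alive flaw (this is an assumption, not part of the original design): have the authentication server set a self-contained token carrying the userid, its own expiry, and an HMAC over both with a shared secret, so our site can validate it locally; checking it neither requires calling the public API nor prolongs the upstream session. All names and values below are placeholders.

        # token format: "userid|expires-epoch|hmac"
        secret='shared-secret-example'            # hypothetical, agreed out of band
        token='user123|1736899200|<hmac-hex>'     # what the auth server would put in the cookie

        uid=${token%%|*}; rest=${token#*|}; exp=${rest%%|*}; sig=${rest#*|}
        want=$(printf '%s|%s' "$uid" "$exp" | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)
        if [ "$want" = "$sig" ] && [ "$exp" -gt "$(date +%s)" ]; then
            echo "accept $uid"
        else
            echo "reject"
        fi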

  • Can't launch GlassFish on EC2 - can't open port

    - by orange80
    I'm trying to start glassfish on an EBS-based AMI of Ubuntu 10.04 64-bit. I have used glassfish on non-ec2 servers with no problems, but on ec2 I get this message: $ sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4848 -- it probably is another instance of a GlassFish server. Command start-domain failed. I know that ec2 has requires that firewall rules be modified using ec2-authorize to let outside traffic thru the firewall, as I had to do to make ssh work. This still doesn't explain the port error when all I'm trying to do is start glassfish so I can try $ wget localhost:8080and make sure it's working. This is very frustrating and I'd really appreciate any help. Thanks. FINAL UPDATE: Sorry if you came here looking for answers. I never figured out what was causing the problem. I created another fresh instance, installed the same stuff, and Glassfish worked perfectly. Something obviously got boned during installation, but I have no idea what. I guess it will remain a mystery. UPDATE: Here's what I get from netstat: # netstat -nuptl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 462/sshd tcp6 0 0 :::22 :::* LISTEN 462/sshd udp 0 0 0.0.0.0:5353 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:1194 0.0.0.0:* 589/openvpn udp 0 0 0.0.0.0:37940 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:68 0.0.0.0:* 377/dhclient3 UPDATE: One more thing... I know that the "net.ipv6.bindv6only" kernel option can cause problems with java networking, so I did set this: # sysctl -w net.ipv6.bindv6only=0 UPDATE: I also verified that it has nothing at all to do with the port number (4848). As you can see here, when I changed the admin-listener port in domain.xml to 4949, I get a similar message: # sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4949 -- it probably is another instance of a GlassFish server. Command start-domain failed. UPDATE: Here are the contents of /etc/hosts: 127.0.0.1 localhost # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts I should mention that I have another Ubuntu Lucid 10.04 64-bit slice that is NOT hosted on ec2, and set it up the exact same way with no problems whatsoever. 
Also server.log doesn't offer much insight either: # cat ./server.log Nov 20, 2010 8:46:49 AM com.sun.enterprise.admin.launcher.GFLauncherLogger info INFO: JVM invocation command line: /usr/lib/jvm/java-6-sun-1.6.0.22/bin/java -cp /opt/glassfishv3/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=192m -XX:NewRatio=2 -XX:+LogVMOutput -XX:LogFile=/opt/glassfishv3/glassfish/domains/domain1/logs/jvm.log -Xmx512m -client -javaagent:/opt/glassfishv3/glassfish/lib/monitor/btrace-agent.jar=unsafe=true,noServer=true -Dosgi.shell.telnet.maxconn=1 -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -Dfelix.fileinstall.dir=/opt/glassfishv3/glassfish/modules/autostart/ -Djavax.net.ssl.keyStore=/opt/glassfishv3/glassfish/domains/domain1/config/keystore.jks -Dosgi.shell.telnet.port=6666 -Djava.security.policy=/opt/glassfishv3/glassfish/domains/domain1/config/server.policy -Dfelix.fileinstall.poll=5000 -Dcom.sun.aas.instanceRoot=/opt/glassfishv3/glassfish/domains/domain1 -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dosgi.shell.telnet.ip=127.0.0.1 -Djava.endorsed.dirs=/opt/glassfishv3/glassfish/modules/endorsed:/opt/glassfishv3/glassfish/lib/endorsed -Dcom.sun.aas.installRoot=/opt/glassfishv3/glassfish -Djava.ext.dirs=/usr/lib/jvm/java-6-sun-1.6.0.22/lib/ext:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/ext:/opt/glassfishv3/glassfish/domains/domain1/lib/ext -Dfelix.fileinstall.bundles.new.start=true -Djavax.net.ssl.trustStore=/opt/glassfishv3/glassfish/domains/domain1/config/cacerts.jks -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Djava.security.auth.login.config=/opt/glassfishv3/glassfish/domains/domain1/config/login.conf -DANTLR_USE_DIRECT_CLASS_LOADING=true -Dfelix.fileinstall.debug=1 -Dorg.glassfish.web.rfc2109_cookie_names_enforced=false -Djava.library.path=/opt/glassfishv3/glassfish/lib:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64/server:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64:/usr/lib/jvm/java-6-sun-1.6.0.22/lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -domainname domain1 -asadmin-args start-domain,,,domain1 -instancename server -verbose false -debug false -asadmin-classpath /opt/glassfishv3/glassfish/modules/admin-cli.jar -asadmin-classname com.sun.enterprise.admin.cli.AsadminMain -upgrade false -domaindir /opt/glassfishv3/glassfish/domains/domain1 -read-stdin true
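
    For anyone hitting the same wall: a frequently reported cause of this exact message (not confirmed for this instance, since the poster never found the culprit) is that asadmin's port check fails when the machine's own hostname does not resolve, which is common on fresh EC2 instances whose /etc/hosts only maps localhost. A hedged sketch of how to check and work around it:

        # does the instance's own hostname resolve?
        hostname
        getent hosts "$(hostname)" || echo "hostname does not resolve"

        # if not, map it to loopback and retry start-domain
        echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts

        # independently confirm nothing actually holds the admin port
        sudo netstat -ntlp | grep 4848 || echo "nothing is listening on 4848"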

  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
    I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). Aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px). However, in any given set of images to be joined together, each image has identical characteristics. For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px in size. 2x2 subsampling is used such that MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.) Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran, that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command-line so that I could automate the process with a script. Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? because 239 = 14x16+15, which means there are 14 full MCUs along the width, and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge. See images below (shame that imgur re-compresses them): (left )+ (right) = As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) *below * I1 would cause the loss of 7 or 15px, depending on the MCU height of course: (top) + (bottom) = If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats. I do not control the source of the images. If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how it's unlikely I'm even going to receive a useful answer given the constraints, I would be happy with any app suggestion just as long as it actually works. I can always look into an AutoIT/AHK script or something later to automate it.) 
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.
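
    For reference, the padding arithmetic is straightforward; whether the patched JpegTran can be coaxed into keeping (rather than truncating) the partial MCU is the open question above, so this sketch only shows the dimensions a padded, MCU-aligned join would have.

        # round the odd width up to the next MCU multiple (MCU = 16 px for 2x2 subsampling)
        mcu=16
        w1=256; w2=239
        pad_w2=$(( (w2 + mcu - 1) / mcu * mcu ))             # 239 -> 240
        echo "pad I2 to ${pad_w2} px, joined canvas $(( w1 + pad_w2 )) px"   # 496 px, as noted above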

  • Cannot ping localhost so I can't shut down Tomcat

    - by gav
    Hi, I installed Tomcat 6 using the tar-ball via wget. Startup of the server is fine but on shutdown I get a timeout exception. root@88:/usr/local/tomcat/logs# /usr/local/tomcat/bin/shutdown.sh Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar 30-Mar-2010 17:33:41 org.apache.catalina.startup.Catalina stopServer SEVERE: Catalina.stop: java.net.ConnectException: Connection timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) ... I read that this might be because I have a firewall blocking incoming connections on the shutdown port (8005). I have a default Ubuntu 9.04 installation running on a VPS with no rules in my iptables. How can I tell if that port is blocked? How can I check that the server is listening for connections on 8005? Bizarrely pinging localhost or the IP of my server fails from the server itself, whereas pinging the IP of my server from another machine succeeds. -------- EDIT -------- (In reply to Davey) Thanks for all the tips and suggestions! netstat -nlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN 9611/java tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 28505/mysqld tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 9611/java tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN ... So we can see that tomcat is listening, I just don't seem to be able to reach it. root@88:/usr/local/tomcat# telnet localhost 8005 Trying 127.0.0.1... Trying to telnet to the port Hangs indefinitely. I have no rules in my iptables so I don't think it's a firewall thing. root@88:/usr/local/tomcat# iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination This is the contents of /etc/hosts 127.0.0.1 localhost.localdomain localhost # Auto-generated hostname. Please do not remove this comment. 88.198.31.14 88.198.31.14 88 88 But I still can't ping localhost... do I need to check a loopback device is enabled properly or something? (I'm unsure how to do that if you do say yes :)). root@88:/usr/local/tomcat# ping localhost PING localhost (127.0.0.1) 56(84) bytes of data. --- localhost ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 5999ms Trying to find out what the loop back is configured as; root@88:~# ifconfig lo lo Link encap:Local Loopback LOOPBACK MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) SOLUTION THANKS TO DAVEY I needed to bring up the interface (Not sure why it wasn't running). ifconfig lo up did the trick. 
root@88:~# ifconfig lo up root@88:~# ifconfig lo lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) root@88:~# ping localhost PING localhost.localdomain (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms Thanks again, Gav
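
    To keep the fix across reboots (a sketch, assuming the VPS uses a stock Debian/Ubuntu-style /etc/network/interfaces): make sure the loopback stanza is present so lo is brought up at boot.

        # /etc/network/interfaces -- loopback stanza that should normally be there
        auto lo
        iface lo inet loopback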

  • Varnish + Nginx + multiple IP addresses

    - by adnan
    This is my first shot at making Varnish work on my dedicated server which hosts 2 domains with 2 separate IP-addresses. My simplified setup is as follows: Nginx conf server { listen ip-address-1:8080; } server { listen ip-address-2:8080; } Varnish vcl backend default { .host = "127.0.0.1"; .port = "80"; } And in the varnish conf I have defined VARNISH_LISTEN_PORT=80 Varnish and Nginx (and php-fpm) are running properly but when I try to go to my website it shows the welcome to nginx page. The headers don't have the x-varnish in it. It seems that for some reason varnish is not listening to port 80. I'm suspecting this has to do with the vcl file where it is listening to the 127.0.0.1 host. I'm running two wordpress sites. Where should I look for to get Varnish working properly? Cheers, Adnan EDIT: Nginx seems to be in 8080 correctly but Varnish is not listening to the right ip address. Using Jens multiple varnish ip addresses netstat -lnp yields: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 46.105.40.241:8080 0.0.0.0:* LISTEN 21610/nginx tcp 0 0 5.135.166.39:8080 0.0.0.0:* LISTEN 21610/nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 21610/nginx tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 2544/named tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 1195/vsftpd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1184/sshd tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 2544/named tcp 0 0 46.105.40.241:443 0.0.0.0:* LISTEN 21610/nginx tcp 0 0 5.135.166.39:443 0.0.0.0:* LISTEN 21610/nginx tcp 0 0 127.0.0.1:6082 0.0.0.0:* LISTEN 21350/varnishd tcp 0 0 :::80 :::* LISTEN 21351/varnishd tcp 0 0 ::1:53 :::* LISTEN 2544/named tcp 0 0 :::22 :::* LISTEN 1184/sshd tcp 0 0 ::1:953 :::* LISTEN 2544/named udp 0 0 127.0.0.1:53 0.0.0.0:* 2544/named udp 0 0 ::1:53 :::* 2544/named default.vcl backend ikhebeenbril { .host = "5.135.166.39"; .port = "8080"; } backend sunculture { .host = "46.105.40.241"; .port = "8080"; } sub vcl_recv { if (server.ip == "5.135.166.39") { set req.backend = ikhebeenbril; } else { set req.backend = sunculture; } ... } sub vcl_hash { hash_data(server.ip); if (req.http.host) { hash_data(req.http.host); } hash_data(req.url); if (req.http.Accept-Encoding) { hash_data(req.http.Accept-Encoding); } return (hash); } nginx server blocks server { listen 5.135.166.39:80; listen 5.135.166.39:443 default ssl spdy; server_name www.ikhebeenbril.nl; } server { listen 46.105.40.241:80; listen 46.105.40.241:443 default ssl spdy; server_name www.thesunculture.com; }
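
    The netstat output explains the symptom: one or more nginx server blocks grabbed the IPv4 port 80 sockets, so varnishd could only bind the IPv6 wildcard (:::80) and IPv4 visitors reach nginx directly. A hedged sketch of one way to untangle it (the DAEMON_OPTS variable and file paths are the usual Debian defaults, not confirmed for this server): keep nginx on 8080 — the port the VCL backends point at — and give varnish the public port 80 addresses explicitly.

        # find every enabled nginx site that still claims plain port 80 and move it to 8080
        grep -rn 'listen' /etc/nginx/sites-enabled/

        # then bind varnishd explicitly to both public IPv4 addresses, e.g. in /etc/default/varnish:
        #   DAEMON_OPTS="-a 5.135.166.39:80,46.105.40.241:80 -T 127.0.0.1:6082 -f /etc/varnish/default.vcl"
        sudo service nginx reload && sudo service varnish restart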

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello, Environment: Mac OS X 10.6.3 install/import of a MacOS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our MacOS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM). Testing using ldapsearch: ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ... fails with: ldap_start_tls: Protocol error (2) Testing adding "-d 9" fails with: res_errno: 2, res_error: <unsupported extended operation>, res_matched: <> Testing without requiring STARTTLS or with LDAPS: ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ... succeeds with: # donaldr, users, darkhorse.com dn: uid=donaldr,cn=users,dc=darkhorse,dc=com uid: donaldr # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 result: 0 Success (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf) Testing with openssl: openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state ... succeeds: CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A SSL_connect:SSLv3 read server hello A depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department verify error:num=19:self signed certificate in certificate chain verify return:0 SSL_connect:SSLv3 read server certificate A SSL_connect:SSLv3 read server done A SSL_connect:SSLv3 write client key exchange A SSL_connect:SSLv3 write change cipher spec A SSL_connect:SSLv3 write finished A SSL_connect:SSLv3 flush data SSL_connect:SSLv3 read finished A --- Certificate chain 0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department 1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department --- Server certificate -----BEGIN CERTIFICATE----- <deleted for brevity> -----END CERTIFICATE----- subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department --- No client certificate CA names sent --- SSL handshake has read 2640 bytes and written 325 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851 Session-ID-ctx: Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3 Key-Arg : None Start Time: 1271202435 Timeout : 300 (sec) Verify return code: 0 (ok) So we believe that the slapd daemon is reading our certificate and writing it to LDAP clients. Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist and TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf when enabling SSL in the LDAP section of the Open Directory service. While that appears enough for LDAPS, it appears that this is not enough for TLS. 
Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed ca) with an Apple Server Admin generated self signed certificate results in no change in ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs: 4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037 4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037" 4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation Any debugging advice much appreciated. -- Tom Kishel
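
    A hedged diagnostic sketch (commands only, no claimed fix): "unsupported extended operation" for OID 1.3.6.1.4.1.1466.20037 means the running slapd is not offering StartTLS at all, so it is worth checking what the rootDSE advertises and which configuration the daemon actually loaded.

        # does the server advertise the StartTLS extended operation?
        ldapsearch -x -H ldap://gnome.darkhorse.com -s base -b "" supportedExtension

        # which arguments/config is the running slapd using, and are the TLS* directives in it?
        ps axww | grep 'slap[d]'
        grep -i '^TLS' /etc/openldap/slapd_macosxserver.conf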

  • Broken CUPS installation on a 64-bit Ubuntu server

    - by user67046
    Hi, I am having trouble with an cups installation. It seems to be in a broken state. When i try to reinstall it it stalls, the same if i try to remove it completely. I am running the server version 64 bit of Ubuntu 10.10 with kernel Linux version 2.6.35-22-server. When i try to start the cups daemon with the following command sudo service cups start It just stays there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command sudo apt-get purge cups It finally stalls with the following message Removing cups ... After that nothing happens. The process tree for the apt-get command looks like this. 1404 1404 1404 ? 00:00:00 sshd 26495 26495 26495 ? 00:00:00 sshd 26581 26495 26495 ? 00:00:00 sshd 26582 26582 26582 pts/4 00:00:00 bash 27158 27158 26582 pts/4 00:00:00 apt-get 27172 27172 27172 pts/2 00:00:00 dpkg 27176 27172 27172 pts/2 00:00:00 cups.prerm 27178 27172 27172 pts/2 00:00:00 stop I have tried to leave the process running for a while to see if i get any error messages but without success. To get out of it I have to kill the processes. sudo dpkg --configure cups dpkg: error processing cups (--configure): package cups is already installed and configured Errors were encountered while processing: cups sudo dpkg --status cups Package: cups Status: purge ok installed Priority: optional Section: net Installed-Size: 8292 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Version: 1.4.4-6ubuntu2.3 Replaces: cupsddk-drivers (<< 1.4.0) Provides: cupsddk-drivers Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev Breaks: foomatic-filters (<< 4.0) Conflicts: cupsddk-drivers (<< 1.4.0) Conffiles: /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4 /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40 /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0 /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3 /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3 /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911 /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03 /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5 Description: Common UNIX Printing System(tm) - server The Common UNIX Printing System (or CUPS(tm)) is a printing system and general replacement for lpd and the like. It supports the Internet Printing Protocol (IPP), and has its own filtering driver model for handling various document types. . 
This package provides the CUPS scheduler/daemon and related files. Original-Maintainer: Debian CUPS Maintainers <[email protected]> I would be grateful if someone could provide some help on how to solve this issue.
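
    A hedged way out of the wedged purge (a last-resort sketch; editing a package's maintainer script is normally a bad idea, but here the prerm is exactly what hangs while it calls the upstart "stop" job):

        # what does upstart think the cups job state is?
        sudo initctl list | grep cups
        sudo stop cups || true

        # if dpkg still hangs inside cups.prerm, neutralise the script and retry the purge
        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/cups.prerm'
        sudo dpkg --purge cups
        sudo apt-get install cups    # reinstall cleanly afterwards if still wanted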

  • Install python2.7.3 + numpy + scipy + matplotlib + scikits.statsmodels + pandas 0.7.3 correctly

    - by boldnik
    ...using Linux (xubuntu). How to install python2.7.3 + numpy + scipy + matplotlib + scikits.statsmodels + pandas0.7.3 correctly ? My final aim is to have them working. The problem: ~$ python --version Python 2.7.3 so i already have a system-default 2.7.3, which is good! ~$ dpkg -s python-numpy Package: python-numpy Status: install ok installed and i already have numpy installed! great! But... ~$ python Python 2.7.3 (default, Oct 23 2012, 01:07:38) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as nmp Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named numpy this module couldn't be find by python. The same with scipy, matplotlib. Why? ~$ sudo apt-get install python-numpy [...] Reading package lists... Done Building dependency tree Reading state information... Done python-numpy is already the newest version. [...] why it does not see numpy and others ? update: >>> import sys >>> print sys.path ['', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages'] >>> so i do have /usr/local/lib/python2.7 ~$ pip freeze Warning: cannot find svn location for distribute==0.6.16dev-r0 BzrTools==2.4.0 CDApplet==1.0 [...] matplotlib==1.0.1 mutagen==1.19 numpy==1.5.1 [...] pandas==0.7.3 papyon==0.5.5 [...] pytz==2012g pyxdg==0.19 reportlab==2.5 scikits.statsmodels==0.3.1 scipy==0.11.0 [...] zope.interface==3.6.1 as you can see, those modules are already installed! But! ls -la /usr/local/lib/ gives ONLY python2.7 dir. And still ~$ python -V Python 2.7.3 and import sys sys.version '2.7.3 (default, Oct 23 2012, 01:07:38) \n[GCC 4.6.1]' updated: Probably I've missed another instance... One at /usr/Python-2.7.3/ and second (seems to be installed "by hands" far far ago) at /usr/python2.7.3/Python-2.7.3/ But how two identical versions can work at the same time??? Probably, one of them is "disabled" (not used by any program, but I don't know how to check if any program uses it). ~$ ls -la /usr/bin/python* lrwxrwxrwx 1 root root 9 2011-11-01 11:11 /usr/bin/python -> python2.7 -rwxr-xr-x 1 root root 2476800 2012-09-28 19:48 /usr/bin/python2.6 -rwxr-xr-x 1 root root 1452 2012-09-28 19:45 /usr/bin/python2.6-config -rwxr-xr-x 1 root root 2586060 2012-07-21 01:42 /usr/bin/python2.7 -rwxr-xr-x 1 root root 1652 2012-07-21 01:40 /usr/bin/python2.7-config lrwxrwxrwx 1 root root 9 2011-10-05 23:53 /usr/bin/python3 -> python3.2 lrwxrwxrwx 1 root root 11 2011-09-06 02:04 /usr/bin/python3.2 -> python3.2mu -rwxr-xr-x 1 root root 2852896 2011-09-06 02:04 /usr/bin/python3.2mu lrwxrwxrwx 1 root root 16 2011-10-08 19:50 /usr/bin/python-config -> python2.7-config there is a symlink python-python2.7, maybe I can ln -f -s this link to exact /usr/Python-2.7.3/python destination without harm ?? And how correctly to remove the 'copy' of 2.7.3?
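
    The sys.path output points at a hand-built interpreter under /usr/local shadowing the apt one (which is where python-numpy and friends are installed). A sketch of how to confirm which binary actually runs, under that assumption:

        # which interpreters are on PATH, and which one "python" resolves to?
        which -a python
        python -c 'import sys; print(sys.executable); print(sys.prefix)'

        # the apt-installed modules live under the system interpreter
        /usr/bin/python2.7 -c 'import numpy; print(numpy.__version__)'

    If /usr/local/bin/python turns out to be the shadowing copy, either invoke /usr/bin/python explicitly or remove the /usr/local build (only after checking nothing else depends on it); re-pointing the /usr/bin/python symlink at the hand-built copy is risky, since system tools expect the distribution interpreter.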

  • Chrome does not re-draw properly on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta || But all this happened before update also) on Windows 8. Problems and explanations are listed below: Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse, if only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select all (or scroll). One thing to note is that the hover and select happens on the real page and not the retained image-like thing of the older webpage. Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy on Chrome (Win 8). When I was using Windows 7, I never experienced a lag or something. Also when using HTML-5 Websites, the transition (the -webkit-transition in CSS) goes extremely slow at times. Plugins Crash: Plugins like Flash Player, Shockwave Player, and more that are in-built into Chrome Crashes a lot, even when doing simple tasks like playing YouTube Videos, displaying ads or something. Chrome Crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly or I don't know the reason. Random Page crashes: Chrome results chrome://crash/(Copy-Paste this in address bar) on random pages even when the page is just loaded, I understand that this can happen on heavy HTML5 or JS websites but what about HTML only websites? Computer Freeze: Chrome sometimes, randomly, freezes my computer. Freeze in the sense, none of the other apps are also working. It's like the whole system freezes, I can not even switch to other apps. I am sure that this is because of Chrome since this happens only when Chrome is active. Most of the things above happens on Super User also, Super User never had any problem when using Chrome on Windows 7. UPDATE 1: @magicandre1981 Commented for trying to disable Hardware Acceleration. I tried it, it somewhat solved the problem but din't fix it. I am still experiencing all the above issues but less frequently (maybe because Chrome Restarted Completely) UPDATE 2: @avirk asked me to try a Stable Version of Chrome and Firefox, I din't experience any lag in Firefox, a little (negligible) lag in Chrome 22 (Maybe because its a new copy of Chrome, I haven't used it much). UPDATE 3: @NothingsImpossible said that He is also experiencing the same problem on Windows Server 2008! This seams to be a major issue now. He also said that GPU load is also high at the same time! Even I saw the same thing. UPDATE 4: Recently, Chrome updated to v24 Stable (I am using stable from a long time now). I was experiencing this problem a lot less in Chrome 23, but this is back in Chrome 24. Seams like Chrome 24 is the most affected from this bug, as this same problem was high in Chrome 24 beta also. UPDATE 5: Chrome was updated to v25 Stable. This problem is 99% Gone, it is still there in 1% of the cases. One such example is when I leave chrome inactive for a while with a few tabs open, the tabs go black and no activity can get them back to active state. If I open a new tab, the new tab is OK but the others are still black, I need to close all those tabs. UPDATE 6: Chrome updated to v27 Stable channel, this problem is nearly gone. This does happen occasionally, but not as frequent as in earlier versions of Chrome. UPDATE 7: I am on Chrome v35.0.1916.114 Stable, Windows 8.1 Pro Update 1. Some of the other problems appears to be back. Chrome is slow and laggy again. Re-draw time is getting worse. 
Is anybody else experiencing such issues? Does anybody have a solution to any of these?

  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question. I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf: description "app" start on runlevel [2] stop on runlevel [016] console owner # expect daemon script APP_ROOT=/home/deploy/app PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1 end script # respawn That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master, it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, and the old one dies...and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want. I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it. I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is: #!/usr/bin/env ruby-local-exec # # This file was generated by Bundler. # # The application 'unicorn' is installed as part of a gem, and # this file is here to facilitate running it. # require 'pathname' ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath) require 'rubygems' require 'bundler/setup' load Gem.bin_path('unicorn', 'unicorn') Additionally, the ruby-local-exec script looks like this: #!/usr/bin/env bash # # `ruby-local-exec` is a drop-in replacement for the standard Ruby # shebang line: # # #!/usr/bin/env ruby-local-exec # # Use it for scripts inside a project with an `.rbenv-version` # file. When you run the scripts, they'll use the project-specified # Ruby version, regardless of what directory they're run from. Useful # for e.g. running project tasks in cron scripts without needing to # `cd` into the project first. set -e export RBENV_DIR="${1%/*}" exec ruby "$@" So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place...which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
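
    A hedged sketch of the usual compromise (assumptions: Upstart's env stanza and a single exec line are available on this Ubuntu, and losing USR2-style upgrades is acceptable): run the Unicorn master in the foreground via exec, with no script block and no expect stanza, so the PID Upstart tracks is the master itself rather than an sh wrapper. The USR2 re-exec dance still replaces the master with a new PID, so it fundamentally fights Upstart's tracking; reloading with HUP, or restarting the job on deploy, is the trade-off this sketch makes.

        # /etc/init/app.conf (sketch)
        description "app"
        start on runlevel [2]
        stop on runlevel [016]
        respawn

        env PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:/usr/local/bin:/usr/bin:/bin

        exec /home/deploy/app/bin/unicorn -c /home/deploy/app/config/unicorn.rb -E production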

  • Cannot read status from the monit daemon, even with allowed group

    - by jefflunt
    I cannot seem to get monit status or other CLI commands to work. I've built monit v5.8 to run on a Raspberry Pi. I'm able to add services to be monitored, and the web interface can be accessed just fine, as I've set it up for public read-only access (it's a test server, not my final production setup, so not a big deal right now). Problem is, when I run monit status while logged in as root I get: # monit status monit: cannot read status from the monit daemon I also have monit started on boot via this /etc/inittab file entry: mo:2345:respawn:/usr/local/bin/monit -Ic /etc/monitrc I've verified that monit is running, and I'm getting email alerts anytime I either kill the monit process manually, or reboot my raspberry pi. So, next I check my monitrc file permissions to see which group is allowed access. # ls -al /etc/monitrc -rw------- 1 root root 2359 Aug 24 14:48 /etc/monitrc Here's my relevant allow section of the control file. set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 Also tried setting permissions on this file to 640 to allow group read permissions, but no matter what I try I either get the same error as noted above, or when the permissions are set to 640 I get: # monit status monit: The control file '/etc/monitrc' must have permissions no more than -rwx------ (0700); right now permissions are -rw-r----- (0640). What am I missing here? I know that the httpd must be enabled, as that's the interface that the CLI uses to get information (or so I've read), so I've done that. And in terms of monit doing its monitoring job and sending email alerts, that's all working as well. Here's my entire monitrc file - again, this is version v5.8, and it was build with both PAM and SSL support. The process runs under the root user: # Global settings set daemon 300 with start delay 5 set logfile /var/log/monit.log set pidfile /var/run/monit.pid set idfile /var/run/.monit.id set statefile /var/run/.monit.state # Mail alerts ## Set the list of mail servers for alert delivery. Multiple servers may be ## specified using a comma separator. If the first mail server fails, Monit # will use the second mail server in the list and so on. By default Monit uses # port 25 - it is possible to override this with the PORT option. # set mailserver smtp.gmail.com port 587 username [omitted] password [omitted] using tlsv1 ## Send status and events to M/Monit (for more informations about M/Monit ## see http://mmonit.com/). By default Monit registers credentials with ## M/Monit so M/Monit can smoothly communicate back to Monit and you don't ## have to register Monit credentials manually in M/Monit. It is possible to ## disable credential registration using the commented out option below. ## Though, if safety is a concern we recommend instead using https when ## communicating with M/Monit and send credentials encrypted. 
# # set mmonit http://monit:[email protected]:8080/collector # # and register without credentials # Don't register credentials # # ## Monit by default uses the following format for alerts if the the mail-format ## statement is missing:: set mail-format { from: [email protected] subject: $SERVICE $DESCRIPTION message: $EVENT Service: $SERVICE Date: $DATE Action: $ACTION Host: $HOST Description: $DESCRIPTION Monit instance provided by chicagomeshnet.com } # Web status page set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 ## You can set alert recipients whom will receive alerts if/when a ## service defined in this file has errors. Alerts may be restricted on ## events by using a filter as in the second example below.
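
    As a quick sanity check (a sketch, assuming monit 5.x CLI defaults): keep the control file at 0700 and make sure the CLI reads the same control file the daemon was started from before querying it.

        chmod 0700 /etc/monitrc
        # point the CLI explicitly at the control file the daemon uses
        monit -c /etc/monitrc reload
        monit -c /etc/monitrc status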

    Read the article

  • After enabling mod_ssl, Apache stops listening on port 80

    - by zensys
    I have an ubuntu 12.04 server with zend server CE installed. I now wanted to enable https but after the first steps according to the documentation, 'a2enmod ssl' and 'apache service restart', apache does not listen on 443 but neither on 80, according to netstat -tap | grep http(s)! This is what I see in my error log, but I can't make much of it: [Fri May 25 19:52:39 2012] [notice] caught SIGTERM, shutting down [Fri May 25 19:52:41 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Fri May 25 19:52:41 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:52:41 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:52:41 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:52:41 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:11 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:53:11 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:53:11 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:53:11 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:12 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.8-ZS5.5.0 configured -- resuming normal operations and here is my httpd.conf: # Name based virtual hosting <virtualhost *:80> ServerName www-redirect KeepAlive Off RewriteEngine On RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$ RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] </virtualhost> Alias /shared/js "/home/web/library/js" Alias /shared/image "/home/web/library/image" <IfModule mod_expires.c> <FilesMatch "\.(jpe?g|png|gif|js|css|doc|rtf|xls|pdf)$"> ExpiresActive On ExpiresDefault "access plus 1 week" </FilesMatch> </IfModule> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> <Location /> RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] </Location> netstat -tap gives: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:mysql *:* LISTEN 765/mysqld tcp 0 0 *:pop3 *:* LISTEN 744/dovecot tcp 0 0 *:imap2 *:* LISTEN 744/dovecot tcp 0 0 *:http *:* LISTEN 19861/apache2 tcp 0 0 *:smtp *:* LISTEN 30365/master tcp 0 0 *:4444 *:* LISTEN 634/sshd tcp 0 0 *:kamanda *:* LISTEN 1167/lighttpd tcp 0 0 *:imaps *:* LISTEN 744/dovecot tcp 0 0 *:amandaidx *:* LISTEN 1167/lighttpd tcp 0 0 localhost.loc:amidxtape *:* LISTEN 19861/apache2 tcp 0 0 *:pop3s *:* LISTEN 744/dovecot tcp 0 384 mail.mysite.:4444 231.214.14.37.dyn:41909 ESTABLISHED 19039/sshd: web [pr tcp 0 0 localhost.localdo:mysql localhost.localdo:48252 ESTABLISHED 765/mysqld tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54686 TIME_WAIT - tcp 0 0 mail.mysite.:4444 231.214.14.37.dyn:42419 ESTABLISHED 19372/sshd: web [pr tcp 0 0 localhost.localdo:48252 localhost.localdo:mysql ESTABLISHED 
19884/auth tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54685 TIME_WAIT - tcp6 0 0 [::]:pop3 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:imap2 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:smtp [::]:* LISTEN 30365/master tcp6 0 0 [::]:4444 [::]:* LISTEN 634/sshd tcp6 0 0 [::]:imaps [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:pop3s [::]:* LISTEN 744/dovecot Does anyone know what I am doing wrong? Perhaps I need to take some additional steps to make Apache listen on 443, but I can't understand why it stops listening on 80 altogether.
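
    For comparison, on a stock Ubuntu 12.04 apache2 package the listeners live in /etc/apache2/ports.conf; a minimal sketch of re-checking them and the SSL site (paths assume the default package layout):

        # /etc/apache2/ports.conf should still declare both listeners:
        #     NameVirtualHost *:80
        #     Listen 80
        #     <IfModule mod_ssl.c>
        #         Listen 443
        #     </IfModule>
        sudo a2ensite default-ssl
        sudo apache2ctl configtest        # catch config errors before restarting
        sudo service apache2 restart
        sudo netstat -tlnp | grep apache2 # should now show both :80 and :443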

    Read the article

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has a AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay? Disable IPv6 on client? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Too many clients to ensure this is set everywhere consistently. Will cause more problems later when we finally implement IPv6. Disable IPv6 on the server? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Requires an inconvenient registry hack to disable the entire IPv6 stack. Ensuring this is correctly set on all servers is inconvenient. Will cause more problems later when we finally implement IPv6. Mask IPv6 records on the user-facnig DNS recursor? Nope, we're using NLNet Unbound and it doesn't support that. Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible. At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way. UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable. C:\Windows\system32>ping myhost.mydomain Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data: General failure. General failure. General failure. General failure. Ping statistics for 2002:1234:1234::1234:1234: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it does a fallback to IPv4. UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC1918 addresses. UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly. netsh int ipv6 6to4 set state disabled But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
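
    If the broken 6to4 path is confirmed as the cause, the transition interfaces can be disabled per client (a sketch of the relevant netsh commands; rolling this out fleet-wide would need Group Policy or a startup script):

        netsh interface ipv6 6to4 set state disabled
        netsh interface ipv6 isatap set state disabled
        netsh interface teredo set state disabled
        REM verify the address selection policy afterwards
        netsh interface ipv6 show prefixpolicies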

    Read the article

  • Performance Tuning a High-Load Apache Server

    - by futureal
    I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows: Debian Lenny (all stable packages + patched to security updates) Apache 2.2.9 PHP 5.2.6 Amazon EC2 large instance The behavior we're seeing is that the web typically feels responsive, but with a slight delay to begin handling a request -- sometimes a fraction of a second, sometimes 2-3 seconds in our peak usage times. The actual load on the server is being reported as very high -- often 10.xx or 20.xx as reported by top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough Apache remains very responsive, other than that initial delay. We have Apache configured as follows, using prefork: StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 And KeepAlive as: KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80-100 requests and many of those in the keepalive state. That tells me to rule out the initial request slowness as "waiting for a handler" but I may be wrong. Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load of 15, our instance CPU utilization is between 75-80%. Example output from top: top - 15:47:06 up 31 days, 1:38, 8 users, load average: 11.46, 7.10, 6.56 Tasks: 221 total, 28 running, 193 sleeping, 0 stopped, 0 zombie Cpu(s): 66.9%us, 22.1%sy, 0.0%ni, 2.6%id, 3.1%wa, 0.0%hi, 0.7%si, 4.5%st Mem: 7871900k total, 7850624k used, 21276k free, 68728k buffers Swap: 0k total, 0k used, 0k free, 3750664k cached The majority of the processes look like: 24720 www-data 15 0 202m 26m 4412 S 9 0.3 0:02.97 apache2 24530 www-data 15 0 212m 35m 4544 S 7 0.5 0:03.05 apache2 24846 www-data 15 0 209m 33m 4420 S 7 0.4 0:01.03 apache2 24083 www-data 15 0 211m 35m 4484 S 7 0.5 0:07.14 apache2 24615 www-data 15 0 212m 35m 4404 S 7 0.5 0:02.89 apache2 Example output from vmstat at the same time as the above: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 8 0 0 215084 68908 3774864 0 0 154 228 5 7 32 12 42 9 6 21 0 198948 68936 3775740 0 0 676 2363 4022 1047 56 16 9 15 23 0 0 169460 68936 3776356 0 0 432 1372 3762 835 76 21 0 0 23 1 0 140412 68936 3776648 0 0 280 0 3157 827 70 25 0 0 20 1 0 115892 68936 3776792 0 0 188 8 2802 532 68 24 0 0 6 1 0 133368 68936 3777780 0 0 752 71 3501 878 67 29 0 1 0 1 0 146656 68944 3778064 0 0 308 2052 3312 850 38 17 19 24 2 0 0 202104 68952 3778140 0 0 28 90 2617 700 44 13 33 5 9 0 0 188960 68956 3778200 0 0 8 0 2226 475 59 17 6 2 3 0 0 166364 68956 3778252 0 0 0 21 2288 386 65 19 1 0 And finally, output from Apache's server-status: Server uptime: 31 days 2 hours 18 minutes 31 seconds Total accesses: 60102946 - Total Traffic: 974.5 GB CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load 22.4 requests/sec - 380.3 kB/second - 17.0 kB/request 107 requests currently being processed, 6 idle workers C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK... KK_KWKKKWKCKCWKK.KKKCK.......................................... ................................................................ 
From my limited experience I draw the following conclusions/questions:
- We may be allowing far too many KeepAlive requests.
- I do see some time spent waiting for IO in the vmstat output, although not consistently and not a lot (I think?), so I am not sure whether this is a big concern; I am less experienced with vmstat.
- Also in vmstat, I see in some iterations a number of processes waiting to be served, which is what I am attributing the initial page-load delay on our web server to, possibly erroneously.
- We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor intensive, so finding the right balance between the two is important; long term we want to move statics elsewhere to optimize both servers, but our software is not ready for that today.
I am happy to provide additional information if anybody has any ideas. One other note: this is a high-availability production installation, so I am wary of making tweak after tweak, which is why I haven't played with things like the KeepAlive value myself yet.
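
    As a starting point for experimentation (a sketch only; the right numbers depend on per-child memory use and available RAM), the keepalive window can be tightened and the prefork pool sized explicitly:

        # a 5s KeepAliveTimeout ties up a worker per idle client
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 2

        <IfModule mpm_prefork_module>
            StartServers         10
            MinSpareServers      10
            MaxSpareServers      20
            # MaxClients: roughly free RAM divided by the resident size of one apache2 child
            MaxClients          150
            # recycle children periodically to cap per-process memory growth
            MaxRequestsPerChild 1000
        </IfModule>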

    Read the article

  • Win32_NetworkAdapterConfiguration does not report IP from PPP adapter

    - by Michael
    On a Windows 7 device, the following WMI query does not report back an enabled PPP adapter: Select Index,MACAddress,IPAddress,IPSubnet,DefaultIPGateway,DNSServerSearchOrder from Win32_NetworkAdapterConfiguration where IPEnabled=true However, ipconfig gives you all the information correctly: Windows IP Configuration PPP adapter XYZ VPN: Connection-specific DNS Suffix . : IPv4 Address. . . . . . . . . . . : 123.456.789.123 Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : 0.0.0.0 Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : IPv4 Address. . . . . . . . . . . : 192.168.178.11 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.178.1 Ethernet adapter Local Area Connection 3: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Any ideas how I can script this by using WMI or VBS?
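
    As a first diagnostic (a sketch in VBScript; the output formatting is illustrative), dropping the IPEnabled filter shows whether the PPP entry is exposed by the class at all:

        ' run with: cscript //nologo listadapters.vbs
        Set wmi = GetObject("winmgmts:\\.\root\cimv2")
        Set cfgs = wmi.ExecQuery("SELECT Index, Description, IPEnabled, IPAddress FROM Win32_NetworkAdapterConfiguration")
        For Each cfg In cfgs
            If Not IsNull(cfg.IPAddress) Then
                WScript.Echo cfg.Index & vbTab & cfg.IPEnabled & vbTab & cfg.Description & vbTab & Join(cfg.IPAddress, ", ")
            End If
        Next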

    Read the article

  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
    I started with two clean machines this time. My master is running 12.04 Version: 2.7.11-1ubuntu2 Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2) My client is 10.04 Version: 2.6.3-0ubuntu1~lucid1 Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8 To setup Puppet tutorial: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/ To connect master and client: http://shapeshed.com/connecting-clients-to-a-puppet-master/ The first time I tried to connect master to client failed with SSL_connect error. So I did rm -rf /etc/puppet/ssl/ to remove all the keys inside ssl folders. It looked like it work.... BUT client# puppet agent --server puppet --waitforce 60 --test /usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 /usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 info: Creating a new SSL key for giab10 warning: peer certificate won't be verified in this SSL session info: Caching certificate for ca warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Creating a new SSL certificate request for mybox123 info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Caching certificate for mybox123 err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed warning: Not using cache on failed catalog It cached but then it couldn't retrieve it. Let me stop here.... worrying I would mess something up. But let's check master's status. * master is not running WoW.... ??? master# service puppetmaster start * Starting puppet master [OK] master# service puppetmaster status * master is not running I think time is sync. Well, we are behind a firewall so the port to sync time is disbaled. I checked with date and they seem okay. What about master not running? Is that the cause? Any help is appreciated. Thanks! /var/lib/puppet/log/masterhttp.log [2012-06-30 00:13:25] INFO WEBrick 1.3.1 [2012-06-30 00:13:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:13:25] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:19:40] INFO WEBrick 1.3.1 [2012-06-30 00:19:40] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:19:40] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:28:58] INFO WEBrick 1.3.1 [2012-06-30 00:28:58] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:28:58] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 15:31:25] INFO WEBrick 1.3.1 [2012-06-30 15:31:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 15:31:25] WARN TCPServer Error: Address already in use - bind(2) 1 S puppet 5186 1 0 80 0 - 29410 poll_s 15:44 ? 00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140 4 S root 5235 5005 0 80 0 - 2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet kill -9 5186 puppet master service puppetmaster status * master is not running I always have this error, but I always ignored it. http://pastebin.com/exbpArjv What could it mean? Time sync? Package not installed? Then how could we do puppetca in the first place?
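
    The repeated "TCPServer Error: Address already in use - bind(2)" lines in masterhttp.log suggest another process already holds the master port (8140 by default), which would also explain why the init script reports "not running": it is tracking a different PID than the one actually bound. A rough sequence to untangle it (a sketch; paths are those of the Ubuntu packages, and the client hostname is a placeholder):

        # on the master: find whatever owns 8140, stop it, and let the init script own the daemon
        sudo netstat -tlnp | grep 8140
        sudo pkill -f 'puppet master'
        sudo service puppetmaster start && sudo service puppetmaster status

        # on the client: clear the half-issued certs and request a new one
        sudo rm -rf /var/lib/puppet/ssl
        sudo puppet agent --server puppet --waitforcert 60 --test

        # back on the master: sign the new request if autosign is off
        sudo puppet cert list
        sudo puppet cert sign <client-hostname>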

    Read the article

  • All PHP sites stopped working on IIS7, internal server error 500

    - by TimothyP
    I installed multiple drupal 7 sites using the Web Platform Installer on Windows Server 2008. Until know they worked without any problems, but recently internal server error 500 started to show up (once every so many requests), now it happens for all requests to any of the php sites. There's not much detail to go on, and nothing changed between the time when it was working and now (well nothing I know of anyway) The log file is flooded with messages such as [09-Aug-2011 09:08:04] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:08:20] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:08:22] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:08:51] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:09:56] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:09:57] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:12:13] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 I have tried increasing the memory limit in php.ini as such: memory_limit = 512MB But that doesn't seem to solve the problem either. This is in the global php configuration in IIS When I looked at the sites one by one, I noticed that PHP seemed to have been disabled. PHP is not enabled. Register new PHP version to enable PHP via FastCGI So I tried to register the php version again C:\Program Files\PHP\v5.3\php-cgi.exe But when I try to apply the changes I get There was an error while performing this operation Details: Operation is not valid due to the current state of the object There doesn't seem to be any other information than that. I have no idea why all of a sudden php isn't available for the sites anymore. PS: I have rebooted IIS, the server, etc... This server is hosted on amazon S3, so I gave the server some more power Update These seem to be two different issues I used memory_limit=128MB instead of memory_limit=128M Notice the "M" instead of "MB" A memory_limit of 128M was not enough, had to increase it to 512M The first issue caused internal server errors for every request. Increasing to 512MB seemed to have solved the problem for a little while, but after a while the server errors return. 
Note that the PHP manager inside of IIS still shows there is no PHP available for the sites (the global config does see it as available), so the problem remains unsolved.
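
    For reference, the suffix matters in php.ini: PHP's shorthand parser only understands K, M and G, so "512MB" is not read as 512 megabytes. A minimal sketch (php.ini path as installed by the Web Platform Installer):

        ; C:\Program Files\PHP\v5.3\php.ini
        ; use M, not MB
        memory_limit = 512M

    Afterwards, restart IIS (for example with iisreset from an elevated prompt) so the FastCGI processes pick up the new setting.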

    Read the article

  • Synchronization of file locations between two machines

    - by intuited
    Although similar threads have been asked on this site and its siblings before, I've not managed to glean the answer to this persistent question. Any help is much appreciated. The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change the metadata in them, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once — ie it's unlikely that I'll convert a file's format and move it to a different location all in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved. I'm familiar with rsync but I find it inadequate, because although it can compute checksums, it doesn't have any way to store them. So if a file differs, it can't figure out which side it changed on. This also means that it can't attempt to match a missing file to a new one with the same checksum (ie a move) if the filesize and date are the same, it , so it takes an epoch to do a sync on a large repository. I would like to only check the checksum if the files even if you turn on checksumming, it still doesn't use it intelligently: ie it checksums files even if the sizes differ. IIRC. it's not able to use file metadata as a means of file comparison. this is sort of a wishlist item but it seems doable. I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup, I just need a record of what file with each hash was where when. Unison seems like it might be able to do something vaguely along these lines, but I'm loathe to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here. What I'd like is a tool that does something along these lines: keeps track of file checksums or of actual renames, possibly using inotify to greatly reduce resource consumption/latency stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc. uses this info to provide more-intelligent synchronization with a counterpart on the other side. So for example: if a file has been converted from flac to ogg, but kept the same base filename, or the same metadata, it should be able to send the new version over, and the other side should delete the original. Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail. And then when the transaction is done, the state is logged so that the next time the two interact they can work out their differences. Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there was something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc (and branching, and checkouts, and all the other great stuff that RCSs do. basically just fast forward commit pushes are all I want, with maybe the option to roll back.) So is there something out there that can do this? If not, can someone suggest a good way to start making it?
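
    As a crude starting point for the "record of what file with each hash was where" idea (a sketch; the paths and manifest filenames are made up), a per-machine manifest can be built and the two manifests compared, so moves show up as a paired add/remove with the same hash and edits as the same path with a new hash:

        # on each machine: hash every file and sort by path
        find ~/music -type f -print0 | xargs -0 sha1sum | sort -k2 > ~/music.manifest
        # after copying the other machine's manifest over as ~/other.manifest
        diff ~/music.manifest ~/other.manifest | less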

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct? Step #1 - Create certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored with in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. I found by letting RD Web Access generate its own certificate that the following properties are required: Enhanced Key Usage Server Authentication Client Authentication This may not be required, but the self-signed certificate includes it. Key Usage Digital Signature Key Agreement Subject Alternative Name DNS Name=domain.com Detour about self-signed certificate generation As a quick detour, I was able to work around a problem with creating self-signed certificates using powershell. The documentation for the New-RDCertificate cmdlet gives the following example: PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx" Typing this into the shell will result in an error message claiming that a function, Get-Server cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop Module with Import-Module RemoteDesktop. Step #2 - Observe out-of-box behavior The first time you visit the Deployment Properties dialog box by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, you will see the following screen: This window is misleading because the level field is listed as "Not Configured". If I understand correctly all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website: The certificate being used also appears in the Certificates MMC: Step #3 - Assign new certificate The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computers Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time. There is no certificate error. Step #4 - The GUI fails to maintain its state If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used. I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate this window will update to an "Untrusted" level. 
This setting will then be maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to make my settings stick and show up correctly in the GUI? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
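
    The same assignment can also be done from PowerShell, which gives a way to read the stored configuration back without trusting the dialog (a sketch using the RemoteDesktop module; the .pfx path, password and broker name are placeholders):

        Import-Module RemoteDesktop
        $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
        Set-RDCertificate -Role RDWebAccess -ImportPath "C:\temp.pfx" -Password $password -ConnectionBroker rdcb.contoso.com
        # list what is actually configured for each role
        Get-RDCertificate -ConnectionBroker rdcb.contoso.com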

    Read the article

  • Bridged virtual interface is not available or visible to ifconfig.

    - by Omniwombat
    Hello all. I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1. I'm attempting to setup a virtual linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine. When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT. When I run ifconfig -a I see the physical interface (eth0), vmnet1 and vmnet8 both of which are up and have IP addresses assigned to them. I also see other various interfaces that are not relevant here. In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled device icon. Inside the guest machine, I do have an eth0 interface which I can set to anything I like, however it can't see my external network, or the host. I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network" which confirms the problem. vmnet-bridge is running, and I see the following in my process table: /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0 I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process. I saw this question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in an lsmod. My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP and nat type services as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0; both the name and the device as I configured it from the script. My /dev directory contains devices vmnet0 through vmnet9. They have major device number 119, and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty. When I start or stop the vmware service with /etc/init.d/vmware start, I see the following: Starting VMware services: Virtual machine monitor done Virtual machine communication interface done VM communication interface socket family: done Virtual ethernet done Bridged networking on /dev/vmnet0 done Host-only networking on /dev/vmnet1 (background) done DHCP server on /dev/vmnet1 done Host-only networking on /dev/vmnet8 (background) done DHCP server on /dev/vmnet8 done NAT service on /dev/vmnet8 done VMware Server Authentication Daemon (background) done Shared Memory Available done Starting VMware management services: VMware Server Host Agent (background) done VMware Virtual Infrastructure Web Access Starting VMware autostart virtual machines: Virtual machines done Nothing appears to be wrong there. What n00b thing am I doing such that vmnet0 and only vmnet0 does not show up in the interface list?
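
    Given that /etc/vmware has vmnet1 and vmnet8 subdirectories but nothing for vmnet0, one blunt option is to let the configurator rebuild the bridged network from scratch (a sketch; re-answer the networking prompts and map vmnet0 to a bridge on eth0):

        sudo /etc/init.d/vmware stop
        sudo /usr/bin/vmware-config.pl
        # both the .name and .device entries for vmnet0 should be back afterwards
        grep vmnet0 /etc/vmware/netmap.conf
        sudo /etc/init.d/vmware start
        # the bridge daemon should be running against eth0
        ps aux | grep vmnet-bridge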

    Read the article

  • Application Does Not Start in Windows 7

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running? I tried running sfc /scannow from the command line (with administrator privileges), shutting down the Spybot Resident, and disabling the firewall and virus scanner. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program the Event Viewer logs this same set of errors: Error 4/2/2012 11:35:44 PM Application Error 1000 (100) Faulting application name: SSDFresh.exe, version: 1.0.0.0, time stamp: 0x4f2a45d8 Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000 Exception code: 0xc0000005 Fault offset: 0x000007ff0016dbba Faulting process id: 0x994 Faulting application start time: 0x01cd11fd9fe978df Faulting application path: C:\Program Files (x86)\SSD Fresh\SSDFresh.exe Faulting module path: unknown Report Id: dfeed551-7df0-11e1-a2c7-002522c47ec0 Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None Application: SSDFresh.exe Framework Version: v4.0.30319 Description: The process was terminated due to an unhandled exception. Exception Info: System.NullReferenceException Stack: at AbBugReporter.BugForm.InitLanguage() at AbBugReporter.BugForm..ctor(AbFlexTrans.LanguageInfo, AbBugReporter.BugReportManager, Boolean) at AbBugReporter.BugReportManager.Show(System.Exception) at SSDFresh.App.App_DispatcherUnhandledException(System.Object, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs) at System.Windows.Threading.Dispatcher.CatchException(System.Exception) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate) at System.Windows.Threading.Dispatcher.WrappedInvoke(System.Delegate, System.Object, Int32, System.Delegate) at System.Windows.Threading.Dispatcher.InvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr) at MS.Win32.UnsafeNativeMethods.DispatchMessage(System.Windows.Interop.MSG ByRef) at System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame) at System.Windows.Application.RunInternal(System.Windows.Window) at System.Windows.Application.Run() at SSDFresh.App.Main() Error 4/2/2012 11:35:39 PM SideBySide 59 None Activation context generation failed for "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe".Error in manifest or policy file "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config" on line 0. Invalid Xml syntax. 
Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None For those who are interested, here is my system configuration: ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD) Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive (Secondary HD) Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)
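
    The SideBySide entries blame invalid XML in csc.exe.Config, so one quick check (a sketch; run from an elevated prompt) is whether that file, or machine.config next to it, actually parses. A parse error here points at the file to restore or repair, for example with a .NET Framework 4 repair install:

        powershell -Command "[xml](Get-Content 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config')"
        powershell -Command "[xml](Get-Content 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config')"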

    Read the article
