Search Results

Search found 15387 results on 616 pages for 'os imaging'.


  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup: I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking) which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.

    Issue: Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.

    Temperatures: When I've checked temperatures the room temperature has been ~23°C. I haven't changed the placement of the computer nor the hardware or cooling setup between BIOS versions.

    BIOS version 0501 (BIOS monitor): CPU ~50°C, chassis ~33°C. I haven't got any temperature measurements from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606 and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501).

    BIOS version 0606 (BIOS monitor): CPU ~30°C, chassis ~33°C. lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:  +32.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:  +35.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:  +29.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:  +36.0°C  (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures were checked.

    BIOS versions 0706, 0801, 1101 and 3203: I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.

    Information from Asus: The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3, as indicated by sidran32, might affect temperatures):

        P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
        - Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
        - Improve DRAM compatibility
        - Improve System stability
        - Improve compatibility with some Raid card model
        - Increase IGD share memory size to 512MB

    However the following FAQ might give a hint:

        Q: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal?
        A: That is normal as BIOS does not send idle command to the CPU, making most of the power saving features useless. You should be getting similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.


  • VMware Player loses internet connectivity

    - by Martha
    Periodically, the internet simply stops working in my virtual machine, and the only way I can get it working again is to restart the host computer. Since I use the virtual machine specifically for testing web pages, this is, shall we say, a bother. Details: I have Windows XP Pro running in VMware Player (v. 3.0.0 build-203739) on a Windows 7 host. It's set to NAT (shared IP address) because the firewall won't allow a bridged connection. Every couple of days or so, first the internet slows down to a crawl, then eventually it stops working altogether. Both VMWare and the virtual OS report that they are connected, everything looks just peachy, I can reach the internet from the host, but on the VM, all web pages time out and/or report that the server could not be found. (Browser-independent; tried with IE, FF, Chrome, Safari, and Opera.) When this happens, the only way I've found to restore the internet connectivity is to restart the host machine. Restarting the VM doesn't help, nor does refreshing network connections on either the host or the guest. (Although I'm not entirely sure I've found the proper way to refresh a network connection in Windows 7...) I have not noticed any predictability about when the problem occurs, i.e. it's not immediately after I do anything special. It seems to occur mostly after putting the host to sleep once or twice, but it has happened even if the host has been in continuous use. It also seems independent of when I start using the VM - sometimes, I wake up the VM and the internet is really slow in it, then eventually stops working altogether; other times, I wake up the VM, use it perfectly happily for a while, then suddenly the internet is gone. Does anyone know why this is occurring? Failing that, is there a workaround that's less drastic than restarting the host? (Windows 7 startup times are blazingly fast compared to previous versions of Windows, but it's still a hassle to close all my programs and reopen them again.) Edit: while badges overall are nice, the Tumbleweed badge isn't helping me to solve my problem. Hasn't anyone encountered anything even remotely similar?
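    A less drastic workaround than rebooting the host may be to restart VMware's own NAT and DHCP services on the Windows 7 host when the guest loses connectivity. The service display names below are the usual defaults for VMware Player and are an assumption here, so check services.msc if they differ:

        :: run from an elevated command prompt on the Windows 7 host
        net stop "VMware DHCP Service"
        net stop "VMware NAT Service"
        net start "VMware NAT Service"
        net start "VMware DHCP Service"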


  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running except I cannot get past 403 errors on both HTML and PHP pages. I have used CHMOD and CHOWN, changed ownership in the config file, done everything possible and have been stuck for 2 days. Appreciate any help, and here's hoping to a stupid error on my part. Here is the log file with debug options on: 2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408 GET /index.html HTTP/1.1 Host: 10.0.1.8 User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cache-Control: max-age=0 2011-02-21 11:23:13: (response.c.241) run condition 2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI 2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html 2011-02-21 11:23:13: (response.c.302) URI-scheme : http 2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8 2011-02-21 11:23:13: (response.c.304) URI-path : /index.html 2011-02-21 11:23:13: (response.c.305) URI-query : 2011-02-21 11:23:13: (response.c.349) -- sanatising URI 2011-02-21 11:23:13: (response.c.350) URI-path : /index.html 2011-02-21 11:23:13: (response.c.470) -- before doc_root 2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.473) Path : 2011-02-21 11:23:13: (response.c.521) -- after doc_root 2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.541) -- logical -> physical 2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.561) -- handling physical path 2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.608) -- access denied 2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.128) Response-Header: HTTP/1.1 403 Forbidden Content-Type: text/html Content-Length: 345 Date: Mon, 21 Feb 2011 16:23:13 GMT Server: lighttpd/1.4.28 Here is the directory listing. I used CHOWN to set to lighttpd:lighttpd [root@localhost lighttpd]# ls -al total 40 drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 . drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 .. -rwxrwxrwx 1 lighttpd lighttpd 10 Feb 20 08:32 index.html -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:48 index.php -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:39 info.php [root@localhost lighttpd]# Requested Commands: [root@localhost lighttpd]# ls -ld / /srv /srv/www drwxr-xr-x 22 root root 4096 Feb 21 04:39 / drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 20 07:38 /srv drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www [root@localhost lighttpd]# ps auxZ | grep lighttpd root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
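    One thing the "Requested Commands" output points at: ps auxZ shows lighttpd confined to the SELinux httpd_t domain, and on CentOS a 403 with correct Unix permissions is frequently an SELinux label problem rather than a lighttpd one. A quick check-and-relabel sketch, assuming the stock targeted policy (paths taken from the post):

        # is SELinux enforcing, and what label do the files carry?
        getenforce
        ls -Z /srv/www/lighttpd
        # give the document root a type the httpd_t domain may read
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd
        # or temporarily set permissive mode just to confirm SELinux is the cause
        setenforce 0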


  • /server-status shows over 240 requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details: Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e OS: FreeBSD 7.2-RELEASE This is a FreeBSD Jail. I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD). I use the default values for MaxClients (256) I have enabled mod_status, with "ExtendedStatus On". When I view /server-status , I see a handful of regular requests. I also see over 240 requests from the 'localhost', like these. 37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 I also see about 2417 requests yesterday from the localhost, like these: Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)" The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 230 of these? Are these active connections? If I have "MaxClients 256", and over 230 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections" We actually had two unexplained outages last night, and I am wondering if these "internal dummy connection" caused us to run out of available connections. UPDATE 2010/04/16 It is 8 hours later. The /server-status page still shows that there are 243 lines which say "www.example.gov OPTION *". I believe these connections are not active. The server is mostly idle (1 requests currently being processed, 9 idle workers). There are only 18 active httpd processes on the Unix host. If these connections are not active, why do they show up under /server-status? I would have expected them to expire a few minutes after they were initialized.
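    Per the Apache wiki page cited above, these requests are the parent process waking its own children (for example during graceful restarts and idle-worker maintenance), and the entries in /server-status are normally just the last request handled by otherwise idle worker slots rather than open connections. If the immediate goal is to keep them out of the access log while investigating the outages, a conditional-logging sketch (standard mod_setenvif/mod_log_config directives; adjust the log path and format to your own configuration):

        SetEnvIf User-Agent "internal dummy connection" dontlog
        CustomLog "/var/log/httpd-access.log" combined env=!dontlog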


  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder that is symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group
        #application's base folder
        base = /home/metheuser/webapps/ers_portal
        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)
        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock
        #permissions for the socket file
        chmod-socket = 666
        #uwsgi varible only, does not relate to your flask application
        callable = app
        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names, data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the views "my_table" route prints into the log file. But not once it stops working. Any ideas?
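    Since the print statement stops reaching the log once the route starts failing, it may help to capture the actual exception behind the 500 rather than guessing. A minimal sketch that only wraps the author's existing reload_data() call with the standard logging/traceback modules (the log file name is an assumption; any path writable by the uWSGI user works):

        import logging, traceback

        logging.basicConfig(
            filename='/home/metheuser/webapps/ers_portal/logs/ers_portal_errors.log',
            level=logging.INFO)

        @app.route('/my_table')
        @login_required
        def my_table():
            try:
                data = reload_data()
            except Exception:
                # record the real cause of the 500 before re-raising it
                logging.error("reload_data() failed:\n%s", traceback.format_exc())
                raise
            return render_template("my_table.html", title="Rundata Viewer",
                                   sts=sites, cn=column_names, data=data)

    If the traceback points at the pickle load, a likely culprit is the data file being rewritten while a worker reads it, which would explain why only this route breaks while the rest of the site keeps working.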


  • Lion built-in VPN client times out connecting to Windows 2003 PPTP server

    - by beporter
    I have a new iMac with OS X 10.7 (Lion) on it that refuses to connect to a PPTP-based VPN server (running Windows 2003 SBS). To shortcut past a lot of questions: there is a Dell workstation running Windows 7 on the same LAN as the Mac that is able to establish a PPTP connection to the same VPN server using the same credentials. That would seem to rule out any possible problems with the server, the port forwards on the server's firewall, the internet connection between the two, and the router local to the Dell and iMac.

    Here's a "verbose" dump of the PPP log from the iMac:

        Tue Sep 6 10:13:11 2011 : using link 0
        Tue Sep 6 10:13:11 2011 : Using interface ppp0
        Tue Sep 6 10:13:11 2011 : Connect: ppp0 socket[34:17]
        Tue Sep 6 10:13:11 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0, interfaceIndex: 0, Protocol: None, Private Port: 0, Public Address: 45f6f181, Public Port: 0, TTL: 0.
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 inconsistent. is Connected: 1, Previous interface: 4, Current interface 0
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 initialized. is Connected: 1, Previous publicAddress: (0), Current publicAddress 45f6f181
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 fully initialized. Flagging up
        Tue Sep 6 10:13:14 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:17 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:20 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:23 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:26 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:29 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:32 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:35 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:38 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:41 2011 : LCP: timeout sending Config-Requests
        Tue Sep 6 10:13:41 2011 : Connection terminated.
        Tue Sep 6 10:13:41 2011 : PPTP disconnecting...
        Tue Sep 6 10:13:41 2011 : PPTP clearing port-mapping for en0
        Tue Sep 6 10:13:41 2011 : PPTP disconnected

    The error seems to be focused around the line "LCP: timeout sending Config-Requests", but I haven't had any luck in finding troubleshooting information for this. I've tried completely deleting the entire VPN "connection" from the Network prefpane and recreating it from scratch. I am certain the connection details are correct because they exactly match what successfully connects from the Win7 machine sitting next to the iMac. Any suggestions?
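    Those repeated unanswered LCP ConfReq lines usually mean the TCP 1723 control channel worked but the GRE (IP protocol 47) packets carrying the PPP session never make it back to the Mac. A quick way to confirm, assuming the iMac's active interface is en0 (substitute the right one), is to run tcpdump during a connection attempt and watch for protocol 47 traffic:

        sudo tcpdump -n -i en0 'ip proto 47 or port 1723'

    If 1723 traffic flows but no inbound GRE appears, something between the iMac and the server (often a router's PPTP/GRE NAT helper that only tracks certain clients well) is dropping it for this particular host, even though the Dell next to it works.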


  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible.

    For some years I've been trying to conceive a wireless environment that I'd set up anywhere and advise for everyone, from big enterprises to small home networks of one machine. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering.

    I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for the cables. And giving pass-phrases is adding a door with a key. But cabling without encryption is also insecure: someone just needs to plug in and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires. And passwords should be added to the users, always, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the many situations where people just don't care. I think there are enough situations where people actually care about using passwords on their OS users, so let's go with that in mind. To be able to break the walls or the door someone will need proper equipment, such as a hammer or a master key of some kind. The same is true for breaking the wireless walls in the analogy.

    But I'd say true data security is in another place. I keep promoting the Fonera concept as an example. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. And who doesn't want to be able to use the internet freely anywhere you can find wifi spots? I have 3G myself, but that's beside the point here. If I have wifi at home I want to let people freely use it for internet, so as not to be a hypocrite, and even guests can easily access my files, just with read access, so I don't need to keep setting up encryption and pass-phrases that are not wholly compatible.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?


  • Problem virtualizing Ubuntu 10.04 32 bit on VirtualBox 3.1 on Windows Vista 64 bit

    - by Adam Siddhi
    Software & Hardware Setup Host System : Windows Vista Home Premium SP1 64 bit Guest : Ubuntu 10.04 (ubuntu-10.04-desktop-i386.iso) 32 bit VM : VirtualBox 3.1.8 Hardware : Intel Core 2 Duo T6400 4GB SDRAM What Happened I followed the tutorial called Installing Ubuntu inside Windows using VirtualBox located here: www.psychocats.net/ubuntu/virtualbox At first I downloaded ubuntu-10.04-desktop-amd64.iso because I figured that it would be a perfect fit with my Vista 64 OS. I was wrong because it turns out the my Intel Core 2 Duo T6400 CPU does not have Intel® Virtualization Technology. So I had to go with the ubuntu-10.04-desktop-i386.iso which is 32 bit. This got me to the point where I could actually create the Ubuntu VM. So I set up the VM in VirtualBox (according to the tutorial I was following) to prepare for the Ubuntu 10.04 virtualization. Please go to my Picassa web album to see the screen shots of my VM settings and Ubuntu boot process so you can see what I experienced (they appear in the order that I experienced them in). www.picasaweb.google.com/rubysiddhi/ProblemVirtualizingUbuntu100432BitOnVirtualBox31OnWindowsVista64# The first 17 images show the VM settings. The last 8 show my attempt at virtualizing Ubuntu 10.04. You can see booting up but ultimately failing. The Specifics The one error message I got was: (process:210): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0) It appeared on a black screen that sort of looked like a Windows console screen but with out the c:\ or the ability to type. Then this error message got more complex when tons of text appeared in the screen. Pictures 23 - 25 in the album show this text. I should also mention that I found this post in the Ubuntu forums by zonination who seemed to have similar problems to mine even though they had a different set up. The main issue I think zonination and me may be having is the fact that we can not change the color mode to 32 bit while it is booting. I think the 16 bit color mode maybe making Ubuntu fail. Not certain though. Well I hope I explained my problem thoroughly and clearly. Thanks for the tutorial. It got me started but, now I hope to finish this process so I can start developing in Ubuntu. OH by the way if you want to actually see what happened play by play (with some classical in the background) check out the video I made over here: http://www.youtube.com/watch?v=XMbbm5E_0Xw Thanks! Regards, Adam


  • New Computer Build Questions

    - by MJ
    I'm in the process of gathering parts and specs for a new machine. I wear many hats, so the machine needs to do a lot. I need at least 2 monitor support, if not three. I also play many online MMOs (wow, aion, war hammer, etc), along with some freelance programming projects. I already have a case which is very large, so it will fit anything. I have 2 other SATA HDs. They are more for storage and basic programs. I feel that the best improvement could be done with a solid state HD, true or not? I'm more of a software/programming guy, so ANY input at all on improving this system build would be appreciated. I have a few questions with this list. AMD or Intel? I don't know enough about either to choose what would best fit me. Thanks! **EDIT: Thanks for the input everyone! Here are some answers: I do a lot of programming and gaming, so I do need things for both. The newer video card covers the gaming aspect, as well as allowing me to have many monitors. (hopefully upgrade to dual 30' or more) I don't need any additional HDs at this time. I have a SATA 160g and 120g from my previous computer, and a NAS system with over 2TB of storage on the homenetwork. I just wanted a fast HD for OS/programs/games. With the memory. I have used G.SKILL before in 2 system builds. It's done excellent for me in them. Very stable. **EDIT2: Made some additional changes. Lowered the power supply down to 750, which saves me more $$. Also changed the SSD to 2 WD 650G HDs. Thinking of doing a CPU upgrade to the 3.4GHZ AMD Phenom II X4 965 Black Edition Deneb 3.4GHz System Specs - Budget:$1500 CPU: AMD Phenom II X4 955 Black Edition Deneb 3.2GHz MB: GIGABYTE GA-MA790GPT-UD3H AM3 AMD 790GX HDMI ATX Memory: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 1066 Video: DIAMOND 5870PE51G Radeon HD 5870 (Cypress XT) 1GB 256-bit GD Power Supply: XCLIO GREATPOWER 1000W ATX12V SLI Ready CrossFire Ready HD:Intel X25-M Mainstream SSDSA2MH080G2C1 2.5" 80GB SATA II MLC Changes: Power Supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V HD: 2x Western Digital Caviar Blue WD6400AAKS 640GB CPU: AMD Phenom II X4 965 Black Edition Deneb 3.4GHz


  • Can't ssh tunnel to access a remote mysql server

    - by hobbes3
    I can't seem to figure out why I can't use ssh tunnel to connect to my remote MySQL server. I do ssh tunnel with [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306 Then on another terminal, I try [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p Enter password: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2 On the server, it shows this: root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused Here is my my.cnf on the server: [mysqld] # Settings user and group are ignored when systemd is used (fedora >= 15). # If you need to run mysqld under different user or group, # customize your systemd unit file for mysqld according to the # instructions in http://fedoraproject.org/wiki/Systemd user=mysql datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # Semisynchronous Replication # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html # uncomment next line on MASTER ;plugin-load=rpl_semi_sync_master=semisync_master.so # uncomment next line on SLAVE ;plugin-load=rpl_semi_sync_slave=semisync_slave.so # Others options for Semisynchronous Replication ;rpl_semi_sync_master_enabled=1 ;rpl_semi_sync_master_timeout=10 ;rpl_semi_sync_slave_enabled=1 # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html ;performance_schema [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid [mysqld] port = 3306 socket=/var/lib/mysql/mysql.sock skip-external-locking key_buffer_size = 64M max_allowed_packet = 128M sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M thread_cache = 8 max_connections = 25 query_cache_size = 16M table_open_cache = 1024 table_definition_cache = 1024 tmp_table_size = 32M max_heap_table_size = 32M bind-address = 0.0.0.0 Now sure if this helps but here is the MySQL user list: mysql> select * from mysql.user; +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | Host | User | Password | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | 
+-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | ::1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ 3 rows in set (0.00 sec) I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively. But I'm starting to get confused on what's really going on or if socket vs TCP is even the issue. Thanks in advance and I'm open for any tips and suggestions! Some more info: My MySQL client, running OS X 10.8.4, is mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper My MySQL server, running on CentOS 6.4 32-bit, is mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+--------------------------------------+ | Variable_name | Value | +-------------------------+--------------------------------------+ | innodb_version | 1.1.8 | | protocol_version | 10 | | slave_type_conversions | | | version | 5.5.28 | | version_comment | MySQL Community Server (GPL) by Remi | | version_compile_machine | i686 | | version_compile_os | Linux | +-------------------------+--------------------------------------+ 7 rows in set (0.00 sec)
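    The server-side message "channel 4: open failed: connect failed: Connection refused" means sshd on the Linode itself was refused when it tried to open the forwarded connection, so the problem is at the remote end of the tunnel rather than in the client-side mysql flags. Two things worth checking, sketched below: whether mysqld is really listening on the address the tunnel targets (the literal name localhost in the -L spec is resolved on the server and may go to ::1), and forwarding to the numeric IPv4 loopback instead:

        # on the Linode: confirm mysqld is listening and on which address
        netstat -ltnp | grep 3306

        # on the client: point the tunnel at 127.0.0.1 explicitly, then connect through it
        ssh linode -L 3307:127.0.0.1:3306
        mysql -h 127.0.0.1 -P 3307 -u root -p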


  • Accidentally Removed Permissions for the XP SP3 Registry key HKEY_CLASSES_ROOT! Workaround Please!

    - by Praveen Kumar
    This is Praveen and I am using Microsoft Windows XP SP3 Build 2600. I had problems with using Microsoft Office 2010. It was keeping on saying, "Please wait while windows configures Microsoft Office Professional Plus 2010". After seeing this link: http://social.answers.microsoft.com/Forums/en-US/outlookcontact/thread/6e9c3f2e-010a-4b74-b433-0c41548ee468?prof=required I thought of giving the Registry Key: HKEY_CLASSES_ROOT, full access permissions for Everyone. I went to the registry, right-clicked on HKEY_CLASSES_ROOT and clicked on Permissions. I added Everyone and gave Full Control to the ACL and clicked on Apply. Also I checked "Replace permission entries on all child objects with entries shown here that apply to child objects." After a long time, it said with an error, cannot replace for few entries. Now, the key, HKEY_CLASSES_ROOT has no access to any user. My system is not starting up. Some of my friends asked me to try the Last Known Good Configuration. Even that did not work out. When I tried to open in the Safe Mode, I got only one driver to load and even that didn't load well. Another friend suggested me to put the Setup disk and reinstall the OS. I tried that and after completing the "Installing Device Drivers" part, when it started "Installing Network", the system is getting restarted. Now, the setup is also half way through and even if I open in Safe Mode to try System Restore, it popped up a message saying, "Setup cannot run under Safe Mode. Setup will restart now." and the system is restarting. All my official files and my software, which I developed for Registry Security, resides in my system now. I am unable to access the system and I want it to be working to submit the project, as the deadline is this week. I had no better solutions from elsewhere. Can anyone please help me out with this issue. If it is possible to open the registry editor's stored file from another system and restore the access permissions, I hope it would solve the problem. Please do help me. Thanking You, Praveen.


  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "Cloud computing is the way of the future", and encouraging us to switch from doing our own email with Exchange, to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to do this transition within their own department, if not throughout the entire organization. I can definitely see some benefits - such as: Archive space - we never seem to have the space our users want, and of course, the more we get, the more we have to back up OS Agnostic - Exchange is definitely built for windows, and with mac and linux users on the rise, these users increasingly demand better tools / support. Google offers this. Better archiving - potential of e-discovery, that doesn't exist in a practical way with our current setup. Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks: We would be essentially outsourcing one of our mission-critical systems to a 3rd party The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a-lot more at the mercy of a single (potentially weak) password. (is there a way to make this more secure using a password plus physical key of some sort??) Our data would not remain under our roof - or even in our country (Canada). This obviously has plusses on the Disaster Recovery side, but I think there are potential negatives on the legal side. I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (not sure how much access we would have to basic mail logs - for instance) Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - gmail to Exchange) Can you add to the points I have already outlined?


  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help.

    Details:

        Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
        Apache 2.2.13
        ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
        ruby-odbc 0.9997
        dbd-odbc (0.2.5)
        dbi (0.4.3)
        mod_ruby 1.3.0
        Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23
        odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
        odbc datasource: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'

        cgi = CGI.new("html3")

        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'

        cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
        gamma%

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
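    Given that the same scripts run from an interactive shell and cron, one plausible difference is the stripped-down environment Apache gives its CGI children: variables such as HOME or ODBCINI that the SequeLink/iODBC layer may expect are simply absent. A hedged sketch using mod_env's SetEnv (the values are examples based on the standard OS X ODBC locations and should be adjusted to where the DSN actually lives):

        # in httpd.conf or the relevant virtual host
        SetEnv HOME /var/empty
        SetEnv ODBCINI /Library/ODBC/odbc.ini
        SetEnv ODBCSYSINI /Library/ODBC

    Comparing the output of a trivial CGI that just prints its environment against env from the interactive shell should show exactly which variable the driver is missing.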


  • Ubuntu Server 12.04 auto installation freezes at "Kickseed running" if ks.cfg has %post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server iso file. Kickstart file (ks.cfg) runs smooth when there is no %post in the file and Ubuntu installs correctly with ks configuration. Installation finishes installing base, apt, grub and It echos: Kickseed Running... and it freezes @ 0% I thought may be apt-get update doesnt work in ks file, I tried to install other apps like apache2 but no luck I have created dozen iso images and installed them in Virtual Box.I have been googling for 3 days and checked out ubuntu forums but haven't figured out the issue. I appreciate your help. This is how I made the iso image. My ks.file and txt.cfg files located in isolinux directory: root@ubuntu:/home/work mount -o loop ubuntu-12.04-amd64.iso original-iso/ rsync -a original-iso/ custom-iso/ cp ks.cfg custom-iso/isolinux/ cp txt.cfg custom-iso/isolinux/ chmod -R 777 custom-iso/ #Creating Iso image mkisofs -D -r -V “$IMAGE_NAME” -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/ ks.cfg #Generated by Kickstart Configurator #platform=AMD64 or Intel EM64T #System language lang en_US #Language modules to install langsupport en_US #System keyboard keyboard us #System mouse mouse #System timezone timezone America/Los_Angeles #Root password rootpw --iscrypted somethingsomething #Initial user user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething. #Reboot after installation reboot #Use text mode install text #Install OS instead of upgrade install #Use CDROM installation media cdrom #System bootloader configuration bootloader --location=mbr #Clear the Master Boot Record zerombr yes #Partition clearing information clearpart --all --initlabel #Disk partitioning information part /boot --size 128 --fstype=ext3 --asprimary part / --size 512 --fstype=ext3 --asprimary part swap --size 512 part /tmp --size 512 --fstype=ext3 part /var --size 512 --fstype=ext3 part /usr --size 4096 --fstype=ext3 part /home --size 2048 --fstype=ext3 #System authorization infomation auth --useshadow --enablemd5 #Network information network --bootproto=dhcp --device=eth0 #Firewall configuration firewall --disabled --http --ftp --ssh #X Window System configuration information xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME %post apt-get update mkdir /home/user txt.cfg default autoinstall label autoinstall menu label ^Install Custom Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg -- label install menu label ^Install Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet -- label cloud menu label ^Multiple server install with MAAS kernel /install/vmlinuz append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet -- label check menu label ^Check disc for defects kernel /install/vmlinuz append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet -- label memtest menu label Test ^memory kernel /install/mt86plus label hd menu label ^Boot from first hard disk localboot 0x80


  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. After reading the log file, the error message is: "The log scan number passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted. I followed the steps below:

    Copy the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates to D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA. (Note: the same error occurs when I copy all the DB files: Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog.)

    When I run my MSSQLSERVER instance a different problem occurs. My server has only C: and D: drives; I don't have an E: drive. How can I get past the error paths below?

    Error log:

        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.
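    Those e:\sql10_main_t.obj.x86fre\... paths are the build-machine paths baked into the template master database, which is why they appear only after copying the template files: the template master knows nothing about this server's msdb/model locations. Rather than trying to redirect those paths, the usual route is to rebuild the system databases from the installation media and then restore the most recent backups of master and msdb. A hedged sketch of the rebuild step (instance name, account and password are placeholders):

        Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS="DOMAIN\SQLAdmins" /SAPWD="StrongPasswordHere"

    After the rebuild, start the instance in single-user mode (-m) and restore master from the last good backup so the logins and the list of user databases come back, then restore msdb normally.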


  • OSX 10.6.6 SSH md5 break-in check

    - by Alex
    Information Recently one of the linux servers that I access was compromised to steal passwords and ssh keys using a modified ssh binary. This lead me to question if the attacker had compromised my OSX Laptop which had ssh access turned on. A sophos virus scan turned up nothing, and I did not have rkhunter installed before the attack, so I could not compare hashes of the system binaries to be sure. However because OSX is relatively standard for each of their major releases, I asked fiends for md5 hashes md5 /usr/bin/ssh and md5 /usr/sbin/sshd as a basic first check to see if there was anything different about my machine. A few emails later I have found the following data: Version (Arch) [N] MD5 (/usr/bin/ssh) MD5 (/usr/sbin/sshd) OSX 10.5.8 (PPC) [3] 1e9fd483eef23464ec61c815f7984d61 9d32a36294565368728c18de466e69f1 OSX 10.5.8 (intel) [5] 1e9fd483eef23464ec61c815f7984d61 9d32a36294565368728c18de466e69f1 OSX 10.6.x (intel) [7] 591fbe723011c17b6ce41c537353b059 e781fad4fc86cf652f6df22106e0bf0e OSX 10.6.x (intel) [4] 58be068ad5e575c303ec348a1c71d48b 33dafd419194b04a558c8404b484f650 Mine 10.6.6 (intel) df344cc00a294c91230c65e8b7332a79 b5094ccf4cd074aaf573d4f5df75906a where N is the number of machines with with that MD5, and the last row is my laptop. The sample is relatively heterogeneous spaning a few years of different makes and models of Apples, and different versions of 10.6.x. The different hash for my system made me worried that these binaries might have been compromised. So I made sure that my backup for the week was good, and dived into formatting my system and reinstalling OSX. After reinstalling OSX from the manufacturer DVD, I found that the MD5 hash did not change for either ssh, or sshd. Goal Make sure that my system is does not have any malicious software. Should I be worried that this base install of OSX (with no other software installed) has been compromised? I have also updated my system to 10.6.6 and found no change as well. Other Information I am not sure if this is helpful information, but my laptop is a i7 15 inch MacBook Pro bought in Nov 2010, and here is some output from system_profiler: System Software Overview: System Version: Mac OS X 10.6.6 (10J567) Kernel Version: Darwin 10.6.0 64-bit Kernel and Extensions: No Time since boot: 1:37 Hardware: Hardware Overview: Model Name: MacBook Model Identifier: MacBook6,2 Processor Name: Intel Core i7 Processor Speed: 2.66 GHz Number Of Processors: 1 Total Number Of Cores: 2 L2 Cache (per core): 256 KB L3 Cache: 4 MB Memory: 4 GB Processor Interconnect Speed: 4.8 GT/s Boot ROM Version: MBP61.0057.B0C SMC Version (system): 1.58f16 Sudden Motion Sensor: State: Enabled On the laptop, I find: $ codesign -vvv /usr/bin/ssh /usr/bin/ssh: valid on disk /usr/bin/ssh: satisfies its Designated Requirement $ codesign -vvv /usr/sbin/sshd /usr/sbin/sshd: valid on disk /usr/sbin/sshd: satisfies its Designated Requirement $ ls -la /usr/bin/ssh -rwxr-xr-x 1 root wheel 1001520 Feb 11 2010 /usr/bin/ssh $ ls -la /usr/sbin/sshd -rwxr-xr-x 1 root wheel 1304800 Feb 11 2010 /usr/sbin/sshd $ ls -la /sbin/md5 -r-xr-xr-x 1 root wheel 65232 May 18 2009 /sbin/md5 Update So far I have not gotten an answer about this question, but if you could help by increasing the number of hashes that I can compare against, that would be great. To get hashes, and version numbers, run the following on osx: md5 /usr/bin/ssh md5 /usr/sbin/sshd ssh -V sw_vers


  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then I create a Vagrant box from it (see here and here). Finally I run Puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4. The Vagrant box itself is CentOS 6.2.

    This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my Puppet run I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for that project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. epel, are defined in Veewee's postinstall.sh right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course I logged into the box to check whether HOME was set: for user vagrant it was '/home/vagrant' and for root it was 'root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot!

    Cheers, Tobi
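    Since the rabbitmq-plugins call is run by Puppet's exec resource, it inherits whatever minimal environment the Puppet/Vagrant provisioning run has, which can lack HOME even though interactive logins have it set. Puppet's exec type accepts an environment parameter, so one sketch of a fix is to supply HOME explicitly (the directory chosen here is an assumption; any directory writable by the user running the command should do):

        exec { "rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          command     => "rabbitmq-plugins enable rabbitmq_management",
          environment => ["HOME=/root"],
          require     => Package["rabbitmq-server"],
        }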


  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/ Advice/ Suggestions please: I have a load of kit that I love but which currently operate in disconnected, sometimes counter-productive way. Because I never really had a masterplan I just added these things one after another and connected them up in ad hoc ways. Since I bought my Macbook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .Net software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but its not without its limitations/annoyances. We listen to each other's iTunes libraries but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared homedrive that was visible to all machines so all my current files were synced up wherever i was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process. The kit list thus far: Apple MacPro 8GB/3x250GB RAID0 + 1TB Apple MacBook Pro 13" 8GB/250GB - I spend a lot of my work time on a Windows 7 VM on this. Crappy Acer laptop (for children's use - iPlayer, watching movies/tv files) HP Proliant Server 4GB/80GB+160GB+300GB Sun Ultra 10 2 x 80GB (old, but in top-notch condition) PS3 160GB iPod Classic 2 x 8GB iPod Touch Observations: Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT style roaming profile. Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving. The two Macs really could do with sharing a roaming profile or similar. I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly ask what function it serves in my study :-( I've got no shortage of 500GB external USB hard drives iMovie files are very large and ideally would be processed on a RAID system. Apple's TimeMachine isn't so great. If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been moted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.


  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to combine an audio channel with a video. First of all I extract the audio from the original video for processing:

        ffmpeg -i lotr.mp4 lotr.wav

    I then extract all frames for later processing too:

        ffmpeg -i lotr.mp4 -f image2 %d.jpg

    When done processing the audio and video streams, I try to recreate the video:

        ffmpeg -f image2 -r 15 -i %d.jpg new.mp4

    then merge it with the audio:

        ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4

    Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video), but I still get the same output:

        FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jan 18 2011 04:07:05 with gcc 4.4.2
          configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect --enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarssl -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads --cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
          libavutil     50.36. 0 / 50.36. 0
          libavcore      0.16. 1 /  0.16. 1
          libavcodec    52.108. 0 / 52.108. 0
          libavformat   52.93. 0 / 52.93. 0
          libavdevice   52. 2. 3 / 52. 2. 3
          libavfilter    1.74. 0 /  1.74. 0
          libswscale     0.12. 0 /  0.12. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
          Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 kb/s, 15 fps, 15 tbr, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        [wav @ 01fed010] max_analyze_duration reached
        Input #1, wav, from 'lotr.wav':
          Duration: 00:00:29.90, bitrate: 176 kb/s
            Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s
        File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y
        [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p
        Output #0, mp4, to 'new_w_audio.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf52.93.0
            Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-31, 200 kb/s, 15 tbn, 15 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding
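    One thing that may be worth trying, given that new.mp4 is already encoded and the WAV is slightly longer than the video: copy the video stream instead of re-encoding it and tell ffmpeg to stop when the shortest input ends. Both options existed in builds of that era, but this is only a sketch, not a confirmed fix for the hang:

        ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 -vcodec copy -shortest new_w_audio.mp4

    If this particular build's AAC encoder is marked experimental, adding -acodec aac -strict experimental may also be required.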


  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since it's earliest days. But I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is the fact that I can deploy an app and not worry about managing it. I am assuming Heroku is ensuring all OS security updates are timely applied. I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process. Security updates won't automatically be applied to the instances. It seems there are two areas of concerns: New AMI releases - As new AMI releases hit it seems we would want to run the latest (presumably most secure). But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use that new version. Is there a better automated way of rotating your instances into new AMI releases? In between releases there will be security updates released for packages. Seems we want to upgrade those as well. My research seems to indicate people install commands to occasionally run a yum update. But since new instances are created/destroyed based on usage it seems that the new instances would not always have the updates (i.e. the time between the instance creation and the first yum update). So occasionally you will have instances that aren't patched. And you are also going to have instances constantly patching themselves until the new AMI release is applied. My other concern is that perhaps these security updates haven't gone through Amazon's own review (like the AMI releases do) and it might break my app to automatically update them. I know Dreamhost once had a 12 hour outage because they were applying debian updates completely automatically without any review. I want to make sure the same thing doesn't happen to me. So my question is does Amazon provide a way to offer fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really more of just a install script and after that you are on your own (other than the monitoring and deployment tools they provide)?
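    For the in-between-AMI patching concern, Elastic Beanstalk's .ebextensions mechanism can at least make every newly launched instance patch itself at boot and keep patching on a schedule, which narrows the window described above. A hedged sketch for an Amazon Linux based environment (the file name is arbitrary; the --security flag assumes the yum security plugin is available, otherwise a plain yum -y update works):

        # .ebextensions/01-security-updates.config
        packages:
          yum:
            yum-cron: []
        commands:
          01_apply_security_updates_on_launch:
            command: yum -y update --security
        services:
          sysvinit:
            yum-cron:
              enabled: true
              ensureRunning: true

    It doesn't make Beanstalk fully managed in the Heroku sense (new AMI releases still have to be rolled in by rebuilding the environment), but it removes the unpatched gap between instance creation and the first scheduled update.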

    Read the article

  • NRPE Warning threshold must be a positive integer

    - by Frida
    OS: Ubuntu 12.10 Server 64 bits
    I've installed Icinga, with ido2db, pnp4nagios and icinga-web (latest release, following the instructions given in the documentation, installation with apt, etc.). I am using icinga-web to monitor my hosts. For the moment I have just my localhost, and all is perfect. I am trying to add a host and monitor it with NRPE (version 2.12):
    root@server:/etc/icinga# /usr/lib/nagios/plugins/check_nrpe -H client
    NRPE v2.12
    The configuration looks good. I've created a file /etc/icinga/objects/client.cfg as below on the server:
    root@server:/etc/icinga/objects# cat client.cfg
    define host{
            use                     generic-host    ; Name of host template to use
            host_name               client
            alias                   client.toto
            address                 xx.xx.xx.xx
    }
    # Service Definitions
    define service{
            use                     generic-service
            host_name               client
            service_description     CPU Load
            check_command           check_nrpe_1arg!check_load
    }
    define service{
            use                     generic-service
            host_name               client
            service_description     Number of Users
            check_command           check_nrpe_1arg!check_users
    }
    And added this to my /etc/icinga/commands.cfg:
    # this command runs a program $ARG1$ with no arguments
    define command {
            command_name    check_nrpe
            command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
    }
    # this command runs a program $ARG1$ with no arguments
    define command {
            command_name    check_nrpe_1arg
            command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }
    But it does not work. These are the logs from the client:
    Dec 3 19:45:12 client nrpe[604]: Connection from xx.xx.xx.xx port 32641
    Dec 3 19:45:12 client nrpe[604]: Host address is in allowed_hosts
    Dec 3 19:45:12 client nrpe[604]: Handling the connection...
    Dec 3 19:45:12 client nrpe[604]: Host is asking for command 'check_users' to be run...
    Dec 3 19:45:12 client nrpe[604]: Running command: /usr/lib/nagios/plugins/check_users -w -c
    Dec 3 19:45:12 client nrpe[604]: Command completed with return code 3 and output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
    Dec 3 19:45:12 client nrpe[604]: Return Code: 3, Output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
    Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx port 32129
    Dec 3 19:44:49 client nrpe[32582]: Host address is in allowed_hosts
    Dec 3 19:44:49 client nrpe[32582]: Handling the connection...
    Dec 3 19:44:49 client nrpe[32582]: Host is asking for command 'check_load' to be run...
    Dec 3 19:44:49 client nrpe[32582]: Running command: /usr/lib/nagios/plugins/check_load -w -c
    Dec 3 19:44:49 client nrpe[32582]: Command completed with return code 3 and output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
    Dec 3 19:44:49 client nrpe[32582]: Return Code: 3, Output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
    Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx closed.
    Have you any ideas?
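    The log lines point at the cause: on the client, check_users and check_load are executed with bare -w -c and no threshold values, which is why they exit with code 3. With check_nrpe_1arg no arguments are sent from the server, so the thresholds have to be defined in the client's nrpe.cfg (or passed from the server with -a, which additionally requires dont_blame_nrpe=1 on the client). A sketch of the client-side definitions, assuming the stock Ubuntu paths; the threshold values are examples only:
    # /etc/nagios/nrpe.cfg on the client
    command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
    command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
    After editing, restart the NRPE daemon on the client (service nagios-nrpe-server restart) and re-test with check_nrpe -H client -c check_load from the server.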

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge the shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console.
    These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20), they are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections; this is working fine, and we have been using DFSR for this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however).
    Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:
    Log Name:      Application
    Source:        ESENT
    Date:          27/11/2012 10:25:55
    Event ID:      533
    Task Category: General
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HAL-FS-01.example.com
    Description:
    DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db: A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.
    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log.
    Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises.
    I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve the problem?

    Read the article

  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1
    I just assembled an exact replica of this server and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration, confirmed by logging in to the initial account. There must be a hardware component that is faulty. Since the error mentions HT, which I believe to be Hyper Threading, I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware, or recommend another approach to this issue.
    Issue
    I was attempting to install Ubuntu 10.04 LTS on this system with the on-board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following:
    Node0: NB WatchDog Timer Error
    Node1: HT Link Sync Error
    Node2: HT Link Sync Error
    ...
    Node7: HT Link Sync Error
    Press F1 to continue/resume. After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage and go through the same process from there.
    Hardware
    CPU: AMD Opteron X12 6172 G34 2.1G 18MB
    Motherboard: Supermicro H8QG6-F
    HDD: WD Caviar Green 2TB 5.4K RPM
    Troubleshooting
    I disabled RAID10 on the system and installed Ubuntu on a single drive; it installed successfully. I then went back to a RAID10 setup and attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot, the system reported "Error: file not found" and then dropped me into the GRUB rescue console. I feel I have aggravated the problem at this point, because when I attempt to install from the boot disc again, the system reboots upon hitting Enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc.
    I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am now experiencing with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard, according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know why it is failing to install.

    Read the article

  • Error installing scipy on Mountain Lion with Xcode 4.5.1

    - by Xster
    Environment: Mountain Lion 10.8.2, Xcode 4.5.1 command line tools, Python 2.7.3, virtualenv 1.8.2 and numpy 1.6.2.
    When installing scipy with pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev" in a fresh virtualenv, the build fails:
    llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c
    In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory
    In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return
    In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory
    In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’:
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function)
    /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return
    error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -Os -w -pipe -march=core2 -msse4 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/xiao/.virtualenv/lib/python2.7/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1
    Is it supposed to be looking for headers from my system frameworks? Is the development version of scipy no longer good for the latest version of Mountain Lion/Xcode?
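    The failing compiler is llvm-gcc, which the Xcode 4.5 command line tools still install as /usr/bin/llvm-gcc but which does not ship the immintrin.h header that vecLib's vfp.h now includes. A workaround that was widely reported at the time (a sketch only, not verified against this exact setup) is to force the build to use clang instead:
    export CC=clang
    export CXX=clang++
    pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"
    A Fortran compiler (e.g. gfortran from Homebrew) is still needed for the rest of scipy; adding FFLAGS="-ff2c" is another commonly suggested tweak when linking against Accelerate, though that part is an assumption rather than something taken from the question.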

    Read the article

  • Configure Postfix to Port other than 25

    - by bwheeler96
    I've done quite a bit of googling on how to reconfigure Postfix to work on a different port, but I still can't find the line(s) people keep talking about in my master.cf. I'm using OS X Mountain Lion, and my ISP blocks traffic both ways on port 25. People have said to look for a line that says
    smtp      inet  n       -       n       -       -       smtpd
    but I can't find it. This is (what I believe to be) my unmodified master.cf:
    # ==== Begin auto-generated section ========================================
    # This section of the master.cf file is auto-generated by the Server Admin
    # Mail backend plugin whenever mails settings are modified.
    smtp       inet  n       -       n       -       1       postscreen
    smtpd      pass  -       -       n       -       -       smtpd
    dnsblog    unix  -       -       n       -       0       dnsblog
    tlsproxy   unix  -       -       n       -       0       tlsproxy
    submission inet  n       -       n       -       -       smtpd
      -o smtpd_tls_security_level=encrypt
    smtp       unix  -       -       n       -       -       smtp
    # === End auto-generated section ===========================================
    # Modern SMTP clients communicate securely over port 25 using the STARTTLS command.
    # Some older clients, such as Outlook 2000 and its predecessors, do not properly
    # support this command and instead assume a preconfigured secure connection
    # on port 465. This was sometimes called "smtps", but such usage was never
    # approved by the IANA and therefore conflicts with another, legitimate assignment.
    # For more details about managing secure SMTP connections with postfix, please see:
    # http://www.postfix.org/TLS_README.html
    # To read more about configuring secure connections with Outlook 2000, please read:
    # http://support.microsoft.com/default.aspx?scid=kb;en-us;Q307772
    # Apple does not support the use of port 465 for this purpose.
    # After determining that connecting clients do require this behavior, you may choose
    # to manually enable support for these older clients by uncommenting the following
    # four lines.
    #465       inet  n       -       n       -       -       smtpd
    #  -o smtpd_tls_wrappermode=yes
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #628       inet  n       -       n       -       -       smtp
    pickup     fifo  n       -       n       60      1       pickup
    cleanup    unix  n       -       n       -       0       cleanup
    qmgr       fifo  n       -       n       300     1       qmgr
    #qmgr      fifo  n       -       n       300     1       oqmgr
    tlsmgr     unix  -       -       n       1000?   1       tlsmgr
    rewrite    unix  -       -       n       -       -       trivial-rewrite
    bounce     unix  -       -       n       -       0       bounce
    defer      unix  -       -       n       -       0       bounce
    trace      unix  -       -       n       -       0       bounce
    verify     unix  -       -       n       -       1       verify
    sacl-cache unix  -       -       n       -       1       sacl-cache
    flush      unix  n       -       n       1000?   0       flush
    proxymap   unix  -       -       n       -       -       proxymap
    proxywrite unix  -       -       n       -       1       proxymap
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay      unix  -       -       n       -       -       smtp
            -o smtp_fallback_relay=
    #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq      unix  n       -       n       -       -       showq
    error      unix  -       -       n       -       -       error
    retry      unix  -       -       n       -       -       error
    discard    unix  -       -       n       -       -       discard
    local      unix  -       n       n       -       -       local
    virtual    unix  -       n       n       -       -       virtual
    lmtp       unix  -       -       n       -       -       lmtp
    anvil      unix  -       -       n       -       1       anvil
    scache     unix  -       -       n       -       1       scache
    #
    # ====================================================================
    # Interfaces to non-Postfix software. Be sure to examine the manual
    # pages of the non-Postfix software to find out what options it wants.
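    The line people refer to is the port-25 listener. In this auto-generated file it is the "smtp inet n - n - 1 postscreen" entry at the top (this build hands port 25 to postscreen first), and the "submission inet ... smtpd" entry below it already listens on port 587. If the goal is simply to accept mail on a port the ISP does not block, one option (a sketch; the port number is arbitrary) is to add another inet listener and reload Postfix:
    # added to master.cf - listen on port 2525 as well (example port)
    2525      inet  n       -       n       -       -       smtpd
    sudo postfix reload
    For outgoing mail blocked on port 25, the usual approach is instead a relayhost on port 587 in main.cf (e.g. relayhost = [smtp.example.net]:587) - the relay host name here is an assumption about the ISP, not something taken from the question.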

    Read the article
