Search Results

Search found 15388 results on 616 pages for 'status bar'.


  • How can I make the Windows 7 taskbar behave like a cross between the old Quick Launch and new Superbar?

    - by frumious
    I really like the taskbar in Windows 7, I think combining buttons to launch apps and the icons that show your running apps is groovy. However, because I like having as much space as possible, I've got small icons enabled and shrunk the bar down to one row. I've also told it not to group the running apps unless there's no space left (to save me having to work harder to find the particular window I want), which also means that they have captions, and are thus quite wide. The (admittedly small) problem this gives me is that I can pin all my favourite apps to the bar, which looks much like the old Quick Launch bar, but when I launch them the running apps become much wider, and the unlaunched apps get lost amongst them. I can manually change the order to fix this, but the next time I launch a different app I'll be back to square one. What I'd prefer is for small unlaunched icons to be kept on the left, and wider running apps to move over to the right, which for me would be the best of both worlds. Is there any way I can organise that? I'm aware that one can use the traditional Quick Launch bar in Windows 7, but that's not what I'm after; I generally prefer the Windows 7 way.

    Read the article

  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key setup on the remote Linux server under my home directory like this: /home/me/.ssh/authorized_keys I also have the same DSA key setup for a few other accounts on the machine named "foo" and "bar". I can log into all of the accounts fine without any password. Therefore, the DSA keys are all setup correctly. The strange behavior I'm seeing is when debugging the SSH connection. During the connection, the SSH debug is outputting this: debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220) debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620) debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60) debug2: key: /Users/me/.ssh/id_rsa (0x0) This is strange for so many reasons, but essentially, why is SSH listing out keys on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? These files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why is SSH even listing those? On my MacBook Pro, I only have a DSA key, but SSH is listing out an RSA key, what's that all about? Another user on the server doesn't get any of these messages when they log in, and they have the exact same setup for their DSA key and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
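
    One thing worth checking, offered as a hypothesis: those debug2 "key:" lines are the identities the client intends to offer, and if a local ssh-agent is holding keys that were added elsewhere, the "paths" shown are just the comments stored with those keys, which can look like file names from another machine. A minimal sketch for inspecting the agent and offering only one key (the host name is illustrative):

        # List identities currently held by the local ssh-agent; the "file names"
        # shown are just the comments stored with each key.
        ssh-add -l

        # ~/.ssh/config -- offer only the named key to this host:
        Host myserver.example.com
            IdentitiesOnly yes
            IdentityFile ~/.ssh/id_dsa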

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    Hi All, I just installed CentOS 5.4 on a rackspace cloud server and installed virtualmin which all seemed to go fine. The only problem I have is that I can not access the virtual servers directories via FTP. I get the following from filezilla: Status: Connecting to 1.1.1.1:21... Status: Connection established, waiting for welcome message... Response: 220 FTP Server ready. Command: USER username Response: 331 Password required for username. Command: PASS ******* Response: 230 User username logged in. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is current directory. Command: TYPE I Response: 200 Type set to I Command: PASV Response: 227 Entering Passive Mode (1,1,1,1,216,214) Command: LIST Error: Connection timed out Error: Failed to retrieve directory listing and I get this from my /var/secure/log file Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0) Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful. Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username' Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available Any help would be greatly appreciated, I'm not totally new to Linux but it's not my strongest subject. I do like to know exactly why problems occur though and how exactly to fix them so the more detail the better! cheers
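
    A LIST that times out immediately after PASV usually points at blocked passive data ports rather than at PAM itself. As a sketch (the directives are standard ProFTPD ones; the port range and public IP are placeholders), you could pin the passive range and open it in the firewall:

        # /etc/proftpd.conf -- pin passive-mode data connections to a known range
        PassivePorts 49152 50000
        # If the server sits behind NAT, advertise the public address:
        # MasqueradeAddress 1.1.1.1

        # Open the same range in the firewall (iptables example), then restart proftpd
        iptables -A INPUT -p tcp --dport 49152:50000 -j ACCEPT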

    Read the article

  • ipmi - can't ping or remotely connect

    - by Fidel
    I've tried configuring the IPMI controller to accept remote connections, but I can't even ping it. Here is it status: #/usr/local/bin/ipmitool lan print 2 Set in Progress : Set Complete Auth Type Support : NONE PASSWORD Auth Type Enable : Callback : : User : NONE PASSWORD : Operator : PASSWORD : Admin : PASSWORD : OEM : IP Address Source : Static Address IP Address : 192.168.1.112 Subnet Mask : 255.255.255.0 MAC Address : 00:a0:a5:67:45:25 IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10 BMC ARP Control : ARP Responses Enabled, Gratuitous ARP Enabled Gratituous ARP Intrvl : 8.0 seconds Default Gateway IP : 192.168.1.1 Default Gateway MAC : 00:00:00:00:00:00 802.1q VLAN ID : Disabled 802.1q VLAN Priority : 0 RMCP+ Cipher Suites : 0,1,2,3 Cipher Suite Priv Max : uaaaXXXXXXXXXXX : X=Cipher Suite Unused : c=CALLBACK : u=USER : o=OPERATOR : a=ADMIN : O=OEM # /usr/local/bin/ipmitool user list 2 ID Name Enabled Callin Link Auth IPMI Msg Channel Priv Limit 1 true false true true USER 2 admin true false true true ADMINISTRATOR # /usr/local/bin/ipmitool channel getaccess 2 2 Maximum User IDs : 5 Enabled User IDs : 2 User ID : 2 User Name : admin Fixed Name : No Access Available : callback Link Authentication : enabled IPMI Messaging : enabled Privilege Level : ADMINISTRATOR # /usr/local/bin/ipmitool channel info 2 Channel 0x2 info: Channel Medium Type : 802.3 LAN Channel Protocol Type : IPMB-1.0 Session Support : multi-session Active Session Count : 0 Protocol Vendor ID : 7154 Volatile(active) Settings Alerting : disabled Per-message Auth : disabled User Level Auth : disabled Access Mode : always available Non-Volatile Settings Alerting : disabled Per-message Auth : disabled User Level Auth : disabled Access Mode : always available # /usr/local/bin/ipmitool chassis status System Power : on Power Overload : false Power Interlock : inactive Main Power Fault : false Power Control Fault : false Power Restore Policy : unknown Last Power Event : Chassis Intrusion : inactive Front-Panel Lockout : inactive Drive Fault : false Cooling/Fan Fault : false # arp Address HWtype HWaddress Flags Mask Iface 192.168.1.112 ether 00:A0:A5:67:45:25 C bond0 # /usr/local/bin/ipmitool -I lan -H 192.168.1.112 -U admin -P admin chassis power status Error: Unable to establish LAN session Unable to get Chassis Power Status In summary. It exists on the ARP list so arp's are being broadcast. I can't ping it and can't connect to it. Can anyone spot any glaring mistakes in the configuration? Many thanks, Fidel
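
    If the LAN channel configuration is otherwise sane, one possibility is that the BMC shares the NIC with the host and will not answer requests coming from its own operating system, so testing from a different machine is worth trying first. A sketch of re-applying the channel and user settings (channel 2 and the admin credentials are taken from the output above):

        # Re-enable LAN access on channel 2 and make sure user 2 is usable
        ipmitool lan set 2 access on
        ipmitool channel setaccess 2 2 callin=on ipmi=on link=on privilege=4
        ipmitool user enable 2
        ipmitool user set password 2 admin

        # Then, from a *different* host on the LAN:
        ipmitool -I lanplus -H 192.168.1.112 -U admin -P admin chassis power status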

    Read the article

  • Is this hard disk dead?

    - by Korjavin Ivan
    I'm not sure if this is the right site for this question, but let me try. Lately I've been having a problem with a hard disk. Sometimes it makes a strange sound, and I see this in the logs: $ dmesg | grep ata4 [29409.945516] ata4.00: exception Emask 0x10 SAct 0xf SErr 0x90202 action 0xe frozen [29409.945529] ata4.00: irq_stat 0x00400000, PHY RDY changed [29409.945538] ata4: SError: { RecovComm Persist PHYRdyChg 10B8B } [29409.945546] ata4.00: failed command: READ FPDMA QUEUED [29409.945562] ata4.00: cmd 60/30:00:56:22:5f/00:00:00:00:00/40 tag 0 ncq 24576 in [29409.945573] ata4.00: status: { DRDY } [29409.945580] ata4.00: failed command: READ FPDMA QUEUED [29409.945594] ata4.00: cmd 60/18:08:8e:22:5f/00:00:00:00:00/40 tag 1 ncq 12288 in [29409.945605] ata4.00: status: { DRDY } [29409.945611] ata4.00: failed command: READ FPDMA QUEUED [29409.945625] ata4.00: cmd 60/08:10:46:02:66/00:00:00:00:00/40 tag 2 ncq 4096 in [29409.945635] ata4.00: status: { DRDY } [29409.945641] ata4.00: failed command: READ FPDMA QUEUED [29409.945656] ata4.00: cmd 60/80:18:ee:04:66/00:00:00:00:00/40 tag 3 ncq 65536 in [29409.945666] ata4.00: status: { DRDY } [29409.945679] ata4: hard resetting link [29413.976083] ata4: softreset failed (device not ready) [29413.976097] ata4: applying SB600 PMP SRST workaround and retrying [29414.148070] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [29414.184986] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd [29414.243280] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd [29414.243292] ata4.00: configured for UDMA/133 [29414.243324] ata4: EH complete [680674.804563] ata4: exception Emask 0x50 SAct 0x0 SErr 0x90a02 action 0xe frozen [680674.804575] ata4: irq_stat 0x00400000, PHY RDY changed [680674.804584] ata4: SError: { RecovComm Persist HostInt PHYRdyChg 10B8B } [680674.804603] ata4: hard resetting link [680678.840561] ata4: softreset failed (device not ready) Is this ata4 SATA hard drive dead? Must I change it ASAP? Do I need to provide more info?
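
    Link resets and PHYRdyChg errors like these can come from a failing drive, but also from a bad SATA cable or port, so the drive's own SMART data is the quickest way to tell the two apart. A minimal sketch with smartmontools, assuming ata4 maps to /dev/sdb (check dmesg to confirm which device it really is):

        # Overall health, error log and attribute table
        smartctl -a /dev/sdb

        # Run a short self-test, then read the result a few minutes later
        smartctl -t short /dev/sdb
        smartctl -l selftest /dev/sdb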

    Read the article

  • QMail does not deliver to remote mailboxes for my own domain

    - by lorenzo-s
    Sorry for the title, I don't know how to sum up this situation. I have a web server at mydomain.com, running qmail for website-related mail delivery (i.e. newsletters, sign-up confirmations, etc.). qmail here is used only to send mail, because I have a fully working Google Apps Gmail account associated with mydomain.com for normal email receiving. qmail runs fine when sending email to remote addresses, for example to [email protected], but fails when sending to [email protected]. I think it's because the server thinks it has to manage mailboxes for mydomain.com locally, instead of redirecting them to Gmail. Here is /var/log/qmail/current for two emails: the first one is sent without problems to example.com, the second one fails because it's for mydomain.com: 2012-11-15 15:04:11.551933500 new msg 262580 2012-11-15 15:04:11.551936500 info msg 262580: bytes 5604 from <[email protected]> qp 5185 uid 33 2012-11-15 15:04:11.575910500 starting delivery 316: msg 262580 to remote [email protected] 2012-11-15 15:04:11.575912500 status: local 0/10 remote 1/20 2012-11-15 15:04:12.189828500 delivery 316: success: 74.125.136.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1352991894_j49si13055539eep.9/ 2012-11-15 15:04:12.189830500 status: local 0/10 remote 0/20 2012-11-15 15:04:12.189831500 end msg 262580 2012-11-15 16:49:20.270332500 new msg 262580 2012-11-15 16:49:20.270336500 info msg 262580: bytes 2192 from <[email protected]> qp 5479 uid 33 2012-11-15 16:49:20.315125500 starting delivery 323: msg 262580 to local [email protected] 2012-11-15 16:49:20.315128500 status: local 1/10 remote 0/20 2012-11-15 16:49:20.320855500 delivery 323: failure: Sorry,_no_mailbox_here_by_that_name._(#5.1.1)/ 2012-11-15 16:49:20.320858500 status: local 0/10 remote 0/20 2012-11-15 16:49:20.372911500 bounce msg 262580 qp 5484 2012-11-15 16:49:20.372914500 end msg 262580 As you can see, it says: Sorry,_no_mailbox_here_by_that_name. I can't say it's wrong :) How do I solve this? How do I let Google Apps Gmail manage incoming email for mydomain.com for messages sent by mydomain.com's qmail server?
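
    The log line "starting delivery ... to local [email protected]" suggests qmail still considers mydomain.com a locally hosted domain. A sketch of the usual change, assuming a stock /var/qmail layout (verify the Google MX host against your actual DNS records):

        # 1. Remove the "mydomain.com" line from /var/qmail/control/locals
        #    (and from control/virtualdomains if it appears there).
        # 2. Keep it in control/rcpthosts if the server should still accept
        #    mail addressed to it, and route it out to Google:
        echo "mydomain.com:ASPMX.L.GOOGLE.COM" >> /var/qmail/control/smtproutes
        # 3. Restart qmail (e.g. svc -t /service/qmail-send under daemontools)
        #    so the control files are re-read.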

    Read the article

  • Solaris Mysql Failure and Unable to Restart

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for 6 months now. Today, all of a sudden, it crashed. When typing 'mysql' as a regular user it gives the error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'. When typing mysql as root it says mysql: not found. The service tries to start mysql; it stays up for 9-10 seconds and then the process restarts. Below are the application logs. Application-database-mysql_mysql-csk.log [ May 30 22:37:52 Enabled. ] [ May 30 22:37:58 Rereading configuration. ] [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:37:59 Method "start" exited with status 0 ] [ May 30 22:38:13 Stopping because all processes in service exited. ] [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:13 Method "stop" exited with status 0 ] [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:38:13 Method "start" exited with status 0 ] [ May 30 22:38:25 Stopping because all processes in service exited. ] [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:25 Method "stop" exited with status 0 ] I am hoping someone might have run into this before and might know how to fix it.
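
    Two quick checks, offered as a sketch rather than a diagnosis: "mysql: not found" for root is usually just a PATH difference, and the reason for the 10-second restart loop is normally spelled out in mysqld's own error log inside the data directory:

        # mysqld_safe writes an error log named <hostname>.err in the datadir
        tail -100 /dbpool1/data/*.err

        # Start it by hand to watch the failure directly
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data &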

    Read the article

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and log output in elasticsearch / kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter { if [program] == "nginx-access" { grok { match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded = false host = " Here is my logstash config file input { syslog { type => syslog port => 5544 } } filter { if [program] == "nginx-access" { grok { match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[% {HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded => false host => "localhost" cluster => "cluster01" } email { from => "[email protected]" match => [ "Error 504 Gateway Timeout", "status,504", "Error 404 Not Found", "status,404" ] subject => "%{matchName}" to => "[email protected]" via => "smtp" body => "Here is the event line that occured: %{@message}" htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>" } } I've checked line 23 which is referenced in the error and it looks fine....I've tried taking out the filter, and everything works...without changing that line. Please help
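
    If the config was pasted verbatim, note that the grok pattern ends with a typographic closing quote (”) instead of a plain double quote, and there is a stray space in "% {HTTPDATE..."; either one can produce exactly this kind of "Expected one of #, {, ,, ]" parse error. A corrected sketch of just the filter block:

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
            }
          }
        }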

    Read the article

  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online I use SSH a lot and on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command line options to export itself as a shell function, which got me thinking: Could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR = "cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even do with an ssh option to write a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any would exist, are also fine.
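
    For one-off use there is a simpler route than environment variables: feed the local copy of the script to the remote shell over the same connection, or copy it up for the session. A sketch (paths are illustrative):

        # Run the local copy of the script on the remote host without installing it
        ssh user@host 'sh -s' < ~/bin/bar

        # Or drop it into the remote home directory in one step, no wget required
        scp ~/bin/bar user@host:~/bar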

    Read the article

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has apache and wsgi working. I also have a python script that works just fine using the make_server command: if __name__ == '__main__': from wsgiref.simple_server import make_server srv = make_server('', 8080, display_status) srv.serve_forever() Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf: WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup moin And moin.wsgi is as listed: import sys, os sys.path.insert(0, '/usr/local/share/moin') from MoinMoin.web.serving import make_application application = make_application(shared=True) QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this: WSGIScriptAlias /status /mypath/status.wsgi WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup status And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
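
    Yes, that is the pattern: a .wsgi file only needs to expose a module-level callable named application, which takes the place of the make_server()/serve_forever() loop, and the apache2.conf stanza can mirror the Moin one shown above. A minimal sketch (the mystatus module name is hypothetical; point it at wherever display_status actually lives):

        # /mypath/status.wsgi
        import sys
        sys.path.insert(0, '/mypath')

        from mystatus import display_status

        # mod_wsgi calls this for every request; no serve_forever() needed
        application = display_status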

    Read the article

  • Varnish with multiple sites/boxes

    - by jerhinesmith
    Is it possible for Varnish to redirect traffic to different IPs based on the URL? For example, is the following setup feasible (and if so, what would the VCL look like): *.example.com points to Varnish IP address When a request is made to foo.example.com, varnish checks the cache and sends the request to Server1's IP address on a cache miss. When a request is made to bar.example.com, varnish checks the cache and sends the request to Server2's IP address on a cache miss. foo and bar are (for the most part) completely unrelated sites. They use the same engine, but have different content and their own distinct databases. Since there previously was no penalty for doing so (other than cost), we split them up into two separate boxes so that a ton of traffic to foo won't have a negative impact on visitors browsing around bar. I could set up two instances of varnish and have one serve up foo's static content and the other serve up bar's, but as there doesn't seem to be much overhead to running Varnish, I think (perhaps mistakenly) that it would make more sense to go with one Varnish server that redirects the traffic to the appropriate box on a cache miss.
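
    Yes, a single Varnish instance can pick a backend per Host header. A sketch in the VCL 2.x/3.x syntax that was current for this kind of setup (backend IPs are placeholders; in Varnish 4 and later the assignment becomes req.backend_hint):

        backend server1 { .host = "10.0.0.1"; .port = "80"; }
        backend server2 { .host = "10.0.0.2"; .port = "80"; }

        sub vcl_recv {
            if (req.http.host ~ "^foo\.example\.com$") {
                set req.backend = server1;
            } else if (req.http.host ~ "^bar\.example\.com$") {
                set req.backend = server2;
            }
        }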

    Read the article

  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose default directory is /root/ftpuser. This user can log in fine. The user is the owner of the directory & the directory is even set to 777 permissions. But the user can't upload anything; the output is: Status: Connecting to xx.xxx.xxx.xx:21... Status: Connection established, waiting for welcome message... Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ---------- Response: 220-You are user number 2 of 50 allowed. Response: 220-Local time is now 05:12. Server port: 21. Response: 220-This is a private system - No anonymous login Response: 220-IPv6 connections are also welcome on this server. Response: 220 You will be disconnected after 15 minutes of inactivity. Command: USER ftpuser Response: 331 User ftpuser OK. Password required Command: PASS ********* Response: 230 OK. Current restricted directory is / Command: OPTS UTF8 ON Response: 200 OK, UTF-8 enabled Status: Connected Status: Starting upload of test.html Command: CWD / Response: 550 Can't change directory to /: Permission denied Command: MKD / Response: 550 Can't create directory: Permission denied Command: CWD / Response: 550 Can't change directory to /: Permission denied Command: SIZE /btn.png Response: 550 Can't check for file existence Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Response: 227 Entering Passive Mode (66,232,106,33,52,218) Command: STOR /test.html Response: 553 Can't open that file: Permission denied Error: Critical file transfer error It's a Linux CentOS 6 server. Any ideas?
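
    One likely culprit, offered as a hypothesis: the chroot succeeds, but the home directory sits under /root, which is normally not traversable by non-root users, so the FTP account cannot write inside its own jail no matter what mode the leaf directory has. A sketch of moving the account somewhere conventional (paths are illustrative):

        # Give the account a home outside /root with sane ownership
        mkdir -p /home/ftpuser
        usermod -d /home/ftpuser ftpuser
        chown ftpuser: /home/ftpuser
        chmod 755 /home/ftpuser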

    Read the article

  • I would like to edit the layout of my keyboard just a bit - what's the best way?

    - by Codemonkey
    I'm using an Apple keyboard which has some annoyances compared to other keyboards. Namely, the Alt_L and Super_L keys are swapped, and the bar and less keys are swapped ("|" and "<"). I've written an Xmodmap file to swap the keys back: keycode 49 = less greater less greater onehalf threequarters keycode 64 = Super_L NoSymbol Super_L keycode 94 = bar section bar section brokenbar paragraph keycode 108 = Super_R NoSymbol Super_R keycode 133 = Alt_L Meta_L Alt_L Meta_L keycode 134 = Alt_R Meta_R Alt_R Meta_R I did this by identifying the keys using xev and the default modmap xmodmap -pke and swapping the keycodes. xev now identifies all my keys as correct, which is awesome! I can also use the correct keys to type the bar and less than symbols. (I followed this answer on askubuntu: http://askubuntu.com/q/24916/52719) But it seems the change isn't very deep. For instance, the Super key is now broken in the Compiz Settings Manager. No shortcuts involving the Super key works (but the Alt key does). Also the settings dialog for Gnome Do doesn't heed the changes in xmodmap, and I can't open the Gnome Do window anymore if I use any of the remapped keys. So to summarize, everything broke. I would like a deeper way of telling Ubuntu (or any other Linux distro for that matter) which keys are which on the keyboard. Is there a way to edit the Keyboard Layout directly? I'm using the Norwegian Bokmål keyboard layout. Does it reside in a file somewhere I could edit? Any comments, previous experiences or relevant stray thoughts would be greatly appreciated - Thanks
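
    Two deeper-than-xmodmap options worth trying, both offered as sketches: an XKB option for the Alt/Super swap (which toolkits such as Compiz respect, unlike raw keycode remaps), and the hid_apple kernel module's iso_layout switch for the swapped <> and | keys on Apple keyboards:

        # Swap Alt and Super at the XKB level
        setxkbmap -option altwin:swap_alt_win

        # Fix the <> / | key swap at the driver level (takes effect immediately;
        # add hid_apple.iso_layout=0 to the module options to make it persist)
        echo 0 | sudo tee /sys/module/hid_apple/parameters/iso_layout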

    Read the article

  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped file sizes greater than 1MB and outputs files along with their sizes as a report. This is the code: myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'` # Count files that exceed 1MB oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l` if [ $oversizeFiles -eq 0 ];then status="PASS" else status="CHECK FAILED. FOUND FILES GREATER THAN 1MB" fi echo -e $status"\n"$myReport The problem is that the ls command shows the file sizes as 1.0MB in the report, but the status is "FAIL" because the "$oversizeFiles" variable's value is 2. I checked the file sizes on disk and 2 files are 1.1MB. Why the discrepancy? How should I modify the script so that I can generate an accurate report? BTW, I'm on a Mac. Here is what the man page for "find" says on my Mac OS X: -size n[ckMGTP] True if the file's size, rounded up, in 512-byte blocks is n. If n is followed by a c, then the primary is true if the file's size is n bytes (characters). Similarly if n is followed by a scale indicator then the file's size is compared to n scaled as: k kilobytes (1024 bytes) M megabytes (1024 kilobytes) G gigabytes (1024 megabytes) T terabytes (1024 gigabytes) P petabytes (1024 terabytes)
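
    The mismatch is largely a display issue: ls -h rounds to one decimal, so files slightly over 1 MiB can be printed as "1.0M" even though find correctly counts them as oversize. Making both sides work in exact bytes keeps the report and the check consistent; a sketch of the two changed lines (same style as the original script):

        # Report exact byte counts (drop -h) so the listing agrees with the size check
        myReport=`ls -ltr "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`

        # Make the threshold explicit in bytes (1 MiB = 1048576 bytes)
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1048576c -iname "*.gz" | wc -l`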

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the url http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: Now there are two possible URLs to the same resource: http://www.example.com/foo/bar/ (slash URL) http://www.example.com/foo/bar/index.html (index.html URL) By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this: location ~ /index\.html$ { return 404; } But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just redirect to the slash URL.
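
    The behaviour described at the end is expected: the internal index rewrite changes $uri but leaves $request_uri as the client sent it, so testing the latter lets the slash form through while rejecting explicit index.html requests. A sketch, placed inside the server block (a plain return inside if is one of the safe uses):

        # inside the server {} block
        if ($request_uri ~ "/index\.html$") {
            return 404;
        }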

    Read the article

  • Use preforker(ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com: http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor But superuser.com might be a better fit. Can anyone answer this? I want to run a server program using the preforker ruby gem under supervisor, but an error occurs. I wrote the following test program using preforker. #!/usr/bin/env ruby require 'rubygems' require 'preforker' Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master| while master.wants_me_alive? do puts "hello" sleep 10 end end.run And the following supervisor config. [program:test-preforker] command=/home/tkono/tmp/test-preforker.rb stdout_logfile_maxbytes=1MB stderr_logfile_maxbytes=1MB stdout_logfile=/var/log/%(program_name)s.log stderr_logfile=/var/log/%(program_name)s.log autorestart=true Then, reload supervisor. # supervisorctl reload Restarted supervisord Here is the supervisor log file. 2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file) 2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing 2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized 2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2012-12-13 17:50:47,215 INFO supervisord started with pid 12437 2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440 2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441 2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442 2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443 2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly Please tell me what is wrong. Can a program using preforker not run under supervisor? preforker: https://github.com/dcadenas/preforker supervisor: http://supervisord.org/index.html
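
    A couple of things that commonly cause an immediate "exit status 1" with nothing useful in the program log, offered as guesses: supervisor starts the command with a minimal environment (so the ruby/gem setup that works in your shell may be missing), and stderr is not captured unless you ask for it. A sketch of a more explicit config (the interpreter and gem paths are illustrative):

        [program:test-preforker]
        ; call the interpreter explicitly rather than relying on the shebang/exec bit
        command=/usr/bin/ruby /home/tkono/tmp/test-preforker.rb
        ; send stderr to the same log so the actual Ruby error becomes visible
        redirect_stderr=true
        stdout_logfile=/var/log/%(program_name)s.log
        ; pass the gem environment supervisor would otherwise not provide
        environment=GEM_HOME="/usr/lib/ruby/gems/1.8",GEM_PATH="/usr/lib/ruby/gems/1.8"
        autorestart=true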

    Read the article

  • LightTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure: tools/ LightTPD/ htdocs/ PHP/ I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow but I don't want to bother with FastCGI for now :) ). My problem is when I try to put the htdocs outside of LightTPD folder, like this: htdocs/ tools/ LightTPD/ PHP/ I update the document root to server_root + "/../../htdocs" and while static HTML pages work fine, PHP pages stop working (they return a "No input file specified"). I literally just change the document root, I didn't change anything in the php.ini or anywhere else. Please also note that I left all doc_root, user_dir and cgi.force_redirect to the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it ouside. Any idea of why it's breaking?? Here's my lightTPD.conf: server.modules = ( "mod_access", "mod_accesslog", "mod_alias", "mod_cgi", "mod_status", ) include "variables.conf" include "mimetype.conf" # THIS WORKS server.document-root = server_root + "/htdocs" # THIS DOESN'T #server.document-root = server_root + "/../../htdocs" server.upload-dirs = ( temp_dir ) index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" ) server.event-handler = "libev" url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } static-file.exclude-extensions = ( ".php", ".pl", ".cgi" ) server.errorlog = server_root + "/logs/error.log" ######### Options that are good to be but not neccesary to be changed ####### dir-listing.activate = "enable" #### CGI module cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" ) status.status-url = "/server-status" status.config-url = "/server-config"
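
    "No input file specified" is php-cgi reporting that it could not open the file named by SCRIPT_FILENAME, and a document root containing "/../.." segments is a plausible trigger on Windows. As a sketch, building an absolute path with lighttpd's own variable syntax (the same mechanism the config already uses for server_root) avoids the relative hops; the drive/path below is a placeholder:

        # variables.conf / lightTPD.conf
        var.basedir = "C:/dev"                    # wherever htdocs really lives
        server.document-root = basedir + "/htdocs"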

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCPing between 2 machines on our network and am getting the following error: ++++++++++++++++++++++++++++++++++++++++++++++ DTCping 1.9 Report for WEB2 ++++++++++++++++++++++++++++++++++++++++++++++ RPC server is ready ++++++++++++Validating Remote Computer Name++++++++++++ 03-03, 13:39:45.099-->Start DTC connection test Name Resolution: internal-->10.20.3.236-->internal.something 03-03, 13:39:45.114-->Start RPC test (WEB2-->internal) Problem:fail to invoke remote RPC method Error(0x6BA) at dtcping.cpp @303 -->RPC pinging exception -->1722(The RPC server is unavailable.) RPC test failed I have also run RPC ping, where I get what I believe is the same error: C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal Exception 1722 (0x000006BA) Number of records is: 4 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 1722 Detection location is 323 Flags is 0 NumberOfParameters is 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 1237 Detection location is 313 Flags is 0 NumberOfParameters is 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 10060 Detection location is 311 Flags is 0 NumberOfParameters is 3 Long val: 135 Pointer val: 0 Pointer val: 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 10060 Detection location is 318 Flags is 0 NumberOfParameters is 0 I'm pretty sure that the exception number 1722 is the key, but I can't find any info about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sysadmins now. But I can do a regular ping between the machines. Other than that, I am reading a lot of articles talking about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI, the machine is running Windows Server 2003 R2 SP2.
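
    Exception 1722 is literally "The RPC server is unavailable", which in a DTC context usually means TCP 135 plus the dynamic RPC port range is not reachable between the boxes. If a firewall does sit in the middle, the standard approach is to pin the dynamic range with the Rpc\Internet registry keys and open that range plus 135; a sketch (the range is only an example, and a restart is needed afterwards):

        REM Restrict the RPC dynamic range so the firewall can open it (plus TCP 135)
        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v Ports /t REG_MULTI_SZ /d "5000-5200"
        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v PortsInternetAvailable /t REG_SZ /d Y
        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v UseInternetPorts /t REG_SZ /d Y
        REM Reboot (or at least restart the MSDTC service) after changing these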

    Read the article

  • XNA Notes 009

    - by George Clingerman
    This past week the MVPs (myself included) were on Microsoft campus for the MVP summit. So I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it’s not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it’s doing and creating. I really can’t keep up and that’s fantastic! But here’s what I *did* notice while I was there on Microsoft Campus (and I did make sure to point out to the XNA team several of these very cool happenings while I had their ears). Time Critical XNA News: The XNA team wants you to know that Dream Build Play registration is now open! http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx Join the XNA-UK create on March 24, 2011 at the Microsoft Tech Days Conference http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx XNA Team: Shawn Hargreaves shares one of the coolest things that’s happened in the XNA community http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he’s almost there!) http://nickgravelyn.com/pixelman2/ XNA MVPs: A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can’t be shared, but I’m sure if you’re curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more. Catalin Zima and team release the free WP7 game Chickens Can Dream http://twitter.com/CatalinZima/statuses/41174062923390976 http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/ Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx XNA Developers: Michael B. McLaughlin posts about ANTS Memory Profile and creates a CheckMemoryAllocationGame sample (extremely useful if you’re looking to see how much memory some operation allocates!) http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011 http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php Xbox LIVE Indie Games (XBLIG): Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St. 
Patricks Day http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx Ska Studios away at GDC but still very post happy as always http://www.ska-studios.com/2011/03/02/swamped-picture-pack/ http://www.ska-studios.com/2011/02/28/the-february-showcase/ http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/ Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie http://justpressstart.net/?p=516 Gamergeddon Xbox Indie Game Round Up - February 27th http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/ http://www.gamergeddon.com/category/xbox-360/indie-games/ GameMarx does a round up of all the Xbox Live Indie Game podcasts that are currently available http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx GameMarx episode 11 http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx In perhaps what I feel is the most exciting news I’ve heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launch XboxIndies.com! http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx http://xboxindies.com/ Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/ VVGindiecast Episode 1 with guests Derek Strickland(Mr_Deeke), Kris Steele(Kriswd40 from FunInfused Games) and Dave Voyles(From armlessoctopus.com) http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/ If you’re doing Xbox LIVE Indie Game Reviews get in touch with XboxIndies.com to get into their aggregated feed http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189 B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!) http://www.igf.com/php-bin/entry2011.php?id=374 Armless Ocotopus interview at GDC 2011 with Soulcaster creator Ian Stocker http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/ MommysBestGames gets a nod in the DarkBasic newsletter where it features the Explosionade Editor (just do a search for Explosionade to get to the interesting bits!) http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html You may be hearing the cries of FortressCraft (coming soon to XBLIG) being so wrong for stealing the idea from MineCraft. But did you know the the game MineCraft started from was an XNA game called Infiniminer? XNA is getting it’s fingers into EVERYTHING! http://www.minecraftwiki.net/wiki/Infiniminer XNA Development: TorqueX is NOT dead thanks to the tremendous efforts of the XNA Community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!) http://www.garagegames.com/community/blogs/view/20878 http://torquecev.com/ Dave Henry has posted XNA 3.x adding platformer start kit to the network game state management on his new site http://twitter.com/#!/mort8088/status/43407715908853760 http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/ Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.) 
http://twitter.com/#!/xzodia04/status/43466830412660736 http://xnaviewer.codeplex.com/ Unit testing an XNA game with Resharper and NUnit http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/ XNA for Silverlight developers: Part 5 - Input (touch + gestures) http://ht.ly/1bxwUE Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math http://twitter.com/#!/mikebmcl/status/42587074725036032 http://chortle.ccsu.edu/VectorLessons/vectorIndex.html DigitalRune Resources Pooling in XNA (Part 1) http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx JohnK “bobthecbuilder” released a new SunBurn Update that lowers the requirements for Windows Games http://twitter.com/#!/bobthecbuilder/status/43457306578522112 http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx Quick update on the Indiefreaks Game Framework v0.4 development status http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/

    Read the article

  • MVC 2 Editor Template for Radio Buttons

    - by Steve Michelotti
    A while back I blogged about how to create an HTML Helper to produce a radio button list.  In that post, my HTML helper was “wrapping” the FluentHtml library from MvcContrib to produce the following html output (given an IEnumerable list containing the items “Foo” and “Bar”): 1: <div> 2: <input id="Name_Foo" name="Name" type="radio" value="Foo" /><label for="Name_Foo" id="Name_Foo_Label">Foo</label> 3: <input id="Name_Bar" name="Name" type="radio" value="Bar" /><label for="Name_Bar" id="Name_Bar_Label">Bar</label> 4: </div> With the release of MVC 2, we now have editor templates we can use that rely on metadata to allow us to customize our views appropriately.  For example, for the radio buttons above, we want the “id” attribute to be differentiated and unique and we want the “name” attribute to be the same across radio buttons so the buttons will be grouped together and so model binding will work appropriately. We also want the “for” attribute in the <label> element being set to correctly point to the id of the corresponding radio button.  The default behavior of the RadioButtonFor() method that comes OOTB with MVC produces the same value for the “id” and “name” attributes so this isn’t exactly what I want out the the box if I’m trying to produce the HTML mark up above. If we use an EditorTemplate, the first gotcha that we run into is that, by default, the templates just work on your view model’s property. But in this case, we *also* was the list of items to populate all the radio buttons. It turns out that the EditorFor() methods do give you a way to pass in additional data. There is an overload of the EditorFor() method where the last parameter allows you to pass an anonymous object for “extra” data that you can use in your view – it gets put on the view data dictionary: 1: <%: Html.EditorFor(m => m.Name, "RadioButtonList", new { selectList = new SelectList(new[] { "Foo", "Bar" }) })%> Now we can create a file called RadioButtonList.ascx that looks like this: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> 2: <% 3: var list = this.ViewData["selectList"] as SelectList; 4: %> 5: <div> 6: <% foreach (var item in list) { 7: var radioId = ViewData.TemplateInfo.GetFullHtmlFieldId(item.Value); 8: var checkedAttr = item.Selected ? "checked=\"checked\"" : string.Empty; 9: %> 10: <input type="radio" id="<%: radioId %>" name="<%: ViewData.TemplateInfo.HtmlFieldPrefix %>" value="<%: item.Value %>" <%: checkedAttr %>/> 11: <label for="<%: radioId %>"><%: item.Text %></label> 12: <% } %> 13: </div> There are several things to note about the code above. First, you can see in line #3, it’s getting the SelectList out of the view data dictionary. Then on line #7 it uses the GetFullHtmlFieldId() method from the TemplateInfo class to ensure we get unique IDs. We pass the Value to this method so that it will produce IDs like “Name_Foo” and “Name_Bar” rather than just “Name” which is our property name. However, for the “name” attribute (on line #10) we can just use the normal HtmlFieldPrefix property so that we ensure all radio buttons have the same name which corresponds to the view model’s property name. We also get to leverage the fact the a SelectListItem has a Boolean Selected property so we can set the checkedAttr variable on line #8 and use it on line #10. Finally, it’s trivial to set the correct “for” attribute for the <label> on line #11 since we already produced that value. 
Because the TemplateInfo class provides all the metadata for our view, we’re able to produce this view that is widely re-usable across our application. In fact, we can create a couple HTML helpers to better encapsulate this call and make it more user friendly: 1: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, params string[] items) 2: { 3: return htmlHelper.RadioButtonList(expression, new SelectList(items)); 4: } 5:   6: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> items) 7: { 8: var func = expression.Compile(); 9: var result = func(htmlHelper.ViewData.Model); 10: var list = new SelectList(items, "Value", "Text", result); 11: return htmlHelper.EditorFor(expression, "RadioButtonList", new { selectList = list }); 12: } This allows us to simply the call like this: 1: <%: Html.RadioButtonList(m => m.Name, "Foo", "Bar" ) %> In that example, the values for the radio button are hard-coded and being passed in directly. But if you had a view model that contained a property for the collection of items you could call the second overload like this: 1: <%: Html.RadioButtonList(m => m.Name, Model.FooBarList ) %> The Editor templates introduced in MVC 2 definitely allow for much more flexible views/editors than previously available. By knowing about the features you have available to you with the TemplateInfo class, you can take these concepts and customize your editors with extreme flexibility and re-usability.

    Read the article

  • System locking up with suspicious messages about hard disk

    - by Chris Conway
    My system has started behaving strangely, intermittently locking up. I see messages like the following in syslog: Nov 18 22:22:00 claypool kernel: [ 3428.078156] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Nov 18 22:22:00 claypool kernel: [ 3428.078163] ata3.00: irq_stat 0x40000000 Nov 18 22:22:00 claypool kernel: [ 3428.078167] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Nov 18 22:22:00 claypool kernel: [ 3428.078182] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 22:22:00 claypool kernel: [ 3428.078184] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error) Nov 18 22:22:00 claypool kernel: [ 3428.078188] ata3.00: status: { DRDY } Nov 18 22:22:00 claypool kernel: [ 3428.080887] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Nov 18 22:22:00 claypool kernel: [ 3428.080890] ata3.00: irq_stat 0x40000000 Nov 18 22:22:00 claypool kernel: [ 3428.080893] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Nov 18 22:22:00 claypool kernel: [ 3428.080905] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 22:22:00 claypool kernel: [ 3428.080906] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error) Nov 18 22:22:00 claypool kernel: [ 3428.080910] ata3.00: status: { DRDY } And then this: Nov 18 23:13:56 claypool kernel: [ 6544.000798] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen Nov 18 23:13:56 claypool kernel: [ 6544.000804] ata1.00: failed command: FLUSH CACHE EXT Nov 18 23:13:56 claypool kernel: [ 6544.000814] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 23:13:56 claypool kernel: [ 6544.000815] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout) Nov 18 23:13:56 claypool kernel: [ 6544.000819] ata1.00: status: { DRDY } Nov 18 23:13:56 claypool kernel: [ 6544.000825] ata1: hard resetting link Nov 18 23:14:01 claypool kernel: [ 6549.360324] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:06 claypool kernel: [ 6554.008091] ata1: COMRESET failed (errno=-16) Nov 18 23:14:06 claypool kernel: [ 6554.008103] ata1: hard resetting link Nov 18 23:14:11 claypool kernel: [ 6559.372246] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:16 claypool kernel: [ 6564.020228] ata1: COMRESET failed (errno=-16) Nov 18 23:14:16 claypool kernel: [ 6564.020235] ata1: hard resetting link Nov 18 23:14:21 claypool kernel: [ 6569.380109] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:31 claypool kernel: [ 6579.460243] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Nov 18 23:14:31 claypool kernel: [ 6579.486595] ata1.00: configured for UDMA/133 Nov 18 23:14:31 claypool kernel: [ 6579.486601] ata1.00: retrying FLUSH 0xea Emask 0x4 Nov 18 23:14:31 claypool kernel: [ 6579.486939] ata1.00: device reported invalid CHS sector 0 Nov 18 23:14:31 claypool kernel: [ 6579.486952] ata1: EH complete Nov 18 23:17:01 claypool CRON[3910]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Nov 18 23:17:01 claypool CRON[3908]: (CRON) error (grandchild #3910 failed with exit status 1) Nov 18 23:17:01 claypool postfix/sendmail[3925]: fatal: open /etc/postfix/main.cf: No such file or directory Nov 18 23:17:01 claypool CRON[3908]: (root) MAIL (mailed 1 byte of output; but got status 0x004b, #012) Nov 18 23:39:01 claypool CRON[4200]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) There are no messages marked after 
23:39. When I next tried to use the machine, it would not return from the screensaver (blank screen), nor switch to another terminal, and I had to hard reboot it. [UPDATE] The output of smartctl is here. I had trouble getting this, because / is being mounted read-only (?!), which prevents most applications from running. Also, it may not be related, but I have the following worrying messages in dmesg: [ 10.084596] k8temp 0000:00:18.3: Temperature readouts might be wrong - check erratum #141 [ 10.098477] i2c i2c-0: nForce2 SMBus adapter at 0x600 [ 10.098483] ACPI: resource nForce2_smbus [io 0x0700-0x073f] conflicts with ACPI region SM00 [??? 0x00000700-0x0000073f flags 0x30] [ 10.098486] ACPI: This conflict may cause random problems and system instability [ 10.098487] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 10.098509] i2c i2c-1: nForce2 SMBus adapter at 0x700 [ 10.112570] Linux agpgart interface v0.103 [ 10.155329] atk: Resources not safely usable due to acpi_enforce_resources kernel parameter [ 10.161506] it87: Found IT8712F chip at 0x290, revision 8 [ 10.161517] it87: VID is disabled (pins used for GPIO) [ 10.161527] it87: in3 is VCC (+5V) [ 10.161528] it87: in7 is VCCH (+5V Stand-By) [ 10.161560] ACPI: resource it87 [io 0x0295-0x0296] conflicts with ACPI region ECRE [??? 0x00000290-0x000002af flags 0x45] [ 10.161562] ACPI: This conflict may cause random problems and system instability [ 10.161564] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [UPDATE 2] I swapped in a new SATA cable, per Phil's suggestion. The current output of smartctl is here, if it helps. [UPDATE 3] I don't think the cable fixed it. The system hasn't locked up yet, but my media player crashed a few minutes ago and I have the following in the syslog: Nov 20 16:07:17 claypool kernel: [ 2294.400033] ata1: link is slow to respond, please be patient (ready=0) Nov 20 16:07:47 claypool kernel: [ 2324.084581] ata1: COMRESET failed (errno=-16) Nov 20 16:07:47 claypool kernel: [ 2324.084588] ata1: limiting SATA link speed to 1.5 Gbps Nov 20 16:07:47 claypool kernel: [ 2324.084592] ata1: hard resetting link I get the following response from smartctl: $ sudo smartctl -a /dev/sda [sudo] password for chris: sudo: Can't open /var/lib/sudo/chris/0: Read-only file system smartctl 5.40 2010-03-16 r3077 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: /0:0:0:0 Version: scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46 >> Terminate command early due to bad response to IEC mode page A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    Read the article

  • ASP.NET Web API - Screencast series Part 3: Delete and Update

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we'll start to modify data on the server using DELETE and POST methods. So far we've been looking at GET requests, and the difference between standard browsing in a web browser and navigating an HTTP API isn't quite as clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something'd probably have a form that POSTs a request back to a controller that needs to know that it's really supposed to be deleting something even though POST was really designed to create things, so it does the work and then returns some HTML back to the client that says whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server. That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works: public Comment DeleteComment(int id) { Comment comment; if (!repository.TryGet(id, out comment)) throw new HttpResponseException(HttpStatusCode.NotFound); repository.Delete(id); return comment; } If you look back at the GET /api/comments code in Part 2, you'll see that they start the exact same because the use cases are kind of similar - we're looking up an item by id and either displaying it or deleting it. So the only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found, as the same HTTP 404 handling works fine here, too. Pretty much all "traditional" browsing uses just two HTTP verbs: GET and POST, so you might not be all that used to DELETE requests and think they're hard. Not so! Here's the jQuery method that calls the /api/comments with the DELETE verb: $(function() { $("a.delete").live('click', function () { var id = $(this).data('comment-id'); $.ajax({ url: "/api/comments/" + id, type: 'DELETE', cache: false, statusCode: { 200: function(data) { viewModel.comments.remove( function(comment) { return comment.ID == data.ID; } ); } } }); return false; }); }); So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard. But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That makes the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful so the comment can be deleted. Adding a new comment uses the POST verb. 
It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a c# model object to add to repository, but there are some interesting differences. public HttpResponseMessage<Comment> PostComment(Comment comment) { comment = repository.Add(comment); var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created); response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString()); return response; } First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the  model object and Web API would wrap it up in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, and they'll be automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL that points to the newly created comment in the Location header, and we can do that by explicitly creating that HttpResposeMessage and then setting the header information. And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the location header tells it where to get it, and all it needs in the JSON payload is the actual content. Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web API's, and especially in methods that allow creating or deleting data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if binding directly to model objects, etc. In Part 4, we'll extend on our simple querying methods form Part 2, adding in support for paging and querying.

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
    Did you know you can "print an outline"? You can print any outline, or any portion of an outline. Why might you want to print an outline in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or, even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor.

    To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns, refer to the Add and Remove Columns section of the Content Development.pdf guide.)

    To print an outline from the Outline Editor:
    1. Open a module or section document in the Outline Editor.
    2. Expand the documents to display the details that you want included in the report.
    3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

    Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel, you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns appear in the outline. When you use the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a Team Lead or Author can better report project status. Read more about Managing Workflow below.

    Managing Workflow
    The Properties toolpane contains special properties that allow authors to track document status (State) as well as assign Document Ownership.

    Assign Content State
    The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting, or renaming the values.

    To assign a State value to a document:
    1. Make sure you are working online.
    2. Display the Properties toolpane.
    3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the State cell.
    5. Select a value from the list.

    Assign Document Ownership
    In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

    To assign a document owner:
    1. Make sure you are working online.
    2. On the View menu, choose Properties.
    3. Select the document(s) to which you want to assign document responsibility. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the Owner cell.
    5. Select a name from the list.

    Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think.

    - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library

    A library that extends the functionality of Entity Framework.

    Features
    - Batch Update and Delete
    - Future Queries
    - Audit Log

    Project Package and Source
    NuGet Package: PM> Install-Package EntityFramework.Extended
    NuGet: http://nuget.org/List/Packages/EntityFramework.Extended
    Source: http://github.com/loresoft/EntityFramework.Extended

    Batch Update and Delete
    A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.

    Deleting
        //delete all users where FirstName matches
        context.Users.Delete(u => u.FirstName == "firstname");

    Update
        //update all tasks with status of 1 to status of 2
        context.Tasks.Update(
            t => t.StatusId == 1,
            t => new Task {StatusId = 2});

        //example of using an IQueryable as the filter for the update
        var users = context.Users
            .Where(u => u.FirstName == "firstname");
        context.Users.Update(
            users,
            u => new User {FirstName = "newfirstname"});

    Future Queries
    Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is a great win. Using this feature is as simple as appending .Future() to the end of your queries. To use Future Queries, make sure to import the EntityFramework.Extensions namespace.

    Future queries are created with the following extension methods:
    - Future()
    - FutureFirstOrDefault()
    - FutureCount()

    Sample
        // build up queries
        var q1 = db.Users
            .Where(t => t.EmailAddress == "[email protected]")
            .Future();
        var q2 = db.Tasks
            .Where(t => t.Summary == "Test")
            .Future();
        // this triggers the loading of all the future queries
        var users = q1.ToList();

    In the example above, two queries are built up; as soon as one of them is enumerated, it triggers the batch load of both queries.

        // base query
        var q = db.Tasks.Where(t => t.Priority == 2);
        // get total count
        var q1 = q.FutureCount();
        // get page
        var q2 = q.Skip(pageIndex).Take(pageSize).Future();
        // triggers execute as a batch
        int total = q1.Value;
        var tasks = q2.ToList();

    In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call.

    Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.

    Audit Log
    The Audit Log feature captures the changes to entities any time they are submitted to the database. The Audit Log captures only the entities that are changed, and only the properties on those entities that were changed. The before and after values are recorded. AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.

    Fluent Configuration
        // config audit when your application is starting up...
        var auditConfiguration = AuditConfiguration.Default;
        auditConfiguration.IncludeRelationships = true;
        auditConfiguration.LoadRelationships = true;
        auditConfiguration.DefaultAuditable = true;

        // customize the audit for Task entity
        auditConfiguration.IsAuditable<Task>()
            .NotAudited(t => t.TaskExtended)
            .FormatWith(t => t.Status, v => FormatStatus(v));

        // set the display member when status is a foreign key
        auditConfiguration.IsAuditable<Status>()
            .DisplayMember(t => t.Name);

    Create an Audit Log
        var db = new TrackerContext();
        var audit = db.BeginAudit();
        // make some updates ...
        db.SaveChanges();
        var log = audit.LastLog;
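
    To make the ToXml() step a bit more concrete, here is a minimal sketch of capturing an audit log around a tracked change and saving the xml. It assumes the TrackerContext and Task model from the samples above; the file name and the idea of writing the xml to disk are illustrative choices, not part of the library.

        // minimal sketch, assuming the TrackerContext/Task model used in the samples above
        using System.IO;
        using System.Linq;

        var db = new TrackerContext();
        var audit = db.BeginAudit();

        // make a tracked change through the context
        var task = db.Tasks.FirstOrDefault(t => t.Priority == 2);
        if (task != null)
        {
            task.StatusId = 2;
            db.SaveChanges();
        }

        // LastLog holds only the changed entities/properties, with before and after values;
        // ToXml() turns that into xml, written to a file here purely as an example
        var log = audit.LastLog;
        File.WriteAllText("audit-log.xml", log.ToXml());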

    Read the article

  • Problem occurs during installation of Moses scripts

    - by lenny99
    We got an error when compiling the moses scripts. The process is as follows:

        minakshi@minakshi-Vostro-3500:~/Desktop/monu/moses/scripts$ make release
        # Compile the parts
        make all
        make[1]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts'
        # Building memscore may fail e.g. if boost is not available.
        # We ignore this because traditional scoring will still work and memscore isn't used by default.
        cd training/memscore ; \
        ./configure && make \
        || ( echo "WARNING: Building memscore failed."; \
        echo 'training/memscore/memscore' >> ../../release-exclude )
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for g++... g++
        checking whether the C++ compiler works... yes
        checking for C++ compiler default output file name... a.out
        checking for suffix of executables...
        checking whether we are cross compiling... no
        checking for suffix of object files... o
        checking whether we are using the GNU C++ compiler... yes
        checking whether g++ accepts -g... yes
        checking for style of include used by make... GNU
        checking dependency style of g++... gcc3
        checking for gcc... gcc
        checking whether we are using the GNU C compiler... yes
        checking whether gcc accepts -g... yes
        checking for gcc option to accept ISO C89... none needed
        checking dependency style of gcc... gcc3
        checking for boostlib >= 1.31.0... yes
        checking for cos in -lm... yes
        checking for gzopen in -lz... yes
        checking for cblas_dgemm in -lgslcblas... no
        checking for gsl_blas_dgemm in -lgsl... no
        checking how to run the C++ preprocessor... g++ -E
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for ANSI C header files... yes
        checking for sys/types.h... yes
        checking for sys/stat.h... yes
        checking for stdlib.h... yes
        checking for string.h... yes
        checking for memory.h... yes
        checking for strings.h... yes
        checking for inttypes.h... yes
        checking for stdint.h... yes
        checking for unistd.h... yes
        checking n_gram.h usability... no
        checking n_gram.h presence... no
        checking for n_gram.h... no
        checking for size_t... yes
        checking for ptrdiff_t... yes
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating config.h
        config.status: config.h is unchanged
        config.status: executing depfiles commands
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make all-am
        make[3]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make[3]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        touch release-exclude # No files excluded by default
        pwd=`pwd`; \
        for subdir in cmert-0.5 phrase-extract symal mbr lexical-reordering; do \
        make -C training/$subdir || exit 1; \
        echo "### Compiler $subdir"; \
        cd $pwd; \
        done
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5'
        ### Compiler cmert-0.5
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract'
        ### Compiler phrase-extract
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal'
        ### Compiler symal
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr'
        ### Compiler mbr
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering'
        ### Compiler lexical-reordering
        ## All files that need compilation were compiled
        make[1]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts'
        /bin/sh: ./check-dependencies.pl: not found
        make: *** [release] Error 127

    We don't know why this error occurs. The check-dependencies.pl file exists in the scripts folder ...
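
    Exit code 127 with "not found" from /bin/sh usually means either that the script is not where make expects it (it is invoked as ./check-dependencies.pl from the scripts directory) or that the interpreter named on its #! line cannot be found, which also happens when the file has DOS/CRLF line endings. A few generic checks, assuming they are run from the moses/scripts directory; these are diagnostic suggestions, not steps from the Moses documentation:

        # generic diagnostics for "/bin/sh: ./check-dependencies.pl: not found" (exit 127)
        cd ~/Desktop/monu/moses/scripts   # the directory the failing make rule runs from
        ls -l check-dependencies.pl       # confirm the file really is here (and note its permissions)
        head -n 1 check-dependencies.pl   # the #! line must name an interpreter that exists, e.g. /usr/bin/perl
        file check-dependencies.pl        # "CRLF line terminators" would make that interpreter path unfindable
        perl ./check-dependencies.pl      # invoking it through perl directly bypasses the #! line entirely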

    Read the article
