Search Results

Search found 6679 results on 268 pages for 'echo'.


  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
     PHP, as most of us know, has weak typing. For those who don't, PHP.net says: PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used. Love it or hate it, PHP re-casts variables on-the-fly. So, the following code is valid: $var = "10"; $value = 10 + $var; var_dump($value); // int(20) PHP also allows you to explicitly cast a variable, like so: $var = "10"; $value = 10 + $var; $value = (string)$value; var_dump($value); // string(2) "20" That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of—and fully understand the usefulness of—type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at-will, it can do so even after you force cast a type; and it can do so on-the-fly when you need a certain type in an operation. That makes the following valid: $var = "10"; $value = (int)$var; $value = $value . ' TaDa!'; var_dump($value); // string(8) "10 TaDa!" So what's the point? Can anyone show me a practical application or example of type casting—one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective. Edit in response to Chris' comment Take this theoretical example of a world where user-defined type casting makes sense in PHP: You force cast variable $foo as int -- (int)$foo. You attempt to store a string value in the variable $foo. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user defined type casting exists! The fact that PHP will switch things around as needed makes the point of user defined type casting vague. For example, the following two code samples are equivalent: // example 1 $foo = 0; $foo = (string)$foo; $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo; // example 2 $foo = 0; $foo = (int)$foo; $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo; UPDATE Guess who found himself using typecasting in a practical environment? Yours Truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following: Menu Item 1 .............. $ 4 Menu Item 2 .............. $ 7.5 Menu Item 3 .............. $ 3 The best way I found to do that was to cast the variable as a float: $price = '7.50'; // a string from the database layer. echo 'Menu Item 2 .............. $ ' . (float)$price; PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.
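
     The price-trimming behaviour in the update above can be checked from the shell. A minimal sketch, assuming only that the php command-line binary is installed; the menu text is the example from the question:
       php -r '$price = "7.50"; echo "Menu Item 2 .............. $ " . (float)$price . "\n";'   # prints 7.5 (the cast trims the trailing zero)
       php -r '$price = "7.50"; echo "Menu Item 2 .............. $ " . $price . "\n";'          # without the cast the string stays 7.50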


  • Plymouth and GRUB do not show at all

    - by WarriorIng64
     I am using Ubuntu 11.04 64-bit as my only OS on my desktop computer, which used to only run Ubuntu 10.04 LTS until I had the time to upgrade it with a fresh install. It uses integrated NVIDIA graphics (listed as a GeForce 6150SE nForce 430 by the NVIDIA X Server Settings utility) with the current proprietary driver as provided by the Additional Drivers utility, and has a VGA connection to a 1680x1050 Acer monitor. I used to get the (ugly-looking version of) Plymouth graphical boot screen while under 10.04. It didn't look that great, but I was fine with it. Now, it doesn't show on 11.04 at all during boot (I just get an error message in a moving gray box from the monitor saying "Input Not Supported"), and it only rarely shows on shutdown, all garbled up. I could not get GRUB to show during boot while holding down Shift, either (same error message), but pressing Enter while it should be up starts the system normally. A picture of the error message I was getting: Once fully booted, the system still shows the login screen and desktop just fine. Any information on how to troubleshoot this would be appreciated. If there's any hardware-specific stuff I forgot to include here, let me know the relevant commands to run in a comment below. Things that I've tried: (1) Running Plymouth in a framebuffer: no effect. (2) Booting with nomodeset as a GRUB boot option: no effect. (3) Booting with nomodeset and Plymouth in a framebuffer: no effect other than Plymouth showing during shutdown only. (4) Following the Softpedia instructions for fixing Plymouth's resolution: problem mostly solved, except the logo does not show in Plymouth during boot, and both GRUB and Plymouth are slightly off-center. (5) #4 above, but with nomodeset removed as a GRUB boot option: same effect as #4. (6) #5 above, but with vt.handoff=7 added as a GRUB boot option: same effect as #4. I have added the current contents of /etc/default/grub as requested in the comments: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' GRUB_GFXMODE=1280x1024 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" CURRENT STATUS: I forgot to uncomment one line as per "things that I've tried" #4, so I took care of that. I can now see GRUB during startup when I hold Shift and a normal-looking Plymouth during shutdown... but Plymouth during boot is now just a solid purple screen.
In each case, it's displayed a little off-center to the left, with a thin black bar running down the right side of the monitor. The error pictured above no longer shows. I'd say this problem is about 2/3 solved now. UPDATE: After Natty started freezing up on me, I decided to dual-boot with Oneiric, which unfortunately shows the same problems. Rather than trying all these workarounds though, I decided to do what I should have done from the start and file a pair of bug reports. LAST UPDATE: Bug 850908 has been confirmed as a legitimate nouveaufb bug. I have overwritten my 11.04 partition with 12.04 LTS, and I can confirm at this time that the issue is present there, as well. I will now flag this question to be closed, yet I hope it was helpful for anyone who experienced similar issues; if you are still having the same problem as me, please go there and mark yourself as affected. Thanks!
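
     For reference, this is roughly how the "Plymouth in a framebuffer" and resolution changes described above get applied; a hedged sketch of the commonly posted recipe, not an exact record of what was done here. The conf.d/splash file name comes from that recipe, not from the question:
       sudo nano /etc/default/grub                 # set GRUB_GFXMODE and the video=uvesafb:... option shown above
       echo "FRAMEBUFFER=y" | sudo tee /etc/initramfs-tools/conf.d/splash
       sudo update-grub
       sudo update-initramfs -u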


  • Ubuntu 12.04 Install - Black Screen - No Grub - Have run Boot Repair Disk

    - by Pat
    Day 4 of my purgatory. History: Had problems with live CD at first, had to set the "nomodeset" option ... and then it worked fine. Installed Ubuntu "Alongside" Windows XP from live CD (NOT wubi) Upon reboot after installation, I get the BIOS ... and then a black screen. If I hit shift after the BIOS screen I get text that says "loading GRUB ..", but then no GRUB ... just a black screen. What I have tried to do: Total re-installation ... 3 times now. Also tried with wubi, same black screen. Have gone back to the normal (non-wubi) install. After installation, I tried re-booting the live cd ... and trying to change GRUB file using: sudo gedit /etc/default/grub ... to "nomodeset" and "timeout=10" ... but won't let me save my changes because I'm using the live cd "in memory" system and don't have permissions to the disks (I think). I tried logging in ... but it won't let me. I then read many posts on this site. I'm stuck. This morning, I ran the "boot repair disk". Results here: http://paste.ubuntu.com/1132333/ What I think is wrong: Since I can get the live CD to run (perfectly) with the "nomodeset" option, I think all I need to do is get to GRUB to change that ... but I can't get to GRUB. Appreciate any advice. Pat Day 5 ... I downloaded "Super Grub 2 Disk" from: http://www.supergrubdisk.org/super-grub2-disk/ This looks promising. I can boot the disk and it brings me to a GRUB program that allows me to: 1) Boot to Windows ... which works 2) Boot to Ubuntu ... which does NOT work When I choose boot to Ubuntu, I get lines across the screen which is an obvious video card problem. Likely because I need to set the "nomodeset" option. So, I attempted to use super grub2 to edit the grub file ... but it is TOTALLY different than the Ubuntu grub file ... and I don't know where to put the "nomodeset" option. Still stuck ... The bottom line is that: 1) I need to edit /etc/default/grub on sda(1) ... which is my boot drive ... to add the "nomodeset" option 2) To do that I need to get into grub ... but, I can't. Holding down shift just echo's "loading grub .." and then takes me to a black screen 3) I can boot to the live CD by setting nomodeset .... but I cannot access the hard disk as root ... I can't save my changes! Can anyone tell me how to login as root for the filesystem from the live CD ... so I can edit the grub file on the HARD DISK ... and then run update-grub??
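
     For the last point, a sketch of the usual way to edit GRUB's config on the installed system from the live CD, by mounting the root partition and chrooting into it. /dev/sda1 is taken from the description above; adjust it if the root partition differs:
       sudo mount /dev/sda1 /mnt
       sudo mount --bind /dev /mnt/dev
       sudo mount --bind /proc /mnt/proc
       sudo mount --bind /sys /mnt/sys
       sudo chroot /mnt
       nano /etc/default/grub        # add nomodeset to GRUB_CMDLINE_LINUX_DEFAULT, set GRUB_TIMEOUT=10
       update-grub
       exit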


  • I don't receive mail notifications... Nagios Core 4

    - by alessio
     I have a problem with automatic mail notifications in Nagios Core 4, installed on an Ubuntu 12.04 LTS server... I have tried to send mail as the nagios user and as the root user with the command: echo "test" | mail -s "test mail" [email protected] and I received the mail correctly... but I don't receive any automatic mail notifications, and I don't know what to do to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log): commands.cfg (the path /usr/bin/mail is the right path): # 'notify-host-by-email' command definition define command{ command_name notify-host-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$ } # 'notify-service-by-email' command definition define command{ command_name notify-service-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$ } # 'process-host-perfdata' command definition define command{ command_name process-host-perfdata command_line /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out } # 'process-service-perfdata' command definition define command{ command_name process-service-perfdata command_line /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out } contacts.cfg: define contact{ contact_name supporto alias Supporto Clienti DEA service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r host_notification_options d,r service_notification_commands notify-service-by-email host_notification_commands notify-host-by-email email [email protected] } define contactgroup{ contactgroup_name admins alias Nagios Administrators members supporto } nagios.log: [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB) [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK [1401872931] Auto-save of retention data completed successfully. 
     [1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago mail.log (I think the problem is here, but I don't know how to resolve it): Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name I'm at the last step and I want to finish this Nagios Core setup! :) Any help would be appreciated! :) Host definition (this host has its disk almost full and is in a hard state, but no notification is sent): define host{ use generic-host ; Name of host template to use host_name posta alias Server Posta ESA address 10.10.2.102 parents xen1, xen2 icon_image redhat.png statusmap_image redhat.gd2 } Service definition: define service{ use generic-service host_name xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi service_description Disk Space check_command ssh_all_disks!10%!5% } Notification is allowed for the contact definition you gave, but is it also allowed at the service level? Sorry, but I don't understand this. :(
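
     The mail.log lines suggest sendmail cannot work out a fully qualified name for the host "backups", which is a common reason for queued or silently dropped notification mail. A hedged sketch of one way to test this; backups.example.com is a placeholder FQDN, not a value from the question:
       echo "127.0.1.1 backups.example.com backups" | sudo tee -a /etc/hosts
       sudo service sendmail restart
       # then test delivery as the nagios user, since notifications run under it:
       sudo -u nagios /bin/sh -c 'echo "test" | /usr/bin/mail -s "nagios test" [email protected]'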


  • PPPD Is Locking the Modem and Not Releasing It

    - by Skid
    Got an issue with PPPD on one of our system, we have a PC that is used to talk to remote sites via a dial up connection, the modem can both connect out to the sites and the sites also dial back in. Currently I'm having an issue where some times a site ether dials in or we dial out, and it connects, but then blocks the modem and throws and error to kern.log. Aug 26 14:23:57 TM-SCADA kernel: [191233.503745] INFO: task pppd:8142 blocked for more than 120 seconds. Aug 26 14:23:57 TM-SCADA kernel: [191233.503750] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 26 14:23:57 TM-SCADA kernel: [191233.503753] pppd D ffffffff8180cb40 0 8142 1 0x00000000 Aug 26 14:23:57 TM-SCADA kernel: [191233.503759] ffff8800ac1f5dc8 0000000000000086 ffff8800ac1f5fd8 00000000000137c0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503765] ffff8800ac1f4010 00000000000137c0 00000000000137c0 00000000000137c0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503770] ffff8800ac1f5fd8 00000000000137c0 ffffffff81c13020 ffff880135df5b80 Aug 26 14:23:57 TM-SCADA kernel: [191233.503775] Call Trace: Aug 26 14:23:57 TM-SCADA kernel: [191233.503784] [<ffffffff8166ba29>] schedule+0x29/0x70 Aug 26 14:23:57 TM-SCADA kernel: [191233.503790] [<ffffffff813db005>] tty_ldisc_ref_wait+0x65/0xb0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503796] [<ffffffff813f3061>] ? uart_ioctl+0xd1/0x1c0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503801] [<ffffffff81076960>] ? wake_up_bit+0x40/0x40 Aug 26 14:23:57 TM-SCADA kernel: [191233.503806] [<ffffffff813d3fa0>] tty_ioctl+0x2c0/0x9a0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503810] [<ffffffff811d0549>] ? fcntl_setlk+0x69/0x200 Aug 26 14:23:57 TM-SCADA kernel: [191233.503815] [<ffffffff81195f79>] do_vfs_ioctl+0x99/0x330 Aug 26 14:23:57 TM-SCADA kernel: [191233.503820] [<ffffffff81195212>] ? do_fcntl+0x232/0x410 Aug 26 14:23:57 TM-SCADA kernel: [191233.503823] [<ffffffff811962b1>] sys_ioctl+0xa1/0xb0 Aug 26 14:23:57 TM-SCADA kernel: [191233.503829] [<ffffffff81674e69>] system_call_fastpath+0x16/0x1b The syslog trace stops at "Serial connection established". Aug 28 06:00:03 TM-SCADA pppd[10358]: pppd 2.4.5 started by root, uid 0 Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO CARRIER) Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO DIALTONE) Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (ERROR) Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO ANSWER) Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (BUSY) Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (Username/Password Incorrect) Aug 28 06:00:04 TM-SCADA chat[10360]: send (atz^M) Aug 28 06:00:04 TM-SCADA chat[10360]: expect (OK) Aug 28 06:00:04 TM-SCADA chat[10360]: atz^M^M Aug 28 06:00:04 TM-SCADA chat[10360]: OK Aug 28 06:00:04 TM-SCADA chat[10360]: -- got it Aug 28 06:00:04 TM-SCADA chat[10360]: send (atx0^M) Aug 28 06:00:04 TM-SCADA chat[10360]: expect (OK) Aug 28 06:00:04 TM-SCADA chat[10360]: ^M Aug 28 06:00:04 TM-SCADA chat[10360]: atx0^M^M Aug 28 06:00:04 TM-SCADA chat[10360]: OK Aug 28 06:00:04 TM-SCADA chat[10360]: -- got it Aug 28 06:00:04 TM-SCADA chat[10360]: send (atdt0123456789^M) Aug 28 06:00:04 TM-SCADA chat[10360]: expect (CONNECT/ARQ) Aug 28 06:00:04 TM-SCADA chat[10360]: ^M Aug 28 06:00:30 TM-SCADA chat[10360]: atdt0123456789^M^M Aug 28 06:00:30 TM-SCADA chat[10360]: CONNECT/ARQ Aug 28 06:00:30 TM-SCADA chat[10360]: -- got it Aug 28 06:00:30 TM-SCADA pppd[10358]: Serial connection established. 
     I've only found two ways to release the modem in this condition: the first is to turn the modem off and on again; the second is to delete the serial lock file and then SIGKILL pppd. Now, I could add logic to our software to do the latter if the modem is locked, but I would rather stop it from locking in the first place if at all possible. The reason I put this issue on Ask Ubuntu is that we used to use openSUSE and never had an issue with it; admittedly that was version 11.2 or earlier, so it's still an old kernel, but I figured I would ask here first anyway. Any suggestions of places to look would be appreciated.
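
     A sketch of automating the second manual workaround described above (remove the lock file, then SIGKILL pppd). The lock file name and serial device are assumptions, not taken from the logs; adjust them to whatever device pppd actually uses:
       LOCK=/var/lock/LCK..ttyS0
       if [ -f "$LOCK" ]; then
           sudo rm -f "$LOCK"
           sudo pkill -KILL pppd
       fi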


  • LSI SAS 9240-8i on Ubuntu 12.04 Hangs on Modprobe

    - by Francois Stark
    I used the LSI 9240-8i card on a smaller Intel motherboard with no problems in Ubuntu, with ZFS. However, we rebuilt the server to allow for more disks, using the ASROCK X79 Extreme 11 motherboard. It has 7 PCIe slots, and a LSI 2008 on-board. At first I thought the LSI 9240, when plugged in to PCIe, clashed with the on-board LSI 2008. Every time I plugged in the LSI 9240, modprobe would hang. Then I completely disabled the on-board LSI 2008, and the problem persisted. Last night it booted perfectly ONCE - all LSI cards and connected disks visible... However, all subsequent reboots failed. Both LSI cards' bios scans appear and they both see the disks connected to them, but Ubuntu modprobe hangs. Some selected dmesg lines, with both LSI cards enabled: [ 190.752100] megasas: [ 0]waiting for 1 commands to complete [ 195.772071] megasas: [ 5]waiting for 1 commands to complete [ 200.792079] megasas: [10]waiting for 1 commands to complete [ 205.812078] megasas: [15]waiting for 1 commands to complete [ 210.832037] megasas: [20]waiting for 1 commands to complete [ 215.852077] megasas: [25]waiting for 1 commands to complete [ 220.872072] megasas: [30]waiting for 1 commands to complete [ 225.892078] megasas: [35]waiting for 1 commands to complete [ 230.912086] megasas: [40]waiting for 1 commands to complete [ 235.932075] megasas: [45]waiting for 1 commands to complete [ 240.306157] usb 2-1.5: USB disconnect, device number 7 [ 240.952076] megasas: [50]waiting for 1 commands to complete [ 240.960034] INFO: task modprobe:233 blocked for more than 120 seconds. [ 240.960055] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 240.960067] modprobe D ffffffff81806200 0 233 146 0x00000004 [ 240.960075] ffff880806ae3b48 0000000000000086 ffff880806ae3ae8 ffffffff8101adf3 [ 240.960083] ffff880806ae3fd8 ffff880806ae3fd8 ffff880806ae3fd8 0000000000013780 [ 240.960090] ffffffff81c0d020 ffff880806acae00 ffff880806ae3b58 ffff880808961720 [ 240.960096] Call Trace: [ 240.960107] [<ffffffff8101adf3>] ? native_sched_clock+0x13/0x80 [ 240.960116] [<ffffffff816579cf>] schedule+0x3f/0x60 [ 240.960137] [<ffffffffa00093f5>] megasas_issue_blocked_cmd+0x75/0xb0 [megaraid_sas] [ 240.960144] [<ffffffff8108aa50>] ? add_wait_queue+0x60/0x60 [ 240.960154] [<ffffffffa000a6c9>] megasas_get_seq_num+0xd9/0x260 [megaraid_sas] [ 240.960164] [<ffffffffa000ab31>] megasas_start_aen+0x31/0x60 [megaraid_sas] [ 240.960174] [<ffffffffa00136f1>] megasas_probe_one+0x69a/0x81c [megaraid_sas] [ 240.960182] [<ffffffff813345bc>] local_pci_probe+0x5c/0xd0 [ 240.960189] [<ffffffff81335e89>] __pci_device_probe+0xf9/0x100 [ 240.960197] [<ffffffff8130ce6a>] ? kobject_get+0x1a/0x30 [ 240.960205] [<ffffffff81335eca>] pci_device_probe+0x3a/0x60 [ 240.960212] [<ffffffff813f5278>] really_probe+0x68/0x190 [ 240.960217] [<ffffffff813f5505>] driver_probe_device+0x45/0x70 [ 240.960223] [<ffffffff813f55db>] __driver_attach+0xab/0xb0 [ 240.960227] [<ffffffff813f5530>] ? driver_probe_device+0x70/0x70 [ 240.960233] [<ffffffff813f5530>] ? driver_probe_device+0x70/0x70 [ 240.960237] [<ffffffff813f436c>] bus_for_each_dev+0x5c/0x90 [ 240.960243] [<ffffffff813f503e>] driver_attach+0x1e/0x20 [ 240.960248] [<ffffffff813f4c90>] bus_add_driver+0x1a0/0x270 [ 240.960255] [<ffffffffa001e000>] ? 0xffffffffa001dfff [ 240.960260] [<ffffffff813f5b46>] driver_register+0x76/0x140 [ 240.960266] [<ffffffffa001e000>] ? 0xffffffffa001dfff [ 240.960271] [<ffffffff81335b66>] __pci_register_driver+0x56/0xd0 [ 240.960277] [<ffffffffa001e000>] ? 
0xffffffffa001dfff [ 240.960286] [<ffffffffa001e09e>] megasas_init+0x9e/0x1000 [megaraid_sas] [ 240.960294] [<ffffffff81002040>] do_one_initcall+0x40/0x180 [ 240.960301] [<ffffffff810a82fe>] sys_init_module+0xbe/0x230 [ 240.960307] [<ffffffff81661ec2>] system_call_fastpath+0x16/0x1b [ 240.960314] INFO: task scsi_scan_7:349 blocked for more than 120 seconds.
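
     As a diagnostic sketch only (not a fix): confirm which kernel driver binds each LSI card and, as a test, keep megaraid_sas from loading at boot so the hang can be pinned to that module. The blacklist file name is an arbitrary choice:
       lspci -nnk | grep -iA3 lsi
       dmesg | grep -i megasas | tail -n 50
       echo "blacklist megaraid_sas" | sudo tee /etc/modprobe.d/blacklist-megaraid.conf
       sudo update-initramfs -u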


  • Microsoft Ergonomic 7000 keyboard + mouse lag

    - by user115210
     I recently bought a new Microsoft Ergonomic 7000 keyboard. I started to use it with my Ubuntu 12.04 system and it lags all the time. To be more specific: even under minor CPU usage the mouse lags. By minor I mean Firefox loading a webpage, or opening an application like conky, gnome-terminal, etc. Under higher CPU usage the keyboard lags too; by this I mean it misses keystrokes, so some of what I type never appears. What have I tried so far (none of which worked)? Disabling autosuspend (echo -1 to /sys/bus/usb/.../autosuspend) and, in the same place, setting level to "on". I have tried several video drivers: Vesa, radeon, and the newest Catalyst (the Catalyst beta too). When my keyboard and/or mouse lags I have tried another USB keyboard, which works perfectly, and the same for the mouse. I tried the keyboard and mouse on a different computer running Linux (Ubuntu, Arch, openSUSE) too; the same problem appears, but not on Windows. I tried replacing the battery sets and changing the channel on the dongle, and also tried using the dongle in other USB ports. At the same time I am able to use any other wireless mouse. I changed the XkbModel to "microsoft7000" but it did not solve anything. About the hardware: AMD A8 3870K - Radeon HD6550D, 8 GB of memory, 4 GB of swap (which is almost never used). Here are my PC's details: lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 045e:071d Microsoft Corp. 
Bus 005 Device 002: ID 0461:4ea7 Primax Electronics, Ltd lspci: 00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 12h Processor Root Complex 00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI BeaverCreek [Radeon HD 6550D] 00:11.0 SATA controller: Advanced Micro Devices [AMD] Hudson SATA Controller [IDE mode] (rev 40) 00:12.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:12.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:13.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:13.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:14.0 SMBus: Advanced Micro Devices [AMD] Hudson SMBus Controller (rev 13) 00:14.1 IDE interface: Advanced Micro Devices [AMD] Hudson IDE Controller 00:14.2 Audio device: Advanced Micro Devices [AMD] Hudson Azalia Controller (rev 01) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] Hudson LPC Bridge (rev 11) 00:14.4 PCI bridge: Advanced Micro Devices [AMD] Hudson PCI Bridge (rev 40) 00:14.5 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:15.0 PCI bridge: Advanced Micro Devices [AMD] Device 43a0 00:15.1 PCI bridge: Advanced Micro Devices [AMD] Device 43a1 00:16.0 USB controller: Advanced Micro Devices [AMD] Hudson USB OHCI Controller (rev 11) 00:16.2 USB controller: Advanced Micro Devices [AMD] Hudson USB EHCI Controller (rev 11) 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4 00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6 00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5 00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) dmesg | tail -n 150: http://pastebin.com/sGUAAiUe cat /var/log/Xorg.0.log: http://pastebin.com/fny7ZkN4 Note: The Icon7 Twister Evolution is the replacement mouse to use.
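
     Since the first thing tried above was disabling autosuspend globally, here is a hedged sketch of doing it only for the Microsoft receiver (045e:071d in the lsusb output); the sysfs attribute names are the standard ones, but the exact device path varies per machine:
       for dev in /sys/bus/usb/devices/*; do
           if [ "$(cat "$dev/idVendor" 2>/dev/null)" = "045e" ] && \
              [ "$(cat "$dev/idProduct" 2>/dev/null)" = "071d" ]; then
               echo on | sudo tee "$dev/power/control"
               echo -1 | sudo tee "$dev/power/autosuspend"
           fi
       done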


  • Kernel oops on Linux running in VirtualBox breaks some IO-related functionality on the server

    - by Kristoffer E
    We are having problems with CentOS release 6.3 running in VirtualBox on Windows 7 machines. The symptoms are the following: Everything works as normal for several hours, even days. Then something happens which breaks the system. What we still can do after this something happens: Access the web server Use existing SSH sessions to run top and free What does not work: Starting new SSH sessions (hangs after username and password is entered) Running ls in existing SSH sessions (hangs) SSI includes from our web servers that fetch data from remote machines probably more What we see on the server when this something happens is the following: Load average go from basically nothing to around 3 CPU usage is still low (5%) Disk activity is low (running iostat) Plenty of memory available Plenty of disk space available In /var/log/messages we get the following: Jun 14 01:10:48 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang Jun 14 01:10:48 devvm kernel: Tx Queue <0> Jun 14 01:10:48 devvm kernel: TDH <2e> Jun 14 01:10:48 devvm kernel: TDT <30> Jun 14 01:10:48 devvm kernel: next_to_use <30> Jun 14 01:10:48 devvm kernel: next_to_clean <2e> Jun 14 01:10:48 devvm kernel: buffer_info[next_to_clean] Jun 14 01:10:48 devvm kernel: time_stamp <1038284db> Jun 14 01:10:48 devvm kernel: next_to_watch <2f> Jun 14 01:10:48 devvm kernel: jiffies <103828b42> Jun 14 01:10:48 devvm kernel: next_to_watch.status <0> Jun 14 01:10:50 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang Jun 14 01:10:50 devvm kernel: Tx Queue <0> Jun 14 01:10:50 devvm kernel: TDH <2e> Jun 14 01:10:50 devvm kernel: TDT <30> Jun 14 01:10:50 devvm kernel: next_to_use <30> Jun 14 01:10:50 devvm kernel: next_to_clean <2e> Jun 14 01:10:50 devvm kernel: buffer_info[next_to_clean] Jun 14 01:10:50 devvm kernel: time_stamp <1038284db> Jun 14 01:10:50 devvm kernel: next_to_watch <2f> Jun 14 01:10:50 devvm kernel: jiffies <103829312> Jun 14 01:10:50 devvm kernel: next_to_watch.status <0> Jun 14 01:10:52 devvm kernel: ------------[ cut here ]------------ Jun 14 01:10:52 devvm kernel: WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted) Jun 14 01:10:52 devvm kernel: Hardware name: VirtualBox Jun 14 01:10:52 devvm kernel: NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out Jun 14 01:10:52 devvm kernel: Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] Jun 14 01:10:52 devvm kernel: Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1 Jun 14 01:10:52 devvm kernel: Call Trace: Jun 14 01:10:52 devvm kernel: <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0 Jun 14 01:10:52 devvm kernel: [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50 Jun 14 01:10:52 devvm kernel: [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280 Jun 14 01:10:52 devvm kernel: [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110 Jun 14 01:10:52 devvm kernel: [<ffffffff81459390>] ? dev_watchdog+0x0/0x280 Jun 14 01:10:52 devvm kernel: [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340 Jun 14 01:10:52 devvm kernel: [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0 Jun 14 01:10:52 devvm kernel: [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30 Jun 14 01:10:52 devvm kernel: [<ffffffff81073ec1>] ? 
__do_softirq+0xc1/0x1e0 Jun 14 01:10:52 devvm kernel: [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250 Jun 14 01:10:52 devvm kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30 Jun 14 01:10:52 devvm kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0 Jun 14 01:10:52 devvm kernel: [<ffffffff81073ca5>] ? irq_exit+0x85/0x90 Jun 14 01:10:52 devvm kernel: [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b Jun 14 01:10:52 devvm kernel: [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20 Jun 14 01:10:52 devvm kernel: <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10 Jun 14 01:10:52 devvm kernel: [<ffffffff810149cd>] ? default_idle+0x4d/0xb0 Jun 14 01:10:52 devvm kernel: [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110 Jun 14 01:10:52 devvm kernel: [<ffffffff814e433a>] ? rest_init+0x7a/0x80 Jun 14 01:10:52 devvm kernel: [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430 Jun 14 01:10:52 devvm kernel: [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129 Jun 14 01:10:52 devvm kernel: [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109 Jun 14 01:10:52 devvm kernel: ---[ end trace 2c7bb984812cf120 ]--- Jun 14 01:10:52 devvm kernel: e1000 0000:00:03.0: eth0: Reset adapter Jun 14 01:10:53 devvm abrtd: Directory 'oops-2013-06-14-01:10:53-1537-0' creation detected Jun 14 01:10:53 devvm abrt-dump-oops: Reported 1 kernel oopses to Abrt Jun 14 01:10:53 devvm abrtd: Can't open file '/var/spool/abrt/oops-2013-06-14-01:10:53-1537-0/uid': No such file or directory Jun 14 01:10:55 devvm kernel: Bridge firewalling registered After this we see for a while, every two minutes: Jun 14 01:14:22 devvm kernel: INFO: task events/0:19 blocked for more than 120 seconds. Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jun 14 01:14:22 devvm kernel: events/0 D 0000000000000000 0 19 2 0x00000000 Jun 14 01:14:22 devvm kernel: ffff880116c4fb90 0000000000000046 00000000ffffffff 0000000000000008 Jun 14 01:14:22 devvm kernel: 0000000000016680 0000000000016680 ffff880028210400 0000000000016680 Jun 14 01:14:22 devvm kernel: ffff880116c4daf8 ffff880116c4ffd8 000000000000fb88 ffff880116c4daf8 Jun 14 01:14:22 devvm kernel: Call Trace: Jun 14 01:14:22 devvm kernel: [<ffffffff8105b483>] ? perf_event_task_sched_out+0x33/0x80 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100975d>] ? __switch_to+0x13d/0x320 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d093>] __cancel_work_timer+0x1b3/0x1e0 Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d0f0>] cancel_work_sync+0x10/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffffa01c5ca5>] e1000_down_and_stop+0x25/0x50 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cb695>] e1000_down+0x155/0x200 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbcb0>] ? e1000_reset_task+0x0/0xe0 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbd1e>] e1000_reset_task+0x6e/0xe0 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffff8108c760>] worker_thread+0x170/0x2a0 Jun 14 01:14:22 devvm kernel: [<ffffffff810920d0>] ? autoremove_wake_function+0x0/0x40 Jun 14 01:14:22 devvm kernel: [<ffffffff8108c5f0>] ? 
worker_thread+0x0/0x2a0 Jun 14 01:14:22 devvm kernel: [<ffffffff81091d66>] kthread+0x96/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100c14a>] child_rip+0xa/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff81091cd0>] ? kthread+0x0/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 Jun 14 01:14:22 devvm kernel: INFO: task parted:8069 blocked for more than 120 seconds. Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jun 14 01:14:22 devvm kernel: parted D 0000000000000003 0 8069 7994 0x00000080 Jun 14 01:14:22 devvm kernel: ffff8800908b3bb8 0000000000000082 0000000000000000 ffff88010ab50080 Jun 14 01:14:22 devvm kernel: ffff880116c7d500 0000000000000001 0000000000000000 0000000000000000 Jun 14 01:14:22 devvm kernel: ffff88010ab50638 ffff8800908b3fd8 000000000000fb88 ffff88010ab50638 Jun 14 01:14:22 devvm kernel: Call Trace: Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8112b6d0>] ? lru_add_drain_per_cpu+0x0/0x10 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d177>] flush_work+0x77/0xc0 Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d2f3>] schedule_on_each_cpu+0x133/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff811ad440>] ? invalidate_bh_lru+0x0/0x50 Jun 14 01:14:22 devvm kernel: [<ffffffff8112ae35>] lru_add_drain_all+0x15/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff811adf6a>] invalidate_bdev+0x2a/0x50 Jun 14 01:14:22 devvm kernel: [<ffffffff8125e9a4>] blkdev_ioctl+0x3b4/0x6e0 Jun 14 01:14:22 devvm kernel: [<ffffffff811b381c>] block_ioctl+0x3c/0x40 Jun 14 01:14:22 devvm kernel: [<ffffffff8118dec2>] vfs_ioctl+0x22/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8118e064>] do_vfs_ioctl+0x84/0x580 Jun 14 01:14:22 devvm kernel: [<ffffffff8118e5e1>] sys_ioctl+0x81/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b In /var/spool/abrt/oops-2013-06-14-01:10:53-1537-0 we can see the following information: In backtrace: WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted) Hardware name: VirtualBox NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1 Call Trace: <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0 [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50 [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280 [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110 [<ffffffff81459390>] ? dev_watchdog+0x0/0x280 [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340 [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0 [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30 [<ffffffff81073ec1>] ? __do_softirq+0xc1/0x1e0 [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250 [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30 [<ffffffff8100de85>] ? do_softirq+0x65/0xa0 [<ffffffff81073ca5>] ? 
irq_exit+0x85/0x90 [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20 <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10 [<ffffffff810149cd>] ? default_idle+0x4d/0xb0 [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110 [<ffffffff814e433a>] ? rest_init+0x7a/0x80 [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430 [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129 [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109 In cmdline: ro root=/dev/mapper/vg_01-lv_root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD SYSFONT=latarcyrheb-sun16 rd_LVM_LV=vg_01/lv_root crashkernel=129M@0M rhgb quiet rd_LVM_LV=vg_01/lv_swap rd_NO_DM rhgb quie Additional information: # uname -a Linux devvm 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/redhat-release CentOS release 6.3 (Final) VirtualBox version 4.2.6. Any insight in how we can proceed with troubleshooting this is appreciated. If you need more information, just let me know.
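
     One workaround often suggested for repeated e1000 "Detected Tx Unit Hang" resets in VirtualBox guests is to switch the virtual NIC away from the emulated e1000. A sketch to run on the Windows host with the VM powered off; "devvm" is assumed to be the VM name, and the CentOS 6 guest needs the virtio_net driver (which it ships with):
       VBoxManage modifyvm "devvm" --nictype1 virtio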


  • Accessing home directory hangs

    - by Jeff
    Occasionally my laptop will hang when trying to access my home directory. The only fix so far is to reboot and then it goes away for a week. /var/log/kern.log has the following error: Nov 21 13:54:39 Laptop1 kernel: [231480.428107] INFO: task ls:10104 blocked for more than 120 seconds. Nov 21 13:54:39 Laptop1 kernel: [231480.428114] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Nov 21 13:54:39 Laptop1 kernel: [231480.428120] ls D f5dbf6a0 0 10104 9964 0x00000004 Nov 21 13:54:39 Laptop1 kernel: [231480.428130] f3edbd40 00000086 00000001 f5dbf6a0 00000000 00000001 c175dfe0 c1868ec0 Nov 21 13:54:39 Laptop1 kernel: [231480.428145] c1868ec0 054eadaf 0000d250 f5005ec0 ee940cc0 f3e5b300 f3edbcf8 c10e8bfa Nov 21 13:54:39 Laptop1 kernel: [231480.428159] f3edbd10 f3edbd10 f3edbd10 c102b505 fffba7e8 089fa000 f3edbd38 c10fb528 Nov 21 13:54:39 Laptop1 kernel: [231480.428173] Call Trace: Nov 21 13:54:39 Laptop1 kernel: [231480.428189] [<c10e8bfa>] ? lru_cache_add_lru+0x2a/0x50 Nov 21 13:54:39 Laptop1 kernel: [231480.428199] [<c102b505>] ? __kunmap_atomic+0x75/0xa0 Nov 21 13:54:39 Laptop1 kernel: [231480.428207] [<c10fb528>] ? do_anonymous_page+0x1f8/0x280 Nov 21 13:54:39 Laptop1 kernel: [231480.428218] [<c152b656>] __mutex_lock_slowpath+0xc6/0x120 Nov 21 13:54:39 Laptop1 kernel: [231480.428225] [<c152b304>] mutex_lock+0x24/0x40 Nov 21 13:54:39 Laptop1 kernel: [231480.428246] [<f83ab87c>] cifs_reconnect_tcon+0x13c/0x2a0 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428255] [<c152fa00>] ? vmalloc_fault+0xee/0xee Nov 21 13:54:39 Laptop1 kernel: [231480.428262] [<c152fc2f>] ? do_page_fault+0x22f/0x4a0 Nov 21 13:54:39 Laptop1 kernel: [231480.428276] [<f83abe3c>] smb_init+0x2c/0x90 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428285] [<c11aa42e>] ? ext4_htree_store_dirent+0x2e/0x120 Nov 21 13:54:39 Laptop1 kernel: [231480.428301] [<f83b0941>] CIFSSMBQPathInfo+0x41/0x210 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428319] [<f83c39e4>] ? cifs_get_inode_info+0x224/0x390 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428336] [<f83c3a21>] cifs_get_inode_info+0x261/0x390 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428354] [<f83bb35d>] ? build_path_from_dentry+0xcd/0x250 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428362] [<c102b69e>] ? kmap_atomic_prot+0xde/0x100 Nov 21 13:54:39 Laptop1 kernel: [231480.428370] [<c152c4cd>] ? _raw_spin_lock+0xd/0x10 Nov 21 13:54:39 Laptop1 kernel: [231480.428388] [<f83c6378>] ? _GetXid+0x58/0x80 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428405] [<f83c4f81>] cifs_revalidate_dentry_attr+0x111/0x1a0 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428423] [<f83c50e2>] cifs_getattr+0x52/0x120 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428431] [<c112c5b2>] vfs_getattr+0x42/0x70 Nov 21 13:54:39 Laptop1 kernel: [231480.428448] [<f83c5090>] ? cifs_revalidate_dentry+0x40/0x40 [cifs] Nov 21 13:54:39 Laptop1 kernel: [231480.428455] [<c112c647>] vfs_fstatat+0x67/0x80 Nov 21 13:54:39 Laptop1 kernel: [231480.428461] [<c112c680>] vfs_lstat+0x20/0x30 Nov 21 13:54:39 Laptop1 kernel: [231480.428468] [<c112c946>] sys_lstat64+0x16/0x30 Nov 21 13:54:39 Laptop1 kernel: [231480.428475] [<c11341ed>] ? link_path_walk+0x79d/0x8a0 Nov 21 13:54:39 Laptop1 kernel: [231480.428483] [<c152c8e4>] syscall_call+0x7/0xb
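
     The trace points at a CIFS share (mounted somewhere under the home directory) hanging in cifs_reconnect_tcon. As a hedged mitigation sketch, remounting the share with the soft option makes stalled calls return an error instead of blocking indefinitely; the server, share, mount point and credentials file below are placeholders:
       sudo mount -t cifs //server/share /home/jeff/share \
            -o credentials=/home/jeff/.smbcredentials,soft,uid=jeff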


  • Using Oracle Linux iSCSI targets with Oracle VM

    - by wim.coekaerts
    A few days ago I had written a blog entry on how to use Oracle Solaris 10 (in my case), ZFS and the iSCSI target feature in Oracle Solaris to create a set of devices exported to my Oracle VM server. Oracle Linux can do this as well and I wanted to make sure I also tried out how to do this on Oracle Linux and here are the results. When you install Oracle Linux 5 update 5 (anything newer than update 3), it comes with an rpm called scsi-target-utils. To begin your quest, should you choose to accept it :) make sure this is installed. rpm -qa |grep scsi-target If it is not installed : up2date scsi-target-utils The target utils come with a tool tgtadm which is similar to iscsitadm on Oracle Solaris. There are 2 components again on the iSCSI server side. (1) create volumes - we will use lvm with lvcreate (2) expose a target using tgtadm. My server has a simple setup. All the disks are part of a single volume group called vgroot. To export a 50Gb volume I just create a new volume : lvcreate -L 50G -nmytest1 vgroot This will show up as a new volume in /dev/mapper as /dev/mapper/vgroot-mytest1. Create as many as you want for your environment. Since I already have my blog entry about the 5 volumes, I am not going to repeat the whole thing. You can just go look at the previous blog entry. Now that we have created the volume, we need to use tgtadm to set it up : make sure the service is running : /etc/init.d/tgtd start or service tgtd start (if you want to keep it running you can do chkconfig tgtd on to start it automatically at boottime) Next you need a targetname to set everything up. My recommendation would be to install iscsi-initiator-utils . This will create an iscsi id and put it in /etc/iscsi/initiatorname.iscsi. For convenience you can do : source /etc/iscsi/initiatorname.iscsi echo $InitiatorName and from here on use $InitiatorName instead of the long complex iqn. create your target : tgtadm --lld iscsi --op new --mode target --tid 1 -T $InitiatorName to show the status : tgtadm --lld iscsi --op show --mode target add the volume previously created : tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1 re-run status to see it's there : tgtadm --lld iscsi --op show --mode target and just like on Oracle Solaris you now have to export (bind) it : tgtadm --lld iscsi --op bind --mode target --tid 1 -I iqn.1986-03.com.sun:01:2a7526f0ffff If you want to export the lun to every iscsi initiator then replace the iqn with ALL. Of course you have to add the iqn of each iscsi initiator or client you want to connect. In the case of my 2 node Oracle VM server setup, both Oracle VM server's initiator names would have to be added. use status again to see that it has this iqn under ACL tgtadm --lld iscsi --op show --mode target You can drop the --lld iscsi if you want, or alias it. It just makes the command line more obvious as to what you are doing. Oracle VM side : Refer back to the previous blog entry for the detailed setup of my Oracle VM server volumes but the exact same commands will be used there. discover : iscsiadm --mode discovery --type sendtargets --portal login : iscsiadm --mode node --targetname iscsi targetname --portal --login get devices : /etc/init.d/iscsi restart and voila you should be in business. have fun.
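
     The same server-side sequence as above, collected into one sketch for convenience; the volume name, size and device path are the example values from the post, and ALL is used instead of a specific initiator IQN:
       lvcreate -L 50G -n mytest1 vgroot
       service tgtd start
       source /etc/iscsi/initiatorname.iscsi
       tgtadm --lld iscsi --op new  --mode target      --tid 1 -T $InitiatorName
       tgtadm --lld iscsi --op new  --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1
       tgtadm --lld iscsi --op bind --mode target      --tid 1 -I ALL
       tgtadm --lld iscsi --op show --mode target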


  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspxI volunteered myself to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven’t looked into using as a full server or development stack. I guess I’m too stuck on IIS and Visual Studio :-). Here are my notes, that aren’t very well formatted, but I wanted to share it anyways. What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Why should you be interested? another popular tool that can help you get the job done you can use the command prompt! can be run at build or release time to automate tasks What are some uses? https://www.npmjs.org/ - NuGet for Node packages http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc) http://yeoman.io/ "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want alternative - http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/ https://www.npmjs.org/package/generator-cg-angular - phantom js, less, // git is needed for bower http://git-scm.com/ run installer in Windows before you can use bower // select Run Git from the Windows Command Prompt in the installer // requires a reboot http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error npm install -g git npm install -g yo npm install -g generator-cg-angular mkdir myapp cd myapp yo cg-angular npm install -g bower npm install -g grunt-cli yo bower grunt serve grunt test grunt build // there are many generators (generator-angular) is another one // I like the Nuget HotTowel-Angular from John Papa myself // needed IIS Node for Express -> prompt from WebMatrix Karma bat to startup Karma - see below image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line LESS compiling js and css combine and minification at build with Gulp for requireJS apps quick lightweight HTTP server - "Express" Build pipeline with Grunt or Gulp http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ Gulp is the newer and improved over Grunt. Supposed to be easier to use, but Grunt is more established. https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp https://github.com/assetgraph/assetgraph-builder Does a lot of the minimizing, combining, image optimization etc using Node. Looks interesting.... http://nodejs.org http://nodeschool.io/ http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/ https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/ https://codio.com/ http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx run unit tests - Karma in msBuild karma-start.bat @echo off cd %~dp0\.. 
REM 604800 is to make sure we only update once every 7 days call npm install --cache-min 604800 -g grunt-cli call npm install --cache-min 604800 call npm install --cache-min 604800 -g karma-cli karma start UnitTests\karma.conf.js REM karma start UnitTests\karma.conf.js --single-run REM see karma-start.bat and karam.config.js REM jsHint comes from Nuget
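
     The scaffolding commands mentioned in the notes, pulled out into one runnable sequence (this assumes Node.js and git are already installed; the app name myapp is the example from the notes):
       npm install -g yo generator-cg-angular bower grunt-cli
       mkdir myapp && cd myapp
       yo cg-angular        # answer the generator prompts
       grunt serve          # run the app locally
       grunt test           # run the Karma tests
       grunt build          # produce the optimized build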


  • Running a WebLogic Portal (WLP) 10.3.4 Domain as a Windows Service

    - by user647124
    To start a WLP server as a Windows service it is simplest to make your own script based on the provided standard script located at WL_HOME\server\bin\installSvc.cmd. The standard script works fine for a plain WLS domain, but lacks some classpath and options necessary for WLP.Start by making a copy of the installSvc.cmd script and naming it something specific to your domain.Next, just under SETLOCAL you will find where WL_HOME is defined. Here you will add the definitions you would normally add in a script that later calls installSvc.cmd (as per the standard documentation). set DOMAIN_NAME=gnma_test_domainset USERDOMAIN_HOME=D:\my_test_domainset SERVER_NAME=AdminServerset WLS_USER=weblogicset WLS_PW=gnmaAdmin01set PRODUCTION_MODE=trueset MEM_ARGS=-Xms512m –Xmx512mset MW_HOME=C:\Oracle\Middleware Note: I had heard of people using this approach who had issues with the length of the command line. This may be due to their use of the default domain path. In the example above, I use a shorter path.At this point, edit the DOMAIN_HOME\bin\startWebLogic.cmd and set it to echo both the classpath and the options. Then start the domain and capture the output of those echoes, then shut the domain back down. Now REM out the existing CLASSPATH definition, then use the outputs you captured earlier to set the CLASSPATH and JAVA_OPTIONS like this: REM set CLASSPATH=%WEBLOGIC_CLASSPATH%;%CLASSPATH%; C:\Oracle\Middleware\wlportal_10.3\portal\lib\security\wsrp-security-providers.jarset CLASSPATH=%MW_HOME%\patch_wls1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_wlp1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_oepe1111\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_ocm1033\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\JROCKI~1.1-3\lib\tools.jar;%WL_HOME%\server\lib\weblogic_sp.jar;%WL_HOME%\server\lib\weblogic.jar;%MW_HOME%\modules\features\weblogic.server.modules_10.3.4.0.jar;%WL_HOME%\server\lib\webservices.jar;%MW_HOME%\modules\ORGAPA~1.1/lib/ant-all.jar;%MW_HOME%\modules\NETSFA~1.0_1/lib/ant-contrib.jar;%WL_HOME%\common\derby\lib\derbyclient.jar;%WL_HOME%\server\lib\xqrl.jar;%WL_HOME%\server\lib\xquery.jar;%WL_HOME%\server\lib\binxml.jarset JAVA_OPTIONS= -Xverify:none -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dplatform.home=%WL_HOME% -Dwls.home=%WL_HOME%\server -Dweblogic.home=%WL_HOME%\server -Dweblogic.wsee.bind.suppressDeployErrorMessage=true -Dweblogic.wsee.skip.async.response=true -Dweblogic.management.discover=true -Dwlw.iterativeDev=true -Dwlw.testConsole=true -Dwlw.logErrorsToConsole=true -Dweblogic.ext.dirs=%MW_HOME%\patch_wls1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_wlp1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_oepe1111\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_ocm1033\profiles\default\sysext_manifest_classpath;%MW_HOME%\wlportal_10.3\p13n\lib\system;%MW_HOME%\wlportal_10.3\light-portal\lib\system;%MW_HOME%\wlportal_10.3\portal\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\lib\system;%MW_HOME%\wlportal_10.3\analytics\lib\system;%MW_HOME%\wlportal_10.3\apps\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\deprecated\lib\system;%MW_HOME%\wlportal_10.3\content-mgmt\lib\system -Dweblogic.alternateTypesDirectory=%MW_HOME%\wlportal_10.3\portal\lib\securityAnd that's it. 
     It looks really simple, but it took me quite some time to gather all the necessary pieces to make it work. Hopefully you found this before going through half as much research yourself. The example here uses a domain with only the Admin server and no managed servers; for a variety of reasons I only want the Admin server to run as a service. The standard documentation, along with the example above, should let you expand this to include managed servers should you feel the need.


  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing; the output below shows the two cases. rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor is that path accessible, or required to be, inside the zone), whereas rpool/export/home/bob has keysource set locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local

    Read the article

  • How do I configure WakeOnUSB properly?

    - by wishi
    How do I configure Wake-On-USB properly on a 10.04 or 10.10 Ubuntu (2.6.36 and higher if needed)? (Wake-on-USB is when the computer is asleep and for example a USB Keyboard event wakes up the machine!) The notebook is an Acer Aspire Timeline X 1830T. I don't know in which way the Linux Kernel supports the controllers. There are different ways to approach this, for example /proc/acpi/wakeup... or UDEV... or something with HAL? /proc/acpi/wakeup shows every device in S4, but I need S3. Device S-state Status Sysfs node P0P2 S4 *disabled PEGP S4 *disabled P0P1 S0 *disabled pci:0000:00:1e.0 EHC1 S4 *disabled pci:0000:00:1d.0 USB1 S4 *enabled USB2 S4 *disabled USB3 S4 *disabled USB4 S4 *disabled EHC2 S4 *disabled pci:0000:00:1a.0 USB5 S4 *disabled USB6 S4 *disabled USB7 S4 *disabled HDEF S0 *disabled pci:0000:00:1b.0 RP01 S5 *disabled pci:0000:00:1c.0 PXSX S5 *disabled pci:0000:01:00.0 RP02 S0 *disabled pci:0000:00:1c.1 PXSX S5 *disabled pci:0000:02:00.0 RP03 S0 *disabled PXSX S5 *disabled RP04 S0 *disabled PXSX S5 *disabled RP05 S0 *disabled PXSX S5 *disabled RP07 S0 *disabled PXSX S5 *disabled RP08 S0 *disabled PXSX S5 *disabled GLAN S0 *disabled PEG3 S4 *disabled PEG5 S4 *disabled PEG6 S4 *disabled SLPB S3 *enabled S4, which is Suspend-To-Disk afaik... doesn't seem to work either if I echo USB1 into the wakeup table. It just sets an S4 flag. can I get the USB ports in S3? I want to make the machine wakeup from Suspend-To-Ram (S3, ACPI standard) in case a key on my external keyboard is pressed. It only wakes up if a key on the internal Laptop keyboard is pressed... from Suspend To Ram. It seems if I plug in a USB mouse, that the USB port isn't even powered. I have no BIOS option to change this. Further specific information regarding the device: usb-devices T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 13 Spd=1.5 MxCh= 0 D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=04d9 ProdID=1603 Rev=03.10 S: Manufacturer= S: Product=USB Keyboard C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=01 Driver=usbhid I: If#= 1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid root@underwater-laptop:/# lsusb [...] Bus 001 Device 013: ID 04d9:1603 Holtek Semiconductor, Inc. Bus 001 Device 004: ID 0bda:0138 Realtek Semiconductor Corp. Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub [...] If this doesn't work I have to properly explain why :( - but I think it is very hard to research this kernel internal. Any hints for good information here? I hope it's possible... I'm just looking for any solution. edit: this, waking up on USB, works on Windows! Thanks a lot, Marius

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS. Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back, so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters. Firstly, the web service architecture: We currently have two operations: GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before their items. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc). Does the type of web API (operations-style versus resource-style) determine chunky vs. chatty? Since this is mobile (3G), are we better off handling lots of smaller messages, or a few large ones? Secondly, the iOS side. What is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice Xcode libraries out there to do such things. I welcome any advice you might have.

    Read the article

  • Refactoring existing PHP Project. I need some advices

    - by b0x
I have a small SAS ERP that was written some years ago using PHP. At that time it didn't use any framework, but the code isn't a mess either, as I will explain in more detail below. Nowadays the project has grown and I'm now working with 3 more programmers. Often they ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I'm a small business and I don't have the time/money to stop and spend a whole year building everything from scratch. I need to live and pay the bills. So I've read a lot about this matter, and I decided that doing a refactoring is the best way to go. Also, I'm not so sure that a framework will make things easier. Business goals are: make the code easier for newly hired programmers; separate the "view", because I want to release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view), release different versions to fit mobile/tablet, make different types of this product by selling packages as if they were plugins, and develop custom packages for some customers (like plugins/addons that they can buy to put on the main application). Code goals: introduce best practices and standards for everyone; try to build my own MVC structure; improve validation of data/forms (today they are mixed in both ajax and classes); create automated testing routines for quality assurance. My current project structure: class\ extra\ hd\ logs\ public_html\ public_html\includes\ public_html\css|js|images\ class\ There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces. 1. class.Something.php Connects to the database using specific methods, e.g. Costumer->list(); It uses "class.Db.php", which is an abstraction of mysqli in every method. 2. class.SomethingProc.php Does things that "join" things that come from "class.Something.php", like IF/ELSE and math operations. 3. class.SomethingHTML.php The classes with the "HTML" suffix implement only static methods and HTML code. A real life example: all the programmers need to use $cSomething ($c for class) and $arrSomething (for array). Costumer.php (view) <?php $cCostumer = new Costumer(); $arrCostumer = $cCostumer->list(); echo CostumerHTML::table($arrCostumer); ?> Extra\ Stores 3rd-party projects/classes from others, such as MPDF, PHPMailer, etc. Hd\ Stores users' files outside the wwwroot dir. Logs\ Stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class). Public_html\ Stores the files that people use. Public_html\includes\ Stores the main "config.php" file and all files that do "ajax things" (ajax.Costumer.php, for example). Help is needed ;) So, as you can see, we have some standards, also for database things. But I want to write a manual of our rules, something that I can give to any new programmer at my company so he can get going. This is not a total mess, but it could be better given the newer practices. What could I do to separate this as MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc. would be nice. Thanks!
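
As a side note on the autoloading rule mentioned above, here is a minimal sketch of how the class.Something.php convention could be registered with spl_autoload_register(); the CLASS_DIR constant and the exact file layout are assumptions for illustration, not the project's actual loader.
<?php
// Hypothetical autoloader sketch for the "class.Something.php" naming convention.
// CLASS_DIR is an assumed constant pointing at the class\ directory.
define('CLASS_DIR', __DIR__ . '/class/');

spl_autoload_register(function ($className) {
    // Maps e.g. "CostumerHTML" to "class/class.CostumerHTML.php"
    $file = CLASS_DIR . 'class.' . $className . '.php';
    if (is_file($file)) {
        require_once $file;
    }
});

// Usage: any of the three class types can then be instantiated directly,
// e.g. $cCostumer = new Costumer();
Registering the loader through spl_autoload_register() rather than a single __autoload() function also leaves room to add a namespaced (PSR-0/PSR-4) loader later without touching the existing classes.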

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
I have a small SAS ERP that was written some years ago using PHP. At that time, it didn't use any framework, but the code isn't a mess. Nowadays, the project has grown and I'm now working with 3 more programmers. Often, they ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I'm a small business and I don't have the time nor the money to stop and spend a whole year building everything from scratch. I need to live and pay the bills. So, I've read a lot about this matter, and I decided that doing a refactoring is the best way forward. Also, I'm not so sure that a framework will make things easier. Business goals are: make the code easier for newly hired programmers; separate the "view", in order to release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view) and release different versions to fit mobile/tablet; make different types of this product, selling packages as if they were plugins; develop custom packages for some customers (like plugins/addons that they can buy to put on the main application). Code goals: introduce best practices and standards for everyone; try to build my own MVC structure; improve validation of data/forms (today they are mixed in both ajax and classes); create automated testing routines for quality assurance. My current project structure: class\ extra\ hd\ logs\ public_html\ public_html\includes\ public_html\css|js|images\ class\ There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces. 1. class.Something.php Connects to the database using specific methods, e.g. Costumer->list(); It uses "class.Db.php", which is an abstraction of mysql in every method. 2. class.SomethingProc.php Does things that "join" things that come from "class.Something.php", like IF/ELSE and math operations. 3. class.SomethingHTML.php The classes with the "HTML" suffix implement only static methods and HTML code. A real life example: all the programmers need to use $cSomething ($c for class) and $arrSomething (for array). Costumer.php (view) <?php $cCostumer = new Costumer(); $arrCostumer = $cCostumer->list(); echo CostumerHTML::table($arrCostumer); ?> Extra\ Stores 3rd-party projects/classes from others, such as MPDF, PHPMailer, etc. Hd\ Stores users' files outside the wwwroot dir. Logs\ Stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class). Public_html\ Stores the files that people use. Public_html\includes\ Stores the main "config.php" file and all files that do "ajax things" (ajax.Costumer.php, for example). Help is needed ;) So, as you can see, we have some standards, also for database things. But I want to write a manual of our rules, something that I can give to any new programmer at my company so he can get going. This is not a total mess, but it could be better given the newer practices. What could I do to separate this as MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc, would be nice.
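
To make the "separate the view" goal concrete, below is a hedged sketch of a tiny renderer that picks a template directory per brand; the Template class, the templates/ layout and the brand names are hypothetical, and it only reuses the Costumer/CostumerHTML idea from the question to show the shape of the approach.
<?php
// Minimal view-separation sketch: each brand/device gets its own template folder,
// while the data classes (class.Something.php) stay shared.
class Template
{
    private $viewPath;

    public function __construct($brand = 'default')
    {
        // Assumed layout: templates/<brand>/<view>.php
        $this->viewPath = __DIR__ . '/templates/' . $brand . '/';
    }

    public function render($view, array $data = array())
    {
        extract($data);          // exposes e.g. $arrCostumer to the view file
        ob_start();
        include $this->viewPath . $view . '.php';
        return ob_get_clean();
    }
}

// Usage sketch: the same controller code can serve two brands.
$cCostumer   = new Costumer();          // existing data class from the question
$arrCostumer = $cCostumer->list();
$tpl = new Template('brand-a');         // or 'brand-b', 'mobile', ...
echo $tpl->render('costumer_list', array('arrCostumer' => $arrCostumer));
Resolving the brand string from the request's host name would then be the only change needed to ship the same code under a different site.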

    Read the article

  • Single use download script - Modification [on hold]

    - by Iulius
I have this Single use download script! It contains 3 PHP files: page.php, generate.php and variables.php. Page.php Code: <?php include("variables.php"); $key = trim($_SERVER['QUERY_STRING']); $keys = file('keys/keys'); $match = false; foreach($keys as &$one) { if(rtrim($one)==$key) { $match = true; $one = ''; } } file_put_contents('keys/keys',$keys); if($match !== false) { $contenttype = CONTENT_TYPE; $filename = SUGGESTED_FILENAME; readfile(PROTECTED_DOWNLOAD); exit; } else { ?> <html> <head> <meta http-equiv="refresh" content="1; url=http://docs.google.com/"> <title>Loading, please wait ...</title> </head> <body> Loading, please wait ... </body> </html> <?php } ?> Generate.php Code: <?php include("variables.php"); $password = trim($_SERVER['QUERY_STRING']); if($password == ADMIN_PASSWORD) { $new = uniqid('key',TRUE); if(!is_dir('keys')) { mkdir('keys'); $file = fopen('keys/.htaccess','w'); fwrite($file,"Order allow,deny\nDeny from all"); fclose($file); } $file = fopen('keys/keys','a'); fwrite($file,"{$new}\n"); fclose($file); ?> <html> <head> <title>Page created</title> <style> nl { font-family: monospace } </style> </head> <body> <h1>Page key created</h1> Your new single-use page link:<br> <nl> <?php echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH . "?" . $new; ?></nl> </body> </html> <?php } else { header("HTTP/1.0 404 Not Found"); } ?> And the last one Variables.php Code: <? define('PROTECTED_DOWNLOAD','download.php'); define('DOWNLOAD_PATH','/.work/page.php'); define('SUGGESTED_FILENAME','download-doc.php'); define('ADMIN_PASSWORD','1234'); define('EXPIRATION_DATE', '+36 hours'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: ".date('U', strtotime(EXPIRATION_DATE))); ?> Calling http://www.site.com/generate.php?1234 will generate a unique link like page.php?key1234567890. This link page.php?key1234567890 will be sent by email to my user. Now how can I generate a link like page.php?key1234567890&[email protected] instead? So I think I must access the generator page like generate.php?1234&[email protected]. P.S. This email variable will be printed on the download page after "Hello, ". I tried everything to get this working, with no luck. Thanks in advance for any help.
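
One possible (untested) way to get the email into the link, sketched against the posted code: switch generate.php to named query parameters and append the email to the generated URL. The parameter names pass, key and email are assumptions, not part of the original script.
<?php
// Sketch of generate.php reworked to accept ?pass=1234&[email protected]
include("variables.php");

$password = isset($_GET['pass'])  ? $_GET['pass']        : '';
$email    = isset($_GET['email']) ? trim($_GET['email']) : '';

if ($password == ADMIN_PASSWORD && filter_var($email, FILTER_VALIDATE_EMAIL)) {
    $new = uniqid('key', TRUE);
    if (!is_dir('keys')) {
        mkdir('keys');
        file_put_contents('keys/.htaccess', "Order allow,deny\nDeny from all");
    }
    file_put_contents('keys/keys', "{$new}\n", FILE_APPEND);

    // The single-use link now carries the email as a second parameter.
    echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH
       . "?key=" . $new . "&email=" . urlencode($email);
} else {
    header("HTTP/1.0 404 Not Found");
}
page.php would then compare $_GET['key'] (rather than the whole query string) against the stored keys, and could greet the visitor with echo 'Hello, ' . htmlspecialchars($_GET['email']); on the download page.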

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
Recently, for a customer demonstration, there was a requirement to build a VirtualBox image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations, the virtual box grew to around 80Gb on the host operating system, although internally it only really needed around 20Gb. This meant 60Gb had been consumed while copying over all the binaries and, although now free inside the guest, was not returned to the host operating system due to the growth of the VirtualBox storage '.vdi' file. Once the '.vdi' storage has grown it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size; here is the process that was followed. Install the 'zerofree' Linux package into the OEL6 virtual box. The RPM was downloaded and installed from a site similar to the one below: http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html A simple internet search for 'zerofree Linux rpm' quickly finds the required RPM. Execute the 'zerofree' package on the desired Linux file system. To execute this package the desired file system needs to be mounted read only. The following steps outline this process. As root: # umount /u01 As root: # mount -o ro -t ext4 /u01 NOTE: The -o is options and the -t is the file system type found in /etc/fstab. Next run zerofree against the required storage; this is located by a simple 'df -h' command to see the device associated with the mount. As root: # zerofree -v /dev/sda11 NOTE: This takes a while to run, but the '-v' option gives feedback on the process. What does zerofree do? Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume so that the later stages can shrink the VirtualBox storage, obtaining the free space back. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the VirtualBox vdi files are located. Compact the VirtualBox '.vdi' files. The final stage is to get VirtualBox to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt the first step is to put the VirtualBox binaries onto the PATH. C:\ >echo %PATH% The above shows the current value of the PATH environment variable. C:\ >set PATH=%PATH%;c:\program files\Oracle\Virtual Box; The above adds the VirtualBox binary location onto the existing path. C:\>cd c:\Users\xxxx\OEL6.1 The above changes directory to where the VDI files are located for the required virtual box machine. C:\Users\xxxxx\OEL6.1>VBoxManage.exe modifyhd zzzzzz.vdi compact NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink. Finally the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete but shrinks the VDI file back to a minimum size. 
In the case of the demonstration virtual box OEM12c this reduced the virtual box to 20Gb from 80Gb which was a great outcome to achieve.

    Read the article

  • PHP, javascript, single quote problems with IE when passing variable from ajax post to javascript fu

    - by Mattis
Hi! I have been trying to get this to work for a while, and I suspect there's an easy solution that I just can't find. My head feels like jelly and I would really appreciate any help. My main page.php makes a .post() to backend.php and fetches a list of cities, which it echoes in the form of: <li onclick="script('$data');">$data</li> The list is fetched and put onto the page via .html(). My problem occurs when $data has a single quote in it. backend.php passes the variable just fine to page.php, but when I run .html() it throws a javascript error (in IE, not FF obviously): ')' is expected IE parses the single quote and messes up the script() call. I've been trying to rebuild the echoed string in different ways, escaping the quotes on the PHP side and/or on the javascript side, but in vain. Do I have to rework the script entirely? page.php $.post("backend.php", {q: ""+str+""}, function(data) { if(data.length >0) { $('#results').html(data); } backend.php while ($row = $q->fetch()) { $city = $row['City']; // $city = addslashes($row['City']); // $city = str_replace("'","&#39;",$row['City']); echo "<li onclick=\"script('$city');\">".$city."</li>"; }
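
For what it's worth, here is a hedged sketch of how backend.php could emit the list items so a single quote in $city cannot break the inline handler: json_encode() produces a valid JavaScript string literal and htmlspecialchars() protects the HTML attribute around it. This is only an illustration against the code above, not a verified fix.
<?php
// Sketch: safely embed a city name (possibly containing ') in an onclick handler.
while ($row = $q->fetch()) {
    $city = $row['City'];

    // json_encode() wraps the value in double quotes and escapes anything
    // that would break a JavaScript string literal.
    $js = 'script(' . json_encode($city) . ');';

    // ENT_QUOTES converts both " and ' so the attribute value stays intact.
    echo '<li onclick="' . htmlspecialchars($js, ENT_QUOTES) . '">'
       . htmlspecialchars($city, ENT_QUOTES)
       . '</li>';
}
An alternative worth considering is putting the raw value in a data- attribute and binding a click handler with jQuery, which avoids quoting inside inline attributes altogether.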

    Read the article

  • Running MSBuild fails to read SDKToolsPath

    - by Scott Mayfield
Howdy, I'm having a bit of an issue running a NAnt script that used to properly build my .NET 2.0 based website when compiling with VS2008 and its associated tools. I've recently upgraded all the project/solution files to VS2010, and now my build fails with the following error: [exec] C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(2249,9): error MSB3086: Task could not find "sgen.exe" using the SdkToolsPath "" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed Now, I DO have prior versions (.NET 3.5) of the Windows SDK installed on the build server, and the full .NET 4.0 framework is installed, but I've not run across a .NET 4.0 specific version of the Windows SDK. After a bit of experimentation and research, I finally just set up a new environment variable "SDKToolsPath" and pointed it to the copy of sgen.exe in my Windows 6.0 SDK folder. This generated the same error, but it got me to notice that even though the SDKToolsPath environment variable IS set (confirmed that I can "echo" it at the command line and it has the expected value), the error message seems to indicate that it's not being read (note the empty quotes). Most of the information I've found is .NET 3.5 (or earlier) specific. Not much 4.0 related out there yet. Searching for error code MSB3086 generated nothing useful either. Any idea what this might be? Scott

    Read the article

  • Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variabl

    - by Thomas
Hi, I'm working with PHP PDO and I have the following problem: Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens in /var/www/site/classes/enterprise.php on line 63 Here is my code: public function getCompaniesByCity(City $city, $options = null) { $database = Connection::getConnection(); if(empty($options)) { $statement = $database->prepare("SELECT * FROM empresas WHERE empresas.cidades_codigo = ?"); $statement->bindValue(1, $city->getId()); } else { $sql = "SELECT * FROM empresas INNER JOIN prods_empresas ON prods_empresas.empresas_codigo = empresas.codigo WHERE "; foreach($options as $option) { $sql .= 'prods_empresas.produtos_codigo = ? OR '; } $sql = substr($sql, 0, -4); $sql .= ' AND empresas.cidades_codigo = ?'; $statement = $database->prepare($sql); echo $sql; foreach($options as $i => $option) { $statement->bindValue($i + 1, $option->getId()); } $statement->bindValue(count($options), $city->getId()); } $statement->execute(); $objects = $statement->fetchAll(PDO::FETCH_OBJ); $companies = array(); if(!empty($objects)) { foreach($objects as $object) { $data = array( 'id' => $object->codigo, 'name' => $object->nome, 'link' => $object->link, 'email' => $object->email, 'details' => $object->detalhes, 'logo' => $object->logo ); $enterprise = new Enterprise($data); array_push($companies, $enterprise); } return $companies; } } Thank you very much!
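
A hedged reading of the code above: when $options is not empty the loop binds positions 1 through count($options), and the city is then bound to position count($options) again, overwriting the last option and leaving the final ? (for cidades_codigo) unbound, which matches the HY093 warning. Below is a sketch of the corrected else branch, assuming $options is a zero-based sequential array.
<?php
// Sketch of the else branch with the placeholder indices lined up.
$sql = "SELECT * FROM empresas INNER JOIN prods_empresas ON prods_empresas.empresas_codigo = empresas.codigo WHERE ";
foreach ($options as $option) {
    $sql .= 'prods_empresas.produtos_codigo = ? OR ';
}
$sql  = substr($sql, 0, -4);                 // drop the trailing ' OR '
$sql .= ' AND empresas.cidades_codigo = ?';

$statement = $database->prepare($sql);
foreach ($options as $i => $option) {
    $statement->bindValue($i + 1, $option->getId());     // positions 1..N
}
// The city belongs in the last placeholder: position N + 1, not N.
$statement->bindValue(count($options) + 1, $city->getId());
$statement->execute();
Depending on the intended logic, the OR list may also need parentheses around it so that the AND on cidades_codigo applies to the whole set of product conditions.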

    Read the article

  • sending email with gmail smtp with codeigniter email library

    - by bohemian
class Email extends Controller { function Email() { parent::Controller(); $this->load->library('email'); } function index() { $config['protocol'] = 'smtp'; $config['smtp_host'] = 'ssl://smtp.gmail.com'; $config['smtp_port'] = '465'; $config['smtp_timeout'] = '7'; $config['smtp_user'] = '[email protected]'; $config['smtp_pass'] = '*******'; $config['charset'] = 'utf-8'; $config['newline'] = "\r\n"; $config['mailtype'] = 'text'; // or html $config['validation'] = TRUE; // bool whether to validate email or not $this->email->initialize($config); $this->email->from('[email protected]', 'myname'); $this->email->to('[email protected]'); $this->email->subject('Email Test'); $this->email->message('Testing the email class.'); $this->email->send(); echo $this->email->print_debugger(); $this->load->view('email_view'); } } I am getting this error: A PHP Error was encountered Severity: Warning Message: fsockopen() [function.fsockopen]: unable to connect to ssl://smtp.gmail.com:465 (Connection timed out) Filename: libraries/Email.php Line Number: 1641 Using port 25/587 I got this error: "A PHP Error was encountered Severity: Warning Message: fsockopen() [function.fsockopen]: SSL operation failed with code 1. OpenSSL Error messages: error:140770FC:SSL routines:func(119):reason(252) Filename: libraries/Email.php Line Number: 1641" I don't want to use PHPMailer now. (Actually, I have tried to use PHPMailer, but I failed.) How do I solve this problem, guys?
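
Since fsockopen() to ssl:// is what is failing, a hedged first step is to confirm the transport itself before touching the CodeIgniter config: check that PHP's OpenSSL extension is loaded and that outbound port 465 is reachable. The snippet below is a generic diagnostic, not CodeIgniter-specific.
<?php
// Quick diagnostic: can this PHP install open an SSL socket to Gmail at all?
var_dump(extension_loaded('openssl'));               // must be true for ssl:// streams
var_dump(in_array('ssl', stream_get_transports()));  // the ssl transport must be registered

$errno  = 0;
$errstr = '';
$fp = @fsockopen('ssl://smtp.gmail.com', 465, $errno, $errstr, 10);
if ($fp === false) {
    echo "Connection failed: [$errno] $errstr\n";     // firewall, missing OpenSSL, or DNS issue
} else {
    echo fgets($fp, 512);                             // expect a "220 smtp.gmail.com ESMTP ..." banner
    fclose($fp);
}
If the extension is missing, enabling extension=php_openssl.dll (on Windows) or installing an OpenSSL-enabled PHP build and restarting the web server is the usual first fix; only after the raw socket works is it worth tuning the Email library settings.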

    Read the article

  • Title Background Wrapping

    - by laxj11
Ok, I am pretty experienced with CSS, but at this point I am at a loss. I laid out how I want the title to look in Photoshop; however, the closest I can get to it with CSS still falls short. I need the black background to extend to the edge of the image, plus padding on the right side of the title. I hope you understand my question! Thanks. Here is the HTML: <div class="glossary_image"> <img src="<?php echo $custom_fields['image'][0]; ?>" /> <div class="title"> <h2><?php the_title(); ?></h2> </div> </div> and the CSS: .glossary_image { position: relative; height: 300px; width: 300px; margin-top:10px; margin-bottom: 10px; } .glossary_image .title { position: absolute; bottom: 0; left: 0; padding: 10px; } .glossary_image .title h2 { display: inline; font-family:Arial, Helvetica, sans-serif; font-size: 30px; font-weight:bold; line-height: 35px; color: #fff; text-transform:uppercase; background: #000; }

    Read the article

  • curl POST to RESTful services

    - by Sashikiran Challa
Hello all, there are a lot of questions on Stack Overflow about curl, but I could not figure out what it is that I am doing wrong. I am trying to call a RESTful service that I wrote using the Jersey API; I am trying to POST an XML string to it and I get an HTTP 415 error, which is an unsupported media type error. Here is my shell script call to the 1st service: abc=curl http://gf...:8080/InChItoD/inchi/3dstructure?InChIstring=$inchi echo $abc (this works fine; the output that it returns is given below). Posting this XML string to the second service: def= curl -d $abc -H "Content-Type:text/xml" http://gf...:8080/XML2G/xml3d/gssinput I get the following error: ... ... HTTP Status 415 Status report message description.The server refused this request because the request entity is in a format not supported by the requested resource for the requested method ().Apache Tomcat/6.0.26 This is a sample of the XML string I am trying to POST: <?xml version="1.0"?><molecule xmlns="http://www.xml-cml.org/schema"> <atomArray> <atom id="a1" elementType="N" formalCharge="1" x3="0.997963" y3="-0.002882" z3="-0.004222"/> <atom id="a2" elementType="H" x3="2.024650" y3="-0.002674" z3="0.004172"/> <atom id="a3" elementType="H" x3="0.655444" y3="0.964985" z3="0.004172"/> <atom id="a4" elementType="H" x3="0.649003" y3="-0.496650" z3="0.825505"/> <atom id="a5" elementType="H" x3="0.662767" y3="-0.477173" z3="-0.850949"/> </atomArray> <bondArray> <bond atomRefs2="a1 a2" order="1"/> <bond atomRefs2="a1 a3" order="1"/> <bond atomRefs2="a1 a4" order="1"/> <bond atomRefs2="a1 a5" order="1"/> </bondArray></molecule> Thanks in advance

    Read the article

< Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >