Search Results

Search found 2470 results on 99 pages for 'tx ny ca'.


  • OpenVPN on Tomato and Vista - can't see my network

    - by Ian
    I followed the instructions here (http://todayguesswhat.blogspot.ca/2011/03/quick-simple-vpn-setup-guide-using.html) to set up a TCP connection to OpenVPN on my Tomato router. Used TCP because the place I usually surf at seems to have the other ports blocked. My Vista laptop is able to connect to the router but I don't appear to be getting an IP address. I'm able to access my router's admin page, but I can't see the network at home. When I browse to Whatsmyip I see my home IP. Here are the results of route print -4 when I'm just connect to the library and when I've fired up the VP connection as well: Library only: =========================================================================== Interface List 22 ...00 ff c4 a0 e7 5c ...... TAP-Win32 Adapter V9 15 ...00 23 4e 20 b3 64 ...... Atheros AR9281 Wireless Network Adapter 10 ...00 23 8b 39 ec 71 ...... Marvell Yukon 88E8040T PCI-E Fast Ethernet Controller 1 ........................... Software Loopback Interface 1 11 ...00 00 00 00 00 00 00 e0 isatap.{834A8A0A-5E2C-47D0-9673-7965DE8B5470} 14 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface 17 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 20 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 18 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 19 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 23 ...00 00 00 00 00 00 00 e0 isatap.{C4A0E75C-765E-4F7D-A55C-77945779816A} 34 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.29.1 10.1.29.117 25 10.1.29.0 255.255.255.0 On-link 10.1.29.117 281 10.1.29.117 255.255.255.255 On-link 10.1.29.117 281 10.1.29.255 255.255.255.255 On-link 10.1.29.117 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.29.117 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.29.117 281 =========================================================================== Library and TCP OpenVPN: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.29.1 10.1.29.117 25 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.116 30 0.0.0.0 128.0.0.0 192.168.1.1 192.168.1.116 30 10.1.29.0 255.255.255.0 On-link 10.1.29.117 281 10.1.29.117 255.255.255.255 On-link 10.1.29.117 281 10.1.29.255 255.255.255.255 On-link 10.1.29.117 281 24.212.205.68 255.255.255.255 10.1.29.1 10.1.29.117 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 192.168.1.1 192.168.1.116 30 192.168.1.0 255.255.255.0 On-link 192.168.1.116 286 192.168.1.116 255.255.255.255 On-link 192.168.1.116 286 192.168.1.255 255.255.255.255 On-link 192.168.1.116 286 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.116 286 224.0.0.0 240.0.0.0 On-link 10.1.29.117 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.116 286 255.255.255.255 255.255.255.255 On-link 10.1.29.117 281 =========================================================================== 
    Thanks for any advice. I looked at one of the existing answers, but I'm not sure it applies to me: it said that the 10...* addresses were the VPN connection, yet I appear to get a 10...* address when I connect just to the library.
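
    For comparison, a minimal Tomato-style OpenVPN server block for a bridged (TAP) setup over TCP is sketched below. The port, interface name and address pool are assumptions, not values taken from the router in question; the point is that with server-bridge the client pool must sit inside the home LAN subnet but outside the router's DHCP range, and client-to-LAN traffic must be allowed through the router firewall.

        # sketch of a bridged (TAP) OpenVPN server over TCP -- all values are placeholders
        proto tcp-server
        port 443                        # a TCP port the library network lets through
        dev tap21                       # Tomato typically bridges its VPN tap interface into br0
        server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220
                                        # LAN gateway, netmask, and a client pool outside DHCP
        push "dhcp-option DNS 192.168.1.1"   # hand clients the home router for name lookups
        keepalive 10 120
        comp-lzo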

    Read the article

  • Oracle Virtual Server OEL vm fails to start - kernel panic on cpu identify

    - by Towndrunk
    I am in the process of following a guide to setup various oracle vm templates, so far I have installed OVS 2. 2 and got the OVM Manager working, imported the template for OEL5U5 and created a vm from it.. the problem comes when starting that vm. The log in the OVMM console shows the following; Update VM Status - Running Configure CPU Cap Set CPU Cap: failed:<Exception: failed:<Exception: ['xm', 'sched-credit', '-d', '32_EM11g_OVM', '-c', '0'] => Error: Domain '32_EM11g_OVM' does not exist. StackTrace: File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2531, in xen_set_cpu_cap run_cmd(args=['xm', File "/opt/ovs-agent-2.3/OVSCommons.py", line 92, in run_cmd raise Exception('%s => %s' % (args, err)) The xend.log shows; [2012-11-12 16:42:01 7581] DEBUG (DevController:139) Waiting for devices vtpm [2012-11-12 16:42:01 7581] INFO (XendDomain:1180) Domain 32_EM11g_OVM (3) unpaused. [2012-11-12 16:42:03 7581] WARNING (XendDomainInfo:1907) Domain has crashed: name=32_EM11g_OVM id=3. [2012-11-12 16:42:03 7581] ERROR (XendDomainInfo:2041) VM 32_EM11g_OVM restarting too fast (Elapsed time: 11.377262 seconds). Refusing to restart to avoid loops .> [2012-11-12 16:42:03 7581] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=3 [2012-11-12 16:42:12 7581] DEBUG (XendDomainInfo:2230) Destroying device model [2012-11-12 16:42:12 7581] INFO (image:553) 32_EM11g_OVM device model terminated I have set_on_crash="preserve" in the vm.cfg and have then run xm create -c to get the console screen while booting and this is the log of what happens.. Started domain 32_EM11g_OVM (id=4) Bootdata ok (command line is ro root=LABEL=/ ) Linux version 2.6.18-194.0.0.0.3.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Mar 29 18:27:00 EDT 2010 BIOS-provided physical RAM map: Xen: 0000000000000000 - 0000000180800000 (usable)> No mptable found. Built 1 zonelists. Total pages: 1574912 Kernel command line: ro root=LABEL=/ Initializing CPU#0 PID hash table entries: 4096 (order: 12, 32768 bytes) Xen reported: 1600.008 MHz processor. Console: colour dummy device 80x25 Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) Software IO TLB disabled Memory: 6155256k/6299648k available (2514k kernel code, 135548k reserved, 1394k data, 184k init) Calibrating delay using timer specific routine.. 4006.42 BogoMIPS (lpj=8012858) Security Framework v1.0.0 initialized SELinux: Initializing. 
selinux_register_security: Registering secondary module capability Capability LSM initialized as secondary Mount-cache hash table entries: 256 CPU: L1 I Cache: 64K (64 bytes/line), D cache 16K (64 bytes/line) CPU: L2 Cache: 2048K (64 bytes/line) general protection fault: 0000 [1] SMP last sysfs file: CPU 0 Modules linked in: Pid: 0, comm: swapper Not tainted 2.6.18-194.0.0.0.3.el5xen #1 RIP: e030:[ffffffff80271280] [ffffffff80271280] identify_cpu+0x210/0x494 RSP: e02b:ffffffff80643f70 EFLAGS: 00010212 RAX: 0040401000810008 RBX: 0000000000000000 RCX: 00000000c001001f RDX: 0000000000404010 RSI: 0000000000000001 RDI: 0000000000000005 RBP: ffffffff8063e980 R08: 0000000000000025 R09: ffff8800019d1000 R10: 0000000000000026 R11: ffff88000102c400 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffffffff805d2000(0000) knlGS:0000000000000000 CS: e033 DS: 0000 ES: 0000 Process swapper (pid: 0, threadinfo ffffffff80642000, task ffffffff804f4b80) Stack: 0000000000000000 ffffffff802d09bb ffffffff804f4b80 0000000000000000 0000000021100800 0000000000000000 0000000000000000 ffffffff8064cb00 0000000000000000 0000000000000000 Call Trace: [ffffffff802d09bb] kmem_cache_zalloc+0x62/0x80 [ffffffff8064cb00] start_kernel+0x210/0x224 [ffffffff8064c1e5] _sinittext+0x1e5/0x1eb Code: 0f 30 b8 73 00 00 00 f0 0f ab 45 08 e9 f0 00 00 00 48 89 ef RIP [ffffffff80271280] identify_cpu+0x210/0x494 RSP ffffffff80643f70 0 Kernel panic - not syncing: Fatal exception clear as mud to me. are there any other logs that will help me? I have now deployed another vm from the same template and used the default vm settings rather than adding more memory etc - I get exactly the same error.
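
    On the "are there any other logs" question: a couple of hedged places to look, since the fault is in identify_cpu very early in the guest kernel (the xm subcommands are standard; the log path is an assumption for this OVS 2.2 install):

        # hypervisor-level messages around the time the domain crashed
        xm dmesg | tail -50

        # CPU flags and capabilities the hypervisor actually exposes to guests
        xm info

        # per-domain xend / qemu logs
        ls -lt /var/log/xen/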

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two node A/A cluster with a common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure gfs resource. Here is the rm section of /etc/cluster/cluster.conf: <rm> <failoverdomains> <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1"> <failoverdomainnode name="rhc-n1"/> </failoverdomain> <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1"> <failoverdomainnode name="rhc-n2"/> </failoverdomain> </failoverdomains> <resources> <script file="/etc/init.d/clvm" name="clvmd"/> <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/> </resources> <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart"> <script ref="clvmd"> <clusterfs ref="gfs"/> </script> </service> <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart"> <script ref="clvmd"> <clusterfs ref="gfs"/> </script> </service> </rm> This is what I mean: when using clusterfs resource agent to handle GFS partition, it is not unmounted by default (unless force_unmount option is given). This way when I issue clusvcadm -s shared-storage-inst1 clvm is stopped, but GFS is not unmounted, so a node cannot alter LVM structure on shared storage anymore, but can still access data. And even though a node can do it quite safely (dlm is still running), this seems to be rather inappropriate to me, since clustat reports that the service on a particular node is stopped. Moreover if I later try to stop cman on that node, it will find a dlm locking, produced by GFS, and fail to stop. I could have simply added force_unmount="1", but I would like to know what is the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that I have found sample configurations, where people manage GFS partitions with gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources or even as simply as just enabling services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like: chkconfig gfs2 on If I understand the latest approach correctly, such cluster only controls whether nodes are still alive and can fence errant ones, but such cluster has no control over the status of its resources. I have some experience with Pacemaker and I'm used to that all resources are controlled by a cluster and an action can be taken when not only there are connectivity issues, but any of the resources misbehave. So, which is the right way for me to go: leave GFS partition mounted (any reasons to do so?) set force_unmount="1". Won't this break anything? Why this is not the default? use script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage GFS partition. start it at boot and don't include in cluster.conf (any reasons to do so?) This may be a sort of question that cannot be answered unambiguously, so it would be also of much value for me if you shared your experience or expressed your thoughts on the issue. How does for example /etc/cluster/cluster.conf look like when configuring gfs with Conga or ccs (they are not available to me since for now I have to use Ubuntu for the cluster)? Thanks you very much!
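
    On the force_unmount question specifically: if unmounting on service stop is the behaviour you want, only the clusterfs resource line has to change. A sketch of that variant (attribute name as used by the rgmanager clusterfs agent, everything else as in the config above):

        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
                   device="/dev/vg-cs/lv-gfs" force_unmount="1"/>

    Whether that is safe for a given cluster still comes down to the dlm/cman shutdown ordering discussed above, so treat it as one option rather than the answer.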

    Read the article

  • Slow boot on Ubuntu 12.04, probable cause the network connection

    - by Ravi S Ghosh
    I have been having rather slow boot on Ubuntu 12.04. Lately, I tried to figure out the reason and it seems to be the network connection which does not get connected and requires multiple attempts. Here is part of dmesg [ 2.174349] EXT4-fs (sda2): INFO: recovery required on readonly filesystem [ 2.174352] EXT4-fs (sda2): write access will be enabled during recovery [ 2.308172] firewire_core: created device fw0: GUID 384fc00005198d58, S400 [ 2.333457] usb 7-1.2: new low-speed USB device number 3 using uhci_hcd [ 2.465896] EXT4-fs (sda2): recovery complete [ 2.466406] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 2.589440] usb 7-1.3: new low-speed USB device number 4 using uhci_hcd **[ 18.292029] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 18.458958] udevd[377]: starting version 175 [ 18.639482] Adding 4200960k swap on /dev/sda5. Priority:-1 extents:1 across:4200960k [ 19.314127] wmi: Mapper loaded [ 19.426602] r592 0000:09:01.2: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.426739] r592: driver successfully loaded [ 19.460105] input: Dell WMI hotkeys as /devices/virtual/input/input5 [ 19.493629] lp: driver loaded but no devices found [ 19.497012] cfg80211: Calling CRDA to update world regulatory domain [ 19.535523] ACPI Warning: _BQC returned an invalid level (20110623/video-480) [ 19.539457] acpi device:03: registered as cooling_device2 [ 19.539520] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/device:01/LNXVIDEO:00/input/input6 [ 19.539568] ACPI: Video Device [M86] (multi-head: yes rom: no post: no) [ 19.578060] Linux video capture interface: v2.00 [ 19.667708] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 19.763171] r852 0000:09:01.3: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.763258] r852: driver loaded successfully [ 19.854769] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.0/input/input7 [ 19.854864] generic-usb 0003:045E:00DD.0001: input,hidraw0: USB HID v1.11 Keyboard [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input0 [ 19.878605] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.1/input/input8 [ 19.878698] generic-usb 0003:045E:00DD.0002: input,hidraw1: USB HID v1.11 Device [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input1 [ 19.902779] input: DELL DELL USB Laser Mouse as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.3/7-1.3:1.0/input/input9 [ 19.925034] generic-usb 0003:046D:C063.0003: input,hidraw2: USB HID v1.10 Mouse [DELL DELL USB Laser Mouse] on usb-0000:00:1d.1-1.3/input0 [ 19.925057] usbcore: registered new interface driver usbhid [ 19.925059] usbhid: USB HID core driver [ 19.942362] uvcvideo: Found UVC 1.00 device Laptop_Integrated_Webcam_2M (0c45:63ea) [ 19.947004] input: Laptop_Integrated_Webcam_2M as /devices/pci0000:00/0000:00:1a.7/usb1/1-6/1-6:1.0/input/input10 [ 19.947075] usbcore: registered new interface driver uvcvideo [ 19.947077] USB Video Class driver (1.1.1) [ 20.145232] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 20.145235] Copyright(c) 2003-2011 Intel Corporation [ 20.145327] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 20.145357] iwlwifi 0000:04:00.0: setting latency timer to 64 [ 20.145402] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000 [ 20.145404] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90000674000 [ 20.145407] iwlwifi 0000:04:00.0: HW Revision ID = 0x0 [ 20.145531] 
iwlwifi 0000:04:00.0: irq 46 for MSI/MSI-X [ 20.145613] iwlwifi 0000:04:00.0: Detected Intel(R) WiFi Link 5100 AGN, REV=0x54 [ 20.145720] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S [ 20.167535] iwlwifi 0000:04:00.0: device EEPROM VER=0x11f, CALIB=0x4 [ 20.167538] iwlwifi 0000:04:00.0: Device SKU: 0Xf0 [ 20.167567] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels [ 20.172779] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [ 20.172783] Disabling lock debugging due to kernel taint [ 20.250115] [fglrx] Maximum main memory to use for locked dma buffers: 3759 MBytes. [ 20.250567] [fglrx] vendor: 1002 device: 9553 count: 1 [ 20.251256] [fglrx] ioport: bar 1, base 0x2000, size: 0x100 [ 20.251271] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 20.251277] pci 0000:01:00.0: setting latency timer to 64 [ 20.251559] [fglrx] Kernel PAT support is enabled [ 20.251578] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [ 20.310385] iwlwifi 0000:04:00.0: loaded firmware version 8.83.5.1 build 33692 [ 20.310598] Registered led device: phy0-led [ 20.310628] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.372306] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [ 20.411015] psmouse serio1: synaptics: Touchpad model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa40000/0xa0000 [ 20.454232] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input11 [ 20.545636] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.545640] cfg80211: World regulatory domain updated: [ 20.545642] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 20.545644] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545647] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545649] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545652] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545654] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.609484] type=1400 audit(1340502633.160:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=693 comm="apparmor_parser" [ 20.609494] type=1400 audit(1340502633.160:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=642 comm="apparmor_parser" [ 20.609843] type=1400 audit(1340502633.160:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=693 comm="apparmor_parser" [ 20.609852] type=1400 audit(1340502633.160:5): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=642 comm="apparmor_parser" [ 20.610047] type=1400 audit(1340502633.160:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=693 comm="apparmor_parser" [ 20.610060] type=1400 audit(1340502633.160:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=642 comm="apparmor_parser" [ 20.610476] type=1400 audit(1340502633.160:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=814 comm="apparmor_parser" [ 20.610829] type=1400 audit(1340502633.160:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=814 comm="apparmor_parser" [ 
20.611035] type=1400 audit(1340502633.160:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=814 comm="apparmor_parser" [ 20.661912] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 20.661982] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [ 20.662013] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 20.770289] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 20.770689] snd_hda_intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 20.770786] snd_hda_intel 0000:01:00.1: irq 48 for MSI/MSI-X [ 20.770815] snd_hda_intel 0000:01:00.1: setting latency timer to 64 [ 20.994040] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0 [ 20.994189] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input13 [ 21.554799] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [ 21.554802] vesafb: scrolling: redraw [ 21.554804] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [ 21.557342] vesafb: framebuffer at 0xd0000000, mapped to 0xffffc90011800000, using 3072k, total 3072k [ 21.557498] Console: switching to colour frame buffer device 128x48 [ 21.557516] fb0: VESA VGA frame buffer device [ 21.987338] EXT4-fs (sda2): re-mounted. Opts: errors=remount-ro [ 22.184693] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [ 27.362440] iwlwifi 0000:04:00.0: RF_KILL bit toggled to disable radio. [ 27.436988] init: failsafe main process (986) killed by TERM signal [ 27.970112] ppdev: user-space parallel port driver [ 28.198917] Bluetooth: Core ver 2.16 [ 28.198935] NET: Registered protocol family 31 [ 28.198937] Bluetooth: HCI device and connection manager initialized [ 28.198940] Bluetooth: HCI socket layer initialized [ 28.198941] Bluetooth: L2CAP socket layer initialized [ 28.198947] Bluetooth: SCO socket layer initialized [ 28.226135] Bluetooth: RFCOMM TTY layer initialized [ 28.226141] Bluetooth: RFCOMM socket layer initialized [ 28.226143] Bluetooth: RFCOMM ver 1.11 [ 28.445620] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 28.445623] Bluetooth: BNEP filters: protocol multicast [ 28.524578] type=1400 audit(1340502641.076:11): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=1052 comm="apparmor_parser" [ 28.525018] type=1400 audit(1340502641.076:12): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=1052 comm="apparmor_parser" [ 28.629957] type=1400 audit(1340502641.180:13): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1105 comm="apparmor_parser" [ 28.630325] type=1400 audit(1340502641.180:14): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1105 comm="apparmor_parser" [ 28.630535] type=1400 audit(1340502641.180:15): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1105 comm="apparmor_parser" [ 28.645266] type=1400 audit(1340502641.196:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1104 comm="apparmor_parser" **[ 28.751922] ADDRCONF(NETDEV_UP): wlan0: link is not ready** [ 28.753653] tg3 0000:08:00.0: irq 49 for MSI/MSI-X **[ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 28.857034] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 28.871080] type=1400 audit(1340502641.420:17): apparmor="STATUS" operation="profile_load" 
name="/usr/lib/telepathy/mission-control-5" pid=1108 comm="apparmor_parser" [ 28.871519] type=1400 audit(1340502641.420:18): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1108 comm="apparmor_parser" [ 28.874905] type=1400 audit(1340502641.424:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1113 comm="apparmor_parser" [ 28.875354] type=1400 audit(1340502641.424:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1113 comm="apparmor_parser" [ 30.477976] tg3 0000:08:00.0: eth0: Link is up at 100 Mbps, full duplex [ 30.477979] tg3 0000:08:00.0: eth0: Flow control is on for TX and on for RX **[ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready** [ 31.110269] fglrx_pci 0000:01:00.0: irq 50 for MSI/MSI-X [ 31.110859] [fglrx] Firegl kernel thread PID: 1327 [ 31.111021] [fglrx] Firegl kernel thread PID: 1329 [ 31.111408] [fglrx] Firegl kernel thread PID: 1330 [ 31.111543] [fglrx] IRQ 50 Enabled [ 31.712938] [fglrx] Gart USWC size:1224 M. [ 31.712941] [fglrx] Gart cacheable size:486 M. [ 31.712945] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [ 31.712948] [fglrx] Reserved FB block: Unshared offset:fc2b000, size:3d5000 [ 31.712950] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [ 41.312020] eth0: no IPv6 routers present As you can see I get multiple instances of [ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready and then finally it becomes read and I get the message [ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready. I searched askubuntun, ubuntuforum, and the web but couldn't find a solution. Any help would be very much appreciated. Here is the bootchart
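
    One low-risk thing to rule out (an assumption, since /etc/network/interfaces is not shown) is a wired interface configured in /etc/network/interfaces that the boot sequence waits on before NetworkManager takes over. On 12.04 a minimal interfaces file that leaves eth0 and wlan0 entirely to NetworkManager looks like this:

        # /etc/network/interfaces -- minimal sketch; only loopback is configured here,
        # so ifupdown never blocks the boot waiting for "link is not ready" on eth0/wlan0
        auto lo
        iface lo inet loopback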

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary.  Myself and Steve Rogalsky did a session that we called “Agilist, Heal Thyself!”.  We used a format that was new to me, but that Steve had seen used at another conference.  What we did was start by asking the audience to give us a list of challenges they had had when adopting agile.  We wrote them all down, then had everybody vote on the most interesting ones.  Then we split into two groups, and each group was assigned one of the agile challenges.  We had 20 minutes to discuss the challenge, and suggest solutions or approaches to improve things.  At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learning's, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions: Unfinished Stories at the end of Sprints The first agile challenge we tackled, was something that every single Scrum team I have worked with has struggled with.  What happens when you get to the end of a Sprint, and there are some stories that are only partially completed.  The team in question was getting very de-moralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? and/or what do you do when it happens? There were 2 pieces of advice that were well received: 1. Try to bring stories to completion before starting new ones.  This is advice I give all my Scrum teams.  If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done.  This is a Bad Thing.  I encourage the teams I work with to only start a new story as a very last resort.  If you finish your task look at the stories in progress and see if there’s anything you can do to help before moving onto a new story.  In the daily standup, put a focus on seeing what stories got completed yesterday, if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it.  Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum.  My current team has 2-week sprints, and we usually have about a dozen or stories in a sprint.  We instituted a WIP limit of 4 stories.  If 4 stories have been started but not finished then nobody is allowed to start new stories.  This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers).  The WIP limit forced the developers to start to pickup QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits. 2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like KanBan.  Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done.  This eliminates the problem almost entirely.  At some points in the project (releases) you need to be able to burn down all the half finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2-weeks (e.g. 
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it). Trying to Introduce Agile into a team with previous Bad Agile Experiences One of the agile adoption challenges somebody described, was he was in a leadership role on a team he had recently joined – lets call him Dave.  This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project.  Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc.  The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure.  Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again.  Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well.  What was Dave to do? Ultimately, the best advice was to question *why* did Dave want to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change.  He would be better served by identifying problems the team currently faces, have a discussion with the team to get everybody to agree that specific problems existed, then have an open discussion about ways to address those problems.  This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to.  For example, when we discussed with Dave, he said probably the teams biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs.  Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from team if he framed it as a discussion of existing problems, and brainstorming possible solutions.  And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought-into those changes.  Taking an incremental approach has a greater chance of success. I see something similar in my day job all the time too.  Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet).  But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc.  It’s kind of funny at times, sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”. PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge.  There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2-weeks.  The first 2-week period that we miss we’re eliminated.  Last person standing gets the money.  So expect at least one blog post every couple of weeks for the near future (I hope!).  
    And check out the blogs of the other 5 people in this blogger challenge:
      - Steve Rogalsky: http://winnipegagilist.blogspot.ca
      - Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek
      - Tyler Doerkson: http://blog.tylerdoerksen.com
      - David Alpert: http://www.spinthemoose.com
      - Dave White: http://www.agileramblings.com (note: site not available yet. Should be shortly or he owes me some money!)

    Read the article

  • Bugzilla : No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on windows 7. I am using the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account , but i get the following error. Software error: No SASL mechanism found at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77 at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143 i ran checksetup.pl and found that Authen::SASL and SMTP both are available on my machine. The output of checksetup.pl is as follows. * This is Bugzilla 3.6.3 on perl 5.10.1 * Running on Win7 Build 7600 Checking perl modules... Checking for CGI.pm (v3.33) ok: found v3.49 Checking for Digest-SHA (any) ok: found v5.48 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.53 Checking for DateTime-TimeZone (v0.79) ok: found v1.10 Checking for DBI (v1.41) ok: found v1.609 Checking for Template-Toolkit (v2.22) ok: found v2.22 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.861) ok: found v1.903 Checking for Email-MIME-Encodings (v1.313) ok: found v1.313 Checking for Email-MIME-Modifier (v1.442) ok: found v1.903 Checking for URI (any) ok: found v1.52 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.16.1 Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for DBD-Oracle (v1.19) not found The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.44 Checking for Chart (v2.1) ok: found v2.4.1 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for XML-Twig (any) ok: found v3.34 Checking for MIME-tools (v5.406) ok: found v5.427 Checking for libwww-perl (any) ok: found v5.834 Checking for PatchReader (v0.9.4) ok: found v0.9.5 Checking for perl-ldap (any) ok: found v0.39 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.17 Checking for SOAP-Lite (v0.710.06) ok: found v0.710.10 Checking for JSON-RPC (any) ok: found v0.95 Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.40) ok: found v3.64 Checking for HTML-Scrubber (any) ok: found v0.08 Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. 
* *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * *********************************************************************** * Note For Windows Users * *********************************************************************** * In order to install the modules listed below, you first have to run * * the following command as an Administrator: * * * * ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/ * * * Then you have to do (also as an Administrator): * * * * ppm repo up theory58S * * * * Do that last command over and over until you see "theory58S" at the * * top of the displayed list. * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for MySQL (v4.1.2) ok: found v5.1.44-community-log Removing existing compiled templates... Precompiling templates...done. Now that you have installed Bugzilla, you should visit the 'Parameters' page (linked in the footer of the Administrator account) to ensure it is set up as you wish - this includes setting the 'urlbase' option to the correct URL. Press any key to continue . . . Please tell me what should i do. Please note: i am running behind a corporate proxy , SSL/TLS is not used internally but i am giving the smtpUser and smtpPass also.
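
    A note on the error itself: checksetup only proves that the Authen::SASL dispatcher is installed; the "No SASL mechanism found" croak at SASL.pm line 77 is typically raised when none of the AUTH mechanisms the mail relay advertises can be matched with an installed backend, or when credentials are supplied but the relay advertises no AUTH at all. A small hedged probe, where 'mailhost.example.com' stands in for Bugzilla's smtpserver parameter:

        # smtp_probe.pl -- print the relay's EHLO dialogue, including its AUTH line (if any)
        use strict;
        use warnings;
        use Net::SMTP;

        my $smtp = Net::SMTP->new('mailhost.example.com', Timeout => 10, Debug => 1)
            or die "cannot connect: $@";
        $smtp->quit;

    If the relay advertises no AUTH mechanisms, clearing smtpUser/smtpPass in the Bugzilla parameters is a common way to avoid the SASL code path entirely; that is offered as something to verify, not a confirmed fix.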

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So i have set up an openldap server using this guide here. It worked fine. But as i want to use sssd i also need TLS to be working for ldap. So i looked into and followed the TLS part of the guide. And i never got any errors and slapd started fine again. BUT. It does not seem to work when i try to use ldap over tls. root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se ldap_start_tls: Protocol error (2) additional info: unsupported extended operation Ganking up the debug level some notches returns some more information: root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5 ldap_url_parse_ext(ldap://83.209.243.253) ldap_create ldap_url_parse_ext(ldap://83.209.243.253:389/??base) ldap_extended_operation_s ldap_extended_operation ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP 83.209.243.253:389 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 83.209.243.253:389 ldap_pvt_connect: fd: 3 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({) ber: ber_flush2: 31 bytes to sd 3 ldap_result ld 0x7f25df51e220 msgid 1 wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout) wait4msg continue ld 0x7f25df51e220 msgid 1 all 1 ** ld 0x7f25df51e220 Connections: * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected last used: Fri Jun 6 08:52:16 2014 ** ld 0x7f25df51e220 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f25df51e220 request count 1 (abandoned 0) ** ld 0x7f25df51e220 Response Queue: Empty ld 0x7f25df51e220 response count 0 ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f25df51e220 NULL ldap_int_select read1msg: ld 0x7f25df51e220 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 42 contents: read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f25df51e220 0 new referrals read1msg: mark request completed, ld 0x7f25df51e220 msgid 1 request done: ld 0x7f25df51e220 msgid 1 res_errno: 2, res_error: <unsupported extended operation>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_extended_result ber_scanf fmt ({eAA) ber: ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_err2string ldap_start_tls: Protocol error (2) additional info: unsupported extended operation ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 3 ldap_free_connection: actually freed So no good information there neither. In /var/log/syslog i get: Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389) Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037 Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037" Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed If i portscan the host i get the following: Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253) Host is up (0.0072s latency). 
Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 389/tcp open ldap 636/tcp open ldapssl But when i check certs root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:unknown state 140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 317 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- And i feel like i am clearly out in deep water not knowing at all where to go from here. Anny hints appreciated on what to do or to get better debug logging... EDIT: This is my config slapcated from cn=config and it does not mention at all anything about TLS. I have inserted my certinfo.ldif: root@master:~# cat certinfo.ldif dn: cn=config add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem - add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem - add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem and when doing that i only got this as an answer. root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 modifying entry "cn=config" So still no wiser.
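
    Two things worth checking before anything else, offered as a sketch rather than a diagnosis: whether the three olcTLS* attributes actually landed in cn=config (the slapcat above suggests they did not, in which case re-applying the LDIF with an explicit "changetype: modify" line, as most guides write it, is the next step), and whether slapd can read the key at all:

        # does the cn=config entry itself carry the TLS attributes?
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base \
             olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile

        # on Ubuntu, slapd runs as 'openldap' and must be able to read the key under /etc/ssl/private
        sudo adduser openldap ssl-cert
        sudo chgrp ssl-cert /etc/ssl/private/daladevelop_slapd_key.pem
        sudo chmod 640 /etc/ssl/private/daladevelop_slapd_key.pem
        sudo service slapd restart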

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into. Background My network is SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi† | | | | SMC8014 ASA-5505 GS608v2 gigE DIR-655 rev A3 gigE †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built- in router/firewall. Endpoints: MacBook Pro 17" mid-2010 iPhone 4S Fedora 12 Linux server running reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone. How I'm measuring "slower" I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that. Results Speedtest.net, closest server over Comcast business-class: CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8 Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0 Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9 iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6 Speedtest Mini on Linux PC: CONFIG | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> NetGear | 97.2 | 76.9 Mac <-> Ethernet <-> D-Link | 8.2 | 24.2 Mac <-> WiFi <-> D-Link | 1.0 | 8.6 Slow typing in SSH: Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth. What I've tried Swapping all "good" and "bad" cables Re-plugging "bad" cable from D-Link to Netgear and watching it be the "good" cable pulling cables away from power lines Verify that the Mac auto-detects the D-Link as gigE Try to verify the link speed of the D-Link <- Netgear connection, but the firmware doesn't report that Verify that the D-Link sees no TX/RX errors or collisions Use different Ethernet ports on both Netgear and D-Link Reset the D-Link to factory settings Upgrade the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest Reboot everything at least once On the Mac, disable Wi-Fi during the Ethernet tests, and unplug Ethernet during the Wi-Fi tests Using iStumbler, verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apt building) Verify that the only client connected to the Wi-Fi was the iPhone Verify that nothing was being chatty on my network according to the WISH log Enable and disable all sorts of D-Link settings, including forcing WAN auto-detect to gigE So. I don't mind buying a new access point—I wouldn't mind having a dual-link network—but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled. 
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
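
    On the "Ethernet vs. TCP layer" question above: a hedged way to take broadband and HTTP out of the picture is a raw throughput test between the MacBook and the Linux PC through each path; iperf is the usual tool, and the address below is a placeholder:

        # on the Linux PC: start a listener
        iperf -s

        # on the MacBook, once through the Netgear and once through the D-Link's LAN ports
        iperf -c 192.168.1.10 -t 30 -i 5      # 192.168.1.10 = Linux PC (assumed address)

    A large gap between the two runs points at the link/switching layer; similar raw numbers combined with slow HTTP would point further up the stack.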

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL Client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, setup with gitolite on https://my.server/git/) we need an exception since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows: SSLCACertificateFile ssl-certs/client-ca-certs.crt <Location /> SSLVerifyClient require SSLVerifyDepth 2 </Location> # this works <Location /foo> SSLVerifyClient none </Location> # this does not <Location /git> SSLVerifyClient none </Location> I have also tried an alternative solution, with the same results: # require authentication everywhere except /git and /foo <LocationMatch "^/(?!git|foo)"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> In both these cases, a user without client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works ok. The ScriptAlias problem Gitolite is setup using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias: # Gitolite ScriptAlias /git/ /path/to/gitolite-shell/ ScriptAlias /gitmob/ /path/to/gitolite-shell/ # My test ScriptAlias /test/ /path/to/test/script/ Note that /path/to/test/script is a file, not a directory, the same goes for /path/to/gitolite-shell/ My test script simply prints out the environment, super simple: #!/usr/bin/perl print "Content-type:text/plain\n\n"; print "TEST\n"; @keys = sort(keys %ENV); foreach (@keys) { print "$_ => $ENV{$_}\n"; } It seems that if I go to https://my.server/test/someLocation, that any SSLVerifyClient directives are being applied which are in Location blocks that match /test/someLocation or just /someLocation. If I have the following config: <LocationMatch "^/f"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> Then, the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo Note that this only seems to apply for SSL configuration. The following has no effect whatsoever on https://my.server/test/foo: <LocationMatch "^/f"> Order allow,deny Deny from all </LocationMatch> However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require a SSL client certificate. Since the /git/project URL also gets matched agains /project Location blocks, such a configuration seems impossible given my current findings. Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL Client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.
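
    One alternative shape for the config, offered as a sketch rather than a confirmed fix for the ScriptAlias interaction described above: request the certificate optionally at the server level and enforce it per path with SSLRequire, keyed on the verification result.

        SSLCACertificateFile ssl-certs/client-ca-certs.crt
        SSLVerifyClient optional
        SSLVerifyDepth  2

        # require a successfully verified client certificate everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLRequire %{SSL_CLIENT_VERIFY} eq "SUCCESS"
        </LocationMatch>

    Whether SSLRequire is evaluated against the same (possibly ScriptAlias-affected) path as the Location matching is exactly what would need to be verified on this server.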

    Read the article

  • Apache sends plain-text response when accessing SSL-enabled site without HTTPS

    - by animuson
    I've never encountered something such as this before. I was attempting to simply redirect the page to the HTTPS version if it determined that HTTPS was off, but instead it's displaying an HTML page rather than actually redirecting; and even odder, it's displaying it as text/plain! The VirtualHost Declaration (Sort of): ServerAdmin [email protected] DocumentRoot "/path/to/files" ServerName example.com SSLEngine On SSLCertificateFile /etc/ssh/certify/example.com.crt SSLCertificateKeyFile /etc/ssh/certify/example.com.key SSLCertificateChainFile /etc/ssh/certify/sub.class1.server.ca.pem <Directory "/path/to/files/"> AllowOverride All Options +FollowSymLinks DirectoryIndex index.php Order allow,deny Allow from all </Directory> RewriteEngine On RewriteCond %{HTTPS} off RewriteRule .* https://example.com:6161 [R=301] The Page Output: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://example.com:6161">here</a>.</p> <hr> <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address> </body></html> I've tried moving the Rewrite stuff up above the SSL stuff hoping it'd do something and nothing happens. If I view the page with via HTTPS, it displays fine like it should. It's obviously detecting that I'm trying to rewrite the path, but it's not acting. The Apache error log does not indicate anything to me that might have gone wrong. When I remove the RewriteRules: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Reason: You're speaking plain HTTP to an SSL-enabled server port.<br /> Instead use the HTTPS scheme to access this URL, please.<br /> <blockquote>Hint: <a href="https://example.com/"><b>https://example.com/</b></a></blockquote></p> <p>Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.</p> <hr> <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address> </body></html> I get the standard "you can't do this because you're not using SSL" response, which is also provided in text/plain rather than being rendered as HTML. This would make sense, it should only work for HTTPS-enabled connections, but I still want to redirect them to the HTTPS connection when it determines that it is not enabled. Thinking I could circumvent the system: I tried adding a ErrorDocument 400 https://example.com:6161 to the config file instead of using RewriteRules, and that just gave me a new message, still no cheese. <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>302 Found</title> </head><body> <h1>Found</h1> <p>The document has moved <a href="https://example.com:6161">here</a>.</p> <hr> <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address> </body></html> How can I force Apache to actually redirect rather than displaying a "301" page that shows HTML in plain-text format?
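
    Since every response shown above is coming back over a plain (non-TLS) connection on the SSL port, a commonly used alternative (a sketch; only the 6161 port number comes from the question) is to keep the SSL VirtualHost HTTPS-only and do the redirect from a plain-HTTP listener on its own port:

        # plain-HTTP vhost whose only job is to bounce clients to the SSL site
        Listen 80
        <VirtualHost *:80>
            ServerName example.com
            Redirect permanent / https://example.com:6161/
        </VirtualHost>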

    Read the article

  • Too many Bind query (cache) denied, DNS attack?

    - by Jake
    Once Bind crashed and I did: tail -f /var/log/messages I see a massive number of logs every second. Is this a DNS attack? or is there something wrong? Sometimes I see a domain in logs like this: dOmAin.com (upper and lower). As you see there is only one single domain in the logs with different IPs Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied /etc/resolv.conf ; generated by /sbin/dhclient-script nameserver 4.2.2.4 nameserver 8.8.4.4 Bind config: // generated by named-bootconf.pl options { directory "/var/named"; /* * If there is a firewall between you and nameservers you want * to talk to, you might need to uncomment the query-source * directive below. Previous versions of BIND always asked * questions using port 53, but BIND 8.1 uses an unprivileged * port by default. */ // query-source address * port 53; allow-transfer { none; }; allow-recursion { localnets; }; //listen-on-v6 { any; }; notify no; }; // // a caching only nameserver config // controls { inet 127.0.0.1 allow { localhost; } keys { rndckey; }; }; zone "." 
IN { type hint; file "named.ca"; }; zone "localhost" IN { type master; file "localhost.zone"; allow-update { none; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "named.local"; allow-update { none; }; };
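
    For what it's worth, the log lines show named refusing cache/recursive lookups for outside clients, which is what allow-recursion { localnets; } is meant to do; a flood of such queries for a single domain (including the mixed-case dOmAin.com form) is consistent with spoofed-source or open-resolver probing rather than a misconfiguration. If the goal is an authoritative-only server that also stops logging the denials, one hedged variant (the acl name and the logging change are assumptions; allow-query-cache needs BIND 9.4 or later):

        acl "trusted" { 127.0.0.1; localnets; };

        options {
            // ... existing directives unchanged ...
            allow-recursion   { trusted; };   // recursion only for local clients
            allow-query-cache { trusted; };   // cache lookups only for them as well
        };

        // if the denial log lines themselves are the only problem, they can be silenced:
        logging {
            category security { null; };
        };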

    Read the article

  • VPS 512 MB RAM with WordPressMU comes to consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried. My sites are not getting any high hits. May be like 100 hits per day [all my sites combined]. Here are my specs I have 512 MB RAM VPS with burstable 1024 MB. Centos 5 32-bit & cPanel/WHM Apache 2.2 MySQL 5.0 PHP 5.3.2 Here is my Configs I have 2 WordPressMU production sites, and 1 test site my.cnf # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/lib/mysql/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking skip-bdb skip-innodb key_buffer = 16M max_allowed_packet = 1M table_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M #CAPitalZ thread_cache_size=8 thread_concurrency=4 #query_cache_type=1 #query_cache_limit=1M query_cache_size=16M concurrent_insert=2 low_priority_updates=1 max_connections=50 tmp_table_size=16M max_heap_table_size=16M join_buffer_size=1M interactive_timeout=25 wait_timeout=1000 #connect_timout=10 not able to restart mysql max_connect_errors=10 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # skip-networking # Disable Federated by default skip-federated # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 [mysqld_safe] open_files_limit=8192 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout httpd.conf I have unselected many modules and recompiled using EasyApache in WHM. Only have the following modules built Deflate Expires Fileprotect Imagemap MPM Prefork Version [default] EAccelerator for PHP Bcmath Calendar CurlSSL [I'm using Curl. But I don't have any https sites] Expat GD [for image cropping] Gettext Imap Mbregex [default] Mbstring [need both Mbregex and Mbstring for utf-8] Mysql of the system MySQL "Improved" extension. Sockets TTF (FreeType) [I'm using custom font] Zlib Under Global Configuration I only have FollowSymLinks enabled I Have TraceEnable, ServerSignature, FileETag OFF ServerTokens ProductOnly DirectoryIndex Priority has index.php as the first one I have removed Clamd [Clam Anti-virus] SpamAssasin is Off Under Tweak Settings Default catch-all/default address behavior for new accounts. 
This is set to "fail" All stats programs turned off I have eAccelerator installed and checked in phpinfo and its working [Pre VirtualHost Include under WHM] Timeout 20 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 3 MinSpareServers 1 MaxSpareServers 3 StartServers 1 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 4000 ExtendedStatus Off #ServerType standalone this throws error HostnameLookups Off <Directory "/"> AllowOverride None </Directory> My sites will take ages to load and WHM/CPanel will not even load. adadaa.com/ http://adadaa.net/ kadais.ca/ My average memory consumption is like 1000 MB! [yes always bursting] The process that consumes most CPU and also most memory is mysql But I also get like 15 httpd processes [when its bursting] I already got warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07" I don't know, I have tried switching these config values many different times, but nothing seems to work. Please show some light... Thanks

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspxRecently I was digging deeper why some WCF hosted workflow application did consume quite a lot of memory although it did basically only load a xaml workflow. The first tool of choice is Process Explorer or even better Process Hacker (has more options and the best feature copy&paste does work). The three most important numbers of a process with regards to memory are Working Set, Private Working Set and Private Bytes. Working set is the currently consumed physical memory (parts can be shared between processes e.g. loaded dlls which are read only) Private Working Set is the physical memory needed by this process which is not shareable Private Bytes is the number of non shareable which is only visible in the current process (e.g. all new, malloc, VirtualAlloc calls do create private bytes) When you have a bigger workflow it can consume under 64 bit easily 500MB for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit which looks a bit strange but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I did try to repro the issue with a medium sized xaml file (400KB) which does contain 1000 variables and 1000 if which can be represented by C# code like this: string Var1; string Var2; ... string Var1000; if (!String.IsNullOrEmpty(Var1) ) { Console.WriteLine(“Var1”); } if (!String.IsNullOrEmpty(Var2) ) { Console.WriteLine(“Var2”); } ....   Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler which does result in (x64) 140 MB of private bytes which is ca. 140 KB for each if clause which is quite a lot if you think about the actually present functionality. But there is hope. .NET 4.5 does allow now C# expressions for WF which is a major step forward for all C# lovers. I did create some simple patcher to “cross compile” my xaml to C# expressions. Lets look at the result: C# Expressions VB Expressions x86 x86 On my home machine I have only 32 bit which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory but instead it did consume a magnitude more memory. That is surprising to say the least. The workflow does initialize in about the same time under x64 and x86 where the VB code does it in 2s whereas the C# version needs 18s. Also nearly ten times slower. That is a too high price to pay for any bigger sized xaml workflow to convert from VB.NET to C# expressions. If I do reduce the number of expressions to 500 then it does need 400MB which is about half of the memory. It seems that the cost per if does rise linear with the number of total expressions in a xaml workflow.  Expression Language Cost per IF Startup Time C# 1000 Ifs x64 1,5 MB 18s C# 500 Ifs x64 750 KB 9s VB 1000 Ifs x64 140 KB 2s VB 500 Ifs x64 70 KB 1s Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure but it has much higher offset compared to the highly inefficient C# expression compiler. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee did give an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. 
He was after startup time and the call stacks with regards to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … “C# expressions will be coming soon to WF, and that will have different performance characteristics than VB” … What did he know in Jan 2011 that I did not know until today? ;-). He knew that C# expressions would come but that they would not automatically have a better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy but it does hurt scalability a lot since you do need much more memory than necessary. I did find the stacks by enabling virtual allocation tracking within XPerf which is still the best tool out there. But first you need to look at your process to check where the memory is hiding: For the C# Expression compiler you do not need xperf. You can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data ( VirtualAlloc ) you can find it with xperf. There is a nice video on channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap it does mean that the C/C++ runtime did create a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf and full call stack support as well, which is doable via xperf like it is shown also on channel 9. Or you can use WPRUI directly: To make “Heap Usage” work you need to set the tracing flags for your executable (before you start it). For example devenv.exe HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe DWORD TracingFlags 1 Do not forget to disable it after you did complete profiling the process or it will impact the startup time quite a lot. You can attach xperf directly to a running process and collect heap allocation information from a process gone wild. Very handy if you need to find out what a process was doing which has arrived in a funny state. “VirtualAlloc usage” does work without explicitly enabling stuff for a specific process and is always on machine-wide. I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine but I do not want to downgrade.

    Read the article

  • dns server bind is not working [closed]

    - by user1742080
    I just installed bind on RHEL 6 and point a domain to that server. but actually when i ping domain it returns error 1214: Here is my named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "mydomain.com"{ type master; file "/var/named/data/named.mydomain.com"; allow-update { none; }; }; AND The content of "/var/named/data/named.mydomain.com": 1 $TTL 38400 2 3 mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. ( 4 2012101201 ; serial number YYMMDDNN 5 28800 ; Refresh 6 7200 ; Retry 7 864000 ; Expire 8 38400 ; Min TTL 9 ) 10 11 mydomain.com. IN A 1.2.3.4 12 www IN A 1.2.3.4 13 ns1.mydomain.com. IN A 1.2.3.4 14 ns2.mydomain.com. IN A 1.2.3.4 15 mydomain.com. IN NS ns1.mydomain.com. 16 mydomain.com. IN NS ns2.mydomain.com. AND i'm sure the named service is running: [root@server ~]# service named status version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 CPUs found: 8 worker threads: 8 number of zones: 20 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running named (pid 26299) is running...

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • Oracle 11g New Feature – HM (Hang Manager)

    - by Allen Gao
    This post introduces the Oracle 11g feature HM (Hang Manager). HM exists only in RAC databases. A hang occurs when a session has to wait for a resource held by another session and the wait never completes on its own; when the waiters form a closed cycle the situation is a deadlock rather than an ordinary hang. The session holding the resource is the blocker, and HM distinguishes the immediate blocker (the session directly holding what the waiter wants) from the root blocker (the session at the head of the whole wait chain). A session is treated as the root blocker when 2.1 it is not itself waiting for any other session, or 2.2 its current wait (for example an I/O wait) does not depend on another session and is expected to finish by itself. The root blocker is what HM cares about.
    At a high level, 11g RAC detects and resolves hangs in four steps: 1. periodically collect hang analyze dump information; 2. merge and analyze the dump information across instances; 3. decide from successive dumps whether a real hang exists; 4. resolve the hang if resolution is enabled.
    Note 1: each Oracle process maintains its wait information in a hang analysis cache, from which the hang analyze dumps are produced.
    Note 2: the dumps are collected by the DIA0 background process (new in 11g). DIA0 performs a local hang analyze dump roughly every 3 seconds and a cluster-wide (global) hang analyze dump roughly every 10 seconds.
    Note 3: in a RAC database hang detection has to be coordinated across instances, so one instance's DIA0 acts as the master and consolidates the information collected by the DIA0 processes on the other nodes. HM then compares the wait chains (open chains) found in successive hang analyze dumps over a window of about 30 seconds; only when the same open chain persists across dumps is it treated as a hang (a closed chain corresponds to a dead lock rather than a hang).
    Note 4: even for a confirmed hang, HM does not resolve every kind of hang. Examples it leaves alone are hangs rooted in certain resources such as ASM, waits caused by the application itself such as TX lock contention, and waits such as "log file switch" that need DBA intervention (for example when the underlying filesystem is full). For these, HM detects and records the hang but does not resolve it.
    Hang resolution is controlled by hidden parameters: _hang_resolution = TRUE or FALSE decides whether HM resolves hangs at all; _hang_resolution_scope = OFF, PROCESS or INSTANCE limits what HM may do (OFF, the default, means HM detects but does not resolve; PROCESS allows HM to terminate the blocking process; INSTANCE additionally allows resolution even when that means terminating the instance); _hang_detection = <number> is the hang detection interval, 30 seconds by default. HM writes the details of what it detects and does to the DIA0 trace files.

    Read the article

  • How to populate a drop down list in Spring MVC

    - by GigaPr
    Hi, would like to populate a drop down list on a jsp page i have my page that looks like <form:form method="POST" action="addRss.htm" commandName="addNewRss" cssClass="addUserForm"> <div class="floatL"> <div class="padding5"> <div class="fieldContainer"> <strong>Title:</strong>&nbsp; </div> <form:errors path="title" cssClass="error"/> <form:input path="title" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Description:</strong>&nbsp; </div> <form:errors path="description" cssClass="error"/> <form:input path="description" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Language:</strong>&nbsp; </div> <form:errors path="language" cssClass="error"/> <form:select path="language" cssClass="textArea" /> </div> </div> <div class="floatR"> <div class="padding5"> <div class="fieldContainer"> <strong>Link:</strong>&nbsp; </div> <form:errors path="link" cssClass="error"/> <form:input path="link" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url:</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> </div> <input type="submit" class="floatR" value="Add New Rss"> </form:form> and my controller public class AddRssController extends BaseController { private static final String[] LANGUAGES = { "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "DC", "FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT", "VA", "VT", "WA", "WV", "WI", "WY" }; public AddRssController() { setCommandClass(RSS.class); setCommandName("addNewRss"); } @Override protected Object formBackingObject(HttpServletRequest request) throws Exception { RSS rantForm = (RSS) super.formBackingObject(request); // rantForm.setVehicle(new Vehicle()); return rantForm; } @Override protected Map referenceData(HttpServletRequest request) throws Exception { Map referenceData = new HashMap(); referenceData.put("language", LANGUAGES); return referenceData; } @Override protected ModelAndView onSubmit(Object command, BindException bindException) throws Exception { RSS rss = (RSS) command; rssServiceImplementation.add(rss); return new ModelAndView(getSuccessView()); } } and my BaseController public class BaseController extends SimpleFormController implements Controller { public UserServiceImplementation userServiceImplementation; public UserServiceImplementation getUserServiceImplementation() { return userServiceImplementation; } public void setUserServiceImplementation(UserServiceImplementation userServiceImplementation) { this.userServiceImplementation = userServiceImplementation; } public RssServiceImplementation rssServiceImplementation; public RssServiceImplementation getRssServiceImplementation() { return rssServiceImplementation; } public void setRssServiceImplementation(RssServiceImplementation rssServiceImplementation) { this.rssServiceImplementation = rssServiceImplementation; } } But it doesn t work Any suggestion?
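    As a sketch only (class and model-key names below are placeholders, not from the original code), the usual pattern is to expose the array through referenceData under its own key and then point the <form:select> tag at that key with an items attribute, since a <form:select> with no nested options and no items renders an empty list:

        import java.util.HashMap;
        import java.util.Map;
        import javax.servlet.http.HttpServletRequest;
        import org.springframework.web.servlet.mvc.SimpleFormController;

        public class AddRssReferenceDataSketch extends SimpleFormController {

            private static final String[] LANGUAGES = { "AL", "AK", "AZ" /* ... */ };

            @Override
            protected Map referenceData(HttpServletRequest request) throws Exception {
                // Expose the options under a key of their own; "languageList" is an assumed name.
                Map referenceData = new HashMap();
                referenceData.put("languageList", LANGUAGES);
                return referenceData;
            }

            // The JSP side would then bind the options to the select, for example:
            //   <form:select path="language" items="${languageList}" cssClass="textArea"/>
            // or a nested <form:options items="${languageList}"/> inside the <form:select>.
        }

    With a String[] the same value is used for both the option value and its label; a Map can be supplied instead when codes and display names differ.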

    Read the article

  • Rails, gmail: how to get plain/text from body

    - by atmorell
    Hello, I am loading am email with IMAP and parsing it with mail. This works very well, however the mail.body.decoded field contains a lot of formatting. How do I dig out the plain/txt body of the email - ignore attachements, formatting etc. It works fine if I try with an email without html. source = imap.uid_fetch(uid, ['RFC822']).first.attr['RFC822'] mail = Mail.new(source) This body content looks like this: Mail::Body:0x7f36ed468270 @epilogue="", @boundary="_004_4C49171DCB8C4540844E69DD39FDD98Ffirm_", @encoding="7bit", @raw_source="--_004_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: multipart/alternative;\r\n\tboundary=\"_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\"\r\n\r\n--_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: text/plain; charset=\"iso-8859-1\"\r\nContent-Transfer-Encoding: quoted-printable\r\n\r\ndasdsasda\r\n\r\n\r\n\r\nMed venlig hilsen / Med V=E4nlig H=E4lsning / Best Regards\r\r\nAsbj=F8rn Toke Morell. .\r\n+45 7020 0160\r\n+45 2152 0015\r\n[cid:[email protected]]\r\nhttp://www..dk\r\n\r\n\r\n--_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: text/html; charset=\"iso-8859-1\"\r\nContent-Transfer-Encoding: quoted-printable\r\n\r\n<html>headheadbody style3D"word-wrap: break-word; -webkit-nbsp-mode:=\r\n space; -webkit-line-break: after-white-space; ">dasdsasda<br><div apple-co=\r\nntent-edited=3D"true">\r\n<span class=3D"Apple-style-span" style=3D"border-collapse: separate; color:=\r\n rgb(0, 0, 0); font-family: Helvetica; font-size: medium; font-style: norma=\r\nl; font-variant: normal; font-weight: normal; letter-spacing: normal; line-=\r\nheight: normal; orphans: 2; text-align: auto; text-indent: 0px; text-transf=\r\norm: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-borde=\r\nr-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-te=\r\nxt-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-tex=\r\nt-stroke-width: 0px; "><span class=3D"Apple-style-span" style=3D"font-famil=\r\ny: Calibri, sans-serif; font-size: 15px; "><span class=3D"Apple-style-span"=\r\n style=3D"border-collapse: separate; color: rgb(0, 0, 0); font-family: Helv=\r\netica; font-size: medium; font-style: normal; font-variant: normal; font-we=\r\night: normal; letter-spacing: normal; line-height: normal; orphans: 2; text=\r\n-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-sp=\r\nacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical=\r\n-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-=\r\nadjust: auto; -webkit-text-stroke-width: 0px; "><span class=3D"Apple-style-=\r\nspan" style=3D"font-family: Calibri, sans-serif; font-size: 15px; "><div st=\r\nyle=3D"margin-top: 0cm; margin-right: 0cm; margin-bottom: 0.0001pt; margin-=\r\nleft: 0cm; font-size: 11pt; font-family: Calibri, sans-serif; "><font class=\r\n=3D"Apple-style-span" color=3D"#000080" face=3D"'Times New Roman', serif" s=\r\nize=3D"3"><span class=3D"Apple-style-span" style=3D"font-size: 13px; "><br =\r\nclass=3D"Apple-interchange-newline"><br></span></font></div><div style=3D"m=\r\nargin-top: 0cm; margin-right: 0cm; margin-bottom: 0.0001pt; margin-left: 0c=\r\nm; font-size: 11pt; font-family: Calibri, sans-serif; "><font class=3D"Appl=\r\ne-style-span" color=3D"#000080" face=3D"'Times New Roman', serif" size=3D"3=\r\n"><span class=3D"Apple-style-span" style=3D"font-size: 13px; "><br></span><=\r\n/font></div><div style=3D"margin-top: 0cm; margin-right: 0cm; margin-bottom=\r\n: 0.0001pt; 
margin-left: 0cm; font-size: 11pt; font-family: Calibri, sans-s=\r\nerif; "><span style=3D"font-size: 10pt; font-family: 'Times New Roman', ser=\r\nif; color: navy; ">Med venlig hilsen / Med V=E4nlig H=E4lsning / Best Regar=\r\nds&nbsp;<br>firm<br>Asbj=F8rn Toke Morell... This is the ony relevant from information from the body: 'ndasdsasda\r\n\r\n\r\n\r\nMed venlig hilsen / Med V=E4nlig H=E4lsning / Best Regards\r\r\nAsbj=F8rn Toke Morell' Any ideas?

    Read the article

  • What is the reason for "Transaction context in use by another session"

    - by Shrike
    Hi, I'm looking for a description of the root of this error: "Transaction context in use by another session". I get it sometimes in one of my unittests so I can't provider repro code. But I wonder what is "by design" reason for the error. I've found this post: http://blogs.msdn.com/asiatech/archive/2009/08/10/system-transaction-may-fail-in-multiple-thread-environment.aspx and also that: http://msdn.microsoft.com/en-us/library/ff649002.aspx But I can't understand what "Multiple threads sharing the same transaction in a transaction scope will cause the following exception: 'Transaction context in use by another session.' " means. All words are understandable but not the point. I actually can share a system transaction between threads. And there is even special mechanism for this - DependentTransaction class and Transaction.DependentClone method. I'm trying to reproduce a usecase from the first post: 1. Main thread creates DTC transaction, receives DependentTransaction (created using Transaction.Current.DependentClone on the main thread 2. Child thread 1 enlists in this DTC transaction by creating a transaction scope based on the dependent transaction (passed via constructor) 3. Child thread 1 opens a connection 4. Child thread 2 enlists in DTC transaction by creating a transaction scope based on the dependent transaction (passed via constructor) 5. Child thread 2 opens a connection with such code: using System; using System.Threading; using System.Transactions; using System.Data; using System.Data.SqlClient; public class Program { private static string ConnectionString = "Initial Catalog=DB;Data Source=.;User ID=user;PWD=pwd;"; public static void Main() { int MAX = 100; for(int i =0; i< MAX;i++) { using(var ctx = new TransactionScope()) { var tx = Transaction.Current; // make the transaction distributed using (SqlConnection con1 = new SqlConnection(ConnectionString)) using (SqlConnection con2 = new SqlConnection(ConnectionString)) { con1.Open(); con2.Open(); } showSysTranStatus(); DependentTransaction dtx = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete); Thread t1 = new Thread(o => workCallback(dtx)); Thread t2 = new Thread(o => workCallback(dtx)); t1.Start(); t2.Start(); t1.Join(); t2.Join(); ctx.Complete(); } trace("root transaction completes"); } } private static void workCallback(DependentTransaction dtx) { using(var txScope1 = new TransactionScope(dtx)) { using (SqlConnection con2 = new SqlConnection(ConnectionString)) { con2.Open(); trace("connection opened"); showDbTranStatus(con2); } txScope1.Complete(); } trace("dependant tran completes"); } private static void trace(string msg) { Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " : " + msg); } private static void showSysTranStatus() { string msg; if (Transaction.Current != null) msg = Transaction.Current.TransactionInformation.DistributedIdentifier.ToString(); else msg = "no sys tran"; trace( msg ); } private static void showDbTranStatus(SqlConnection con) { var cmd = con.CreateCommand(); cmd.CommandText = "SELECT 1"; var c = cmd.ExecuteScalar(); trace("@@TRANCOUNT = " + c); } } It fails on Complete's call of root TransactionScope. But error is different: Unhandled Exception: System.Transactions.TransactionInDoubtException: The transaction is in doubt. --- pired. The timeout period elapsed prior to completion of the operation or the server is not responding. To sum up: I want to understand what "Transaction context in use by another session" means and how to reproduce it.

    Read the article

  • java - register problem

    - by Jake
    Hi! When i try to register a person with the name Eric for example, and then again registrating Eric it works. This should not happen with the code i have. Eric should not be registrated if theres already an Eric in the list. Here is my full code: import java.util.*; import se.lth.cs.pt.io.*; class Person { private String name; private String nbr; public Person (String name, String nbr) { this.name = name; this.nbr = nbr; } public String getName() { return name; } public String getNumber() { return nbr; } public String toString() { return name + " : " + nbr; } } class Register { private List<Person> personer; public Register() { personer = new ArrayList<Person>(); } // boolean remove(String name) { // } private Person findName(String name) { for (Person person : personer) { if (person.getName() == name) { return person; } } return null; } private boolean containsName(String name) { return findName(name) != null; } public boolean insert(String name, String nbr) { if (containsName(name)) { return false; } Person person = new Person(name, nbr); personer.add(person); Collections.sort(personer, new A()); return true; } //List<Person> findByPartOfName(String partOfName) { //} //List<Person> findByNumber(String nbr) { //} public List<Person> findAll() { List<Person> copy = new ArrayList<Person>(); for (Person person : personer) { copy.add(person); } return copy; } public void printList(List<Person> personer) { for (Person person : personer) { System.out.println(person.toString()); } } } class A implements Comparator < Person > { @Override public int compare(Person o1, Person o2) { if(o1.getName() != null && o2.getName() != null){ return o1.getName().compareTo(o2.getName()); } return 0; } } class TestScript { public static void main(String[] args) { new TestScript().run(); } void test(String msg, boolean status) { if (status) { System.out.println(msg + " -- ok"); } else { System.out.printf("==== FEL: %s ====\n", msg); } } void run() { Register register = new Register(); System.out.println("Vad vill du göra:"); System.out.println("1. Lägg in ny person."); System.out.println("2. Tag bort person."); System.out.println("3. Sök på del av namn."); System.out.println("4. Se vem som har givet nummer."); System.out.println("5. Skriv ut alla personer."); System.out.println("0. Avsluta."); int cmd = Keyboard.nextInt("Ange kommando (0-5): "); if (cmd == 0 ) { } else if (cmd == 1) { String name = Keyboard.nextLine("Namn: "); String nbr = Keyboard.nextLine("Nummer: "); System.out.println("\n"); String inlagd = "OK - " + name + " är nu inlagd."; String ejinlagd = name + " är redan inlagd."; test("Skapar nytt konto", register.insert(name, nbr) == true); System.out.println("\n"); } else if (cmd == 2) { } else if (cmd == 3) { } else if (cmd == 4) { } else if (cmd == 5) { System.out.println("\n"); register.printList(register.findAll()); System.out.println("\n"); } else { System.out.println("Inget giltigt kommando!"); System.out.println("\n"); } } }
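    One detail worth calling out in the code above: findName compares the names with ==, which tests object identity, so a name typed in a second time is a different String object and the duplicate check never matches. A minimal sketch of the same method using equals (a drop-in for the Register class above):

        private Person findName(String name) {
            for (Person person : personer) {
                // equals() compares the characters; == only matches the very same String object
                if (person.getName().equals(name)) {
                    return person;
                }
            }
            return null;
        }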

    Read the article

  • Spring Security 3.1 xsd and jars mismatch issue

    - by kmansoor
    I'm Trying to migrate from spring framework 3.0.5 to 3.1 and spring-security 3.0.5 to 3.1 (not to mention hibernate 3.6 to 4.1). Using Apache IVY. I'm getting the following error trying to start Tomcat 7.23 within Eclipse Helios (among a host of others, however this is the last in the console): org.springframework.beans.factory.BeanDefinitionStoreException: Line 7 in XML document from ServletContext resource [/WEB-INF/focus-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null". org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null". my security config file looks like this: <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:jdbc="http://www.springframework.org/schema/jdbc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd"> Ivy.xml looks like this: <dependencies> <dependency org="org.hibernate" name="hibernate-core" rev="4.1.7.Final"/> <dependency org="org.hibernate" name="com.springsource.org.hibernate.validator" rev="4.2.0.Final" /> <dependency org="org.hibernate.javax.persistence" name="hibernate-jpa-2.0-api" rev="1.0.1.Final"/> <dependency org="org.hibernate" name="hibernate-entitymanager" rev="4.1.7.Final"/> <dependency org="org.hibernate" name="hibernate-validator" rev="4.3.0.Final"/> <dependency org="org.springframework" name="spring-context" rev="3.1.2.RELEASE"/> <dependency org="org.springframework" name="spring-web" rev="3.1.2.RELEASE"/> <dependency org="org.springframework" name="spring-tx" rev="3.1.2.RELEASE"/> <dependency org="org.springframework" name="spring-webmvc" rev="3.1.2.RELEASE"/> <dependency org="org.springframework" name="spring-test" rev="3.1.2.RELEASE"/> <dependency org="org.springframework.security" name="spring-security-core" rev="3.1.2.RELEASE"/> <dependency org="org.springframework.security" name="spring-security-web" rev="3.1.2.RELEASE"/> <dependency org="org.springframework.security" name="spring-security-config" rev="3.1.2.RELEASE"/> <dependency org="org.springframework.security" name="spring-security-taglibs" rev="3.1.2.RELEASE"/> <dependency org="net.sf.dozer" name="dozer" rev="5.3.2"/> <dependency org="org.apache.poi" name="poi" rev="3.8"/> <dependency org="commons-io" name="commons-io" rev="2.4"/> <dependency org="org.slf4j" name="slf4j-api" rev="1.6.6"/> <dependency org="org.slf4j" name="slf4j-log4j12" rev="1.6.6"/> <dependency org="org.slf4j" name="slf4j-ext" rev="1.6.6"/> <dependency org="log4j" name="log4j" rev="1.2.17"/> <dependency org="org.testng" name="testng" rev="6.8"/> <dependency org="org.dbunit" name="dbunit" rev="2.4.8"/> <dependency org="org.easymock" name="easymock" rev="3.1"/> </dependencies> I understand (hope) this error is due to a mismatch between the declared xsd and the jars on the classpath. Any pointers will be greatly appreciated.

    Read the article

  • Spring, Hibernate, Blob lazy loading

    - by Alexey Khudyakov
    Dear Sirs, I need help with lazy blob loading in Hibernate. I have in my web application these servers and frameworks: MySQL, Tomcat, Spring and Hibernate. The part of database config. <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="user" value="${jdbc.username}"/> <property name="password" value="${jdbc.password}"/> <property name="driverClass" value="${jdbc.driverClassName}"/> <property name="jdbcUrl" value="${jdbc.url}"/> <property name="initialPoolSize"> <value>${jdbc.initialPoolSize}</value> </property> <property name="minPoolSize"> <value>${jdbc.minPoolSize}</value> </property> <property name="maxPoolSize"> <value>${jdbc.maxPoolSize}</value> </property> <property name="acquireRetryAttempts"> <value>${jdbc.acquireRetryAttempts}</value> </property> <property name="acquireIncrement"> <value>${jdbc.acquireIncrement}</value> </property> <property name="idleConnectionTestPeriod"> <value>${jdbc.idleConnectionTestPeriod}</value> </property> <property name="maxIdleTime"> <value>${jdbc.maxIdleTime}</value> </property> <property name="maxConnectionAge"> <value>${jdbc.maxConnectionAge}</value> </property> <property name="preferredTestQuery"> <value>${jdbc.preferredTestQuery}</value> </property> <property name="testConnectionOnCheckin"> <value>${jdbc.testConnectionOnCheckin}</value> </property> </bean> <bean id="lobHandler" class="org.springframework.jdbc.support.lob.DefaultLobHandler" /> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="/WEB-INF/hibernate.cfg.xml" /> <property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration" /> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${hibernate.dialect}</prop> </props> </property> <property name="lobHandler" ref="lobHandler" /> </bean> <tx:annotation-driven transaction-manager="txManager" /> <bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory" /> </bean> The part of entity class @Lob @Basic(fetch=FetchType.LAZY) @Column(name = "BlobField", columnDefinition = "LONGBLOB") @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType") private byte[] blobField; The problem description. I'm trying to display on a web page database records related to files, which was saved in MySQL database. All works fine if a volume of data is small. But the volume of data is big I'm recieving an error "java.lang.OutOfMemoryError: Java heap space" I've tried to write in blobFields null values on each row of table. In this case, application works fine, memory doesn't go out of. I have a conclusion that the blob field which is marked as lazy (@Basic(fetch=FetchType.LAZY)) isn't lazy, actually! How can I solve the issie? Many thanks for advance.
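    Purely as a sketch (the class and field names are illustrative, not taken from the project above): one alternative that is often tried is mapping the column as java.sql.Blob instead of byte[], so the entity holds a locator and the bytes are only pulled when the stream is read. Note that FetchType.LAZY on a @Basic property is only a hint to the provider, and whether the data is really streamed lazily also depends on the JDBC driver in use:

        import java.sql.Blob;
        import javax.persistence.Basic;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Lob;

        @Entity
        public class StoredFileSketch {

            @Id
            @GeneratedValue
            private Long id;

            // Locator-style mapping: the byte[] is not materialized when the entity loads.
            @Lob
            @Basic(fetch = FetchType.LAZY)
            @Column(name = "BlobField", columnDefinition = "LONGBLOB")
            private Blob blobField;

            // Hypothetical read side, inside an open session/transaction:
            //   InputStream in = entity.getBlobField().getBinaryStream();
        }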

    Read the article

  • Fluent NHibernate Many to one mapping

    - by Jit
    I am creating a NHibenate application with one to many relationship. Like City and State data. City table CREATE TABLE [dbo].[State]( [StateId] [varchar](2) NOT NULL primary key, [StateName] [varchar](20) NULL) CREATE TABLE [dbo].[City]( [Id] [int] primary key IDENTITY(1,1) NOT NULL , [State_id] [varchar](2) NULL refrences State(StateId), [CityName] [varchar](50) NULL) My mapping is follows public CityMapping() { Id(x = x.Id); Map(x = x.State_id); Map(x = x.CityName); HasMany(x = x.EmployeePreferedLocations) .Inverse() .Cascade.SaveUpdate() ; References(x = x.State) //.Cascade.All(); //.Class(typeof(State)) //.Not.Nullable() .Cascade.None() .Column("State_id") ; } public StateMapping() { Id(x => x.StateId) .GeneratedBy.Assigned(); Map(x => x.StateName); HasMany(x => x.Jobs) .Inverse(); //.Cascade.SaveUpdate(); HasMany(x => x.EmployeePreferedLocations) .Inverse(); HasMany(x => x.Cities) // .Inverse() .Cascade.SaveUpdate() //.Not.LazyLoad() ; } Models are as follows: [Serializable] public partial class City { public virtual System.String CityName { get; set; } public virtual System.Int32 Id { get; set; } public virtual System.String State_id { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual JobPortal.Data.Domain.Model.State State { get; set; } public City(){} } public partial class State { public virtual System.String StateId { get; set; } public virtual System.String StateName { get; set; } public virtual IList<City> Cities { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual IList<Job> Jobs { get; set; } public State() { Cities = new List<City>(); EmployeePreferedLocations = new List<EmployeePreferedLocation>(); Jobs = new List<Job>(); } //public virtual void AddCity(City city) //{ // city.State = this; // Cities.Add(city); //} } My Unit Testing code is below. City city = new City(); IRepository<State> rState = new Repository<State>(); Dictionary<string, string> critetia = new Dictionary<string, string>(); critetia.Add("StateId", "TX"); State frState = rState.GetByCriteria(critetia); city.CityName = "Waco"; city.State = frState; IRepository<City> rCity = new Repository<City>(); rCity.SaveOrUpdate(city); City frCity = rCity.GetById(city.Id); The problem is , I am not able to insert record. The error is below. "Invalid index 2 for this SqlParameterCollection with Count=2." But the error will not come if I comment State_id mapping field in the CityMapping file. I donot know what mistake is I did. If do not give the mapping Map(x = x.State_id); the value of this field is null, which is desired. Please help me how to solve this issue.

    Read the article

  • Confusion testing fftw3 - poisson equation 2d test

    - by user3699736
    I am having trouble explaining/understanding the following phenomenon: To test fftw3 i am using the 2d poisson test case: laplacian(f(x,y)) = - g(x,y) with periodic boundary conditions. After applying the fourier transform to the equation we obtain : F(kx,ky) = G(kx,ky) /(kx² + ky²) (1) if i take g(x,y) = sin (x) + sin(y) , (x,y) \in [0,2 \pi] i have immediately f(x,y) = g(x,y) which is what i am trying to obtain with the fft : i compute G from g with a forward Fourier transform From this i can compute the Fourier transform of f with (1). Finally, i compute f with the backward Fourier transform (without forgetting to normalize by 1/(nx*ny)). In practice, the results are pretty bad? (For instance, the amplitude for N = 256 is twice the amplitude obtained with N = 512) Even worse, if i try g(x,y) = sin(x)*sin(y) , the curve has not even the same form of the solution. (note that i must change the equation; i divide by two the laplacian in this case : (1) becomes F(kx,ky) = 2*G(kx,ky)/(kx²+ky²) Here is the code: /* * fftw test -- double precision */ #include <iostream> #include <stdio.h> #include <stdlib.h> #include <math.h> #include <fftw3.h> using namespace std; int main() { int N = 128; int i, j ; double pi = 3.14159265359; double *X, *Y ; X = (double*) malloc(N*sizeof(double)); Y = (double*) malloc(N*sizeof(double)); fftw_complex *out1, *in2, *out2, *in1; fftw_plan p1, p2; double L = 2.*pi; double dx = L/((N - 1)*1.0); in1 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N) ); out2 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N) ); out1 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N) ); in2 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N) ); p1 = fftw_plan_dft_2d(N, N, in1, out1, FFTW_FORWARD,FFTW_MEASURE ); p2 = fftw_plan_dft_2d(N, N, in2, out2, FFTW_BACKWARD,FFTW_MEASURE); for(i = 0; i < N; i++){ X[i] = -pi + (i*1.0)*2.*pi/((N - 1)*1.0) ; for(j = 0; j < N; j++){ Y[j] = -pi + (j*1.0)*2.*pi/((N - 1)*1.0) ; in1[i*N + j][0] = sin(X[i]) + sin(Y[j]) ; // row major ordering //in1[i*N + j][0] = sin(X[i]) * sin(Y[j]) ; // 2nd test case in1[i*N + j][1] = 0 ; } } fftw_execute(p1); // FFT forward for ( i = 0; i < N; i++){ // f = g / ( kx² + ky² ) for( j = 0; j < N; j++){ in2[i*N + j][0] = out1[i*N + j][0]/ (i*i+j*j+1e-16); in2[i*N + j][1] = out1[i*N + j][1]/ (i*i+j*j+1e-16); //in2[i*N + j][0] = 2*out1[i*N + j][0]/ (i*i+j*j+1e-16); // 2nd test case //in2[i*N + j][1] = 2*out1[i*N + j][1]/ (i*i+j*j+1e-16); } } fftw_execute(p2); //FFT backward // checking the results computed double erl1 = 0.; for ( i = 0; i < N; i++) { for( j = 0; j < N; j++){ erl1 += fabs( in1[i*N + j][0] - out2[i*N + j][0]/N/N )*dx*dx; cout<< i <<" "<< j<<" "<< sin(X[i])+sin(Y[j])<<" "<< out2[i*N+j][0]/N/N <<" "<< endl; // > output } } cout<< erl1 << endl ; // L1 error fftw_destroy_plan(p1); fftw_destroy_plan(p2); fftw_free(out1); fftw_free(out2); fftw_free(in1); fftw_free(in2); return 0; } I can't find any (more) mistakes in my code (i installed the fftw3 library last week) and i don't see a problem with the maths either but i don't think it's the fft's fault. Hence my predicament. I am all out of ideas and all out of google as well. Any help solving this puzzle would be greatly appreciated. note : compiling : g++ test.cpp -lfftw3 -lm executing : ./a.out output and i use gnuplot in order to plot the curves : (in gnuplot ) splot "output" u 1:2:4 ( for the computed solution )
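    For reference, a detail that equation (1) quietly relies on and that the division loop has to respect: with an N-point DFT the array index i does not equal the wavenumber for the upper half of the spectrum (FFTW stores the negative frequencies there), and the (0,0) mode has to be handled separately because the mean of f is not fixed by the Poisson equation. Written out:

        \[
        k_i = \begin{cases} i, & 0 \le i \le N/2 \\ i - N, & N/2 < i < N \end{cases}
        \qquad
        F(k_x,k_y) = \frac{G(k_x,k_y)}{k_x^2 + k_y^2}, \quad (k_x,k_y) \ne (0,0), \qquad F(0,0) = 0 .
        \]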

    Read the article

  • Issue with 'selected' value within form

    - by JM4
    I currently have a form built in which after validation, if errors exist, the data stays on screen for the consumer to correct. An example of how this works for say the 'Year of Birth' is: <select name="DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> If an error is found, the year of birth value returns the year previously entered. I am able to have this work on all field with the exception of my 'State' field. I build the array and function for the drop down with the following code: <?php $states_arr = array('AL'=>"Alabama",'AK'=>"Alaska",'AZ'=>"Arizona",'AR'=>"Arkansas",'CA'=>"California",'CO'=>"Colorado",'CT'=>"Connecticut",'DE'=>"Delaware",'DC'=>"District Of Columbia",'FL'=>"Florida",'GA'=>"Georgia",'HI'=>"Hawaii",'ID'=>"Idaho",'IL'=>"Illinois", 'IN'=>"Indiana", 'IA'=>"Iowa", 'KS'=>"Kansas",'KY'=>"Kentucky",'LA'=>"Louisiana",'ME'=>"Maine",'MD'=>"Maryland", 'MA'=>"Massachusetts",'MI'=>"Michigan",'MN'=>"Minnesota",'MS'=>"Mississippi",'MO'=>"Missouri",'MT'=>"Montana",'NE'=>"Nebraska",'NV'=>"Nevada",'NH'=>"New Hampshire",'NJ'=>"New Jersey",'NM'=>"New Mexico",'NY'=>"New York",'NC'=>"North Carolina",'ND'=>"North Dakota",'OH'=>"Ohio",'OK'=>"Oklahoma", 'OR'=>"Oregon",'PA'=>"Pennsylvania",'RI'=>"Rhode Island",'SC'=>"South Carolina",'SD'=>"South Dakota",'TN'=>"Tennessee",'TX'=>"Texas",'UT'=>"Utah",'VT'=>"Vermont",'VA'=>"Virginia",'WA'=>"Washington",'WV'=>"West Virginia",'WI'=>"Wisconsin",'WY'=>"Wyoming"); function showOptionsDrop($array, $active, $echo=true){ $string = ''; foreach($array as $k => $v){ $s = ($active == $k)? ' selected="selected"' : ''; $string .= '<option value="'.$k.'"'.$s.'>'.$v.'</option>'."\n"; } if($echo) { echo $string;} else { return $string;} } ?> I then call the function from within the form using: <td><select name="State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> Not sure what I'm missing but would love any assistance if somebody sees the error in my code. Thanks!

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99  | Next Page >