Search Results

Search found 1864 results on 75 pages for 'dump'.


  • Windows 8.1 IRQL_NOT_LESS_OR_EQUAL with Asus PCE-n53

    - by JArsenault89
    I saw the following question, and it is the exact same problem on my machine; I have tracked it to the ASUS PCE-N53 wireless card in my desktop. Does anyone know of a workaround? Windows 8.1 RTM installation crashes. The adapter worked fine in Windows 8... any ideas?
    EDIT: Crash Dump Analysis

      * Bugcheck Analysis *
      IRQL_NOT_LESS_OR_EQUAL (a)
      An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace.
      Arguments:
      Arg1: 0000000000000000, memory referenced
      Arg2: 0000000000000002, IRQL
      Arg3: 0000000000000001, bitfield:
        bit 0 : value 0 = read operation, 1 = write operation
        bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
      Arg4: fffff801ef4f1316, address which referenced memory

      Debugging Details:
      WRITE_ADDRESS: 0000000000000000
      CURRENT_IRQL: 2
      FAULTING_IP: nt!KeReleaseSpinLock+16 fffff801`ef4f1316 f048832100 lock and qword ptr [rcx],0
      DEFAULT_BUCKET_ID: WIN8_DRIVER_FAULT
      BUGCHECK_STR: AV
      PROCESS_NAME: System
      ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre
      TRAP_FRAME: ffffd00020d45550 -- (.trap 0xffffd00020d45550)
      NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect.
      rax=0000000000000001 rbx=0000000000000000 rcx=0000000000000000
      rdx=0000000055920200 rsi=0000000000000000 rdi=0000000000000000
      rip=fffff801ef4f1316 rsp=ffffd00020d456e0 rbp=ffffd00020d45768
      r8=0000000055920222  r9=0000000035930000  r10=0000000055920222
      r11=ffffd00020d456a8 r12=0000000000000000 r13=0000000000000000
      r14=0000000000000000 r15=0000000000000000
      iopl=0 nv up ei pl zr na po nc
      nt!KeReleaseSpinLock+0x16: fffff801ef4f1316 f048832100 lock and qword ptr [rcx],0 ds:0000000000000000=????????????????
      Resetting default scope
      LOCK_ADDRESS: fffff801ef6da360 -- (!locks fffff801ef6da360)
      Resource @ nt!PiEngineLock (0xfffff801ef6da360) Exclusively owned
      Contention Count = 6
      Threads: ffffe000010ff040-01<*
      1 total locks, 1 locks currently held
      PNP_TRIAGE:
        Lock address : 0xfffff801ef6da360
        Thread Count : 1
        Thread address: 0xffffe000010ff040
        Thread wait : 0x1fbe
      LAST_CONTROL_TRANSFER: from fffff801ef5647e9 to fffff801ef558ca0
      STACK_TEXT:
      ffffd00020d45408 fffff801ef5647e9 : 000000000000000a 0000000000000000 0000000000000002 0000000000000001 : nt!KeBugCheckEx
      ffffd00020d45410 fffff801ef56303a : 0000000000000001 0000000000000000 ffff0c83e3e25300 ffffd00020d45550 : nt!KiBugCheckDispatch+0x69
      ffffd00020d45550 fffff801ef4f1316 : 00000000000a5890 0000000000000001 0000000000000000 ffffe00004c00000 : nt!KiPageFault+0x23a
      ffffd00020d456e0 fffff80003b430ad : 00000000000afe80 ffffe00004c00000 00000000000a2f80 0000000035720000 : nt!KeReleaseSpinLock+0x16
      ffffd00020d45710 fffff80003ac249f : ffffe00004c00000 00000000000000a8 ffffe00004c85050 0000000000000800 : netr28x+0x840ad
      ffffd00020d457b0 fffff80000b76475 : ffffd00020d459e8 ffffd00020d459f0 ffffe00004ac2006 ffffe00004ac21a0 : netr28x+0x349f
      ffffd00020d459a0 fffff80000baa248 : ffffe00004ac2eb8 0000000000000000 ffffe00000000000 ffffe00004ac21a0 : ndis!ndisMInvokeInitialize+0x39
      ffffd00020d459e0 fffff80000b74784 : 0000000000000050 ffffe00004907ba0 0000000000000000 01cecbbc328e6cde : ndis!ndisMInitializeAdapter+0x4dc
      ffffd00020d46050 fffff80000b74d3d : 0000000000000050 ffffe0000443e770 ffffc00000951480 ffffe00004ac21a0 : ndis!ndisInitializeAdapter+0x60
      ffffd00020d460a0 fffff80000b74c14 : ffffe00004ac21a0 ffffe00004ac2050 ffffe000047ec2a0 0000000000000000 : ndis!ndisPnPStartDevice+0x89
      ffffd00020d460f0 fffff80000b87695 : ffffe00004ac21a0 ffffe00004ac21a0 ffffd00020d461b0 ffffe000047ec2a0 : ndis!ndisStartDeviceSynchronous+0x58
      ffffd00020d46140 fffff80000b6a760 : ffffe000047ec2a0 ffffe00004ac21a0 0000000000000000 0000000000000000 : ndis!ndisPnPIrpStartDevice+0x13471
      ffffd00020d46170 fffff8000032576c : ffffe00004b11501 ffffe00004b11570 0000000000000001 fffff80000325880 : ndis!ndisPnPDispatch+0x140
      ffffd00020d461e0 fffff8000030b40a : ffffe000047ec2a0 0000000000000106 ffffd00020d462f0 ffffe00004b116c0 : Wdf01000!FxPkgFdo::PnpSendStartDeviceDownTheStackOverload+0xe8
      ffffd00020d46250 fffff80000305942 : 0000000000000106 ffffd00020d462f0 0000000000000105 ffffd00020d464d0 : Wdf01000!FxPkgPnp::PnpEventInitStarting+0xa
      ffffd00020d46280 fffff80000305a5a : ffffe00004b116c8 0000000000000002 ffffe00004b11570 ffffe00004b11600 : Wdf01000!FxPkgPnp::PnpEnterNewState+0x102
      ffffd00020d46310 fffff80000305bc4 : 0000000000000000 ffffd00020d46400 ffffe00004b116a0 0000000000000000 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0xc2
      ffffd00020d46390 fffff8000030c27a : 0000000000000000 ffffe00004b11570 0000000000000000 ffffe00004b11570 : Wdf01000!FxPkgPnp::PnpProcessEvent+0xe4
      ffffd00020d46430 fffff80000300936 : ffffe00004b11570 ffffd00020d464c0 0000000000000000 ffffe00004a0e630 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e
      ffffd00020d46460 fffff800002fba18 : ffffe000047ec2a0 ffffe000047ec2a0 0000000000000000 ffffe0000486f020 : Wdf01000!FxPkgPnp::Dispatch+0xd2
      ffffd00020d464d0 fffff801ef838796 : 0000000000000000 fffff801ef6aa101 0000000000000000 ffffd000208aa180 : Wdf01000!FxDevice::DispatchWithLock+0x7d8
      ffffd00020d465b0 fffff801ef4d5bad : ffffe000011dc3a0 ffffd00020d46659 0000000000000000 fffff801ef7f5ba4 : nt!PnpAsynchronousCall+0x102
      ffffd00020d465f0 fffff801ef838e57 : ffffe000011db8d0 ffffe000011db8d0 ffffe00004a8d060 ffffc00002b11200 : nt!PnpStartDevice+0xc5
      ffffd00020d466c0 fffff801ef838fe7 : ffffe000011db8d0 ffffe000011db8d0 0000000000000000 ffffe000011db8d0 : nt!PnpStartDeviceNode+0x147
      ffffd00020d46790 fffff801ef7fd19e : ffffe000011db8d0 0000000000000001 0000000000000001 ffffe00000000001 : nt!PipProcessStartPhase1+0x53
      ffffd00020d467d0 fffff801ef897b17 : ffffe000011db8d0 0000000000000001 0000000000000000 fffff801ef7ef7b2 : nt!PipProcessDevNodeTree+0x3ce
      ffffd00020d46a50 fffff801ef4f5033 : 0000000100000003 0000000000000000 0000000000000000 0000000000000000 : nt!PiRestartDevice+0xaf
      ffffd00020d46aa0 fffff801ef44565d : fffff801ef4f4c90 ffffd00020d46bd0 0000000000000000 ffffe00004a10170 : nt!PnpDeviceActionWorker+0x3a3
      ffffd00020d46b50 fffff801ef4eec80 : 0000000000000000 ffffe000010ff040 ffffe000010ff040 ffffe0000035c900 : nt!ExpWorkerThread+0x2b5
      ffffd00020d46c00 fffff801ef55f2c6 : ffffd00020472180 ffffe000010ff040 ffffe00000608040 ffffc00000002710 : nt!PspSystemThreadStartup+0x58
      ffffd00020d46c60 0000000000000000 : ffffd00020d47000 ffffd00020d41000 0000000000000000 0000000000000000 : nt!KiStartSystemThread+0x16
      STACK_COMMAND: kb
      FOLLOWUP_IP: netr28x+840ad fffff800`03b430ad 4533e4 xor r12d,r12d
      SYMBOL_STACK_INDEX: 4
      SYMBOL_NAME: netr28x+840ad
      FOLLOWUP_NAME: MachineOwner
      MODULE_NAME: netr28x
      IMAGE_NAME: netr28x.sys
      DEBUG_FLR_IMAGE_TIMESTAMP: 51de7a8d
      FAILURE_BUCKET_ID: AV_netr28x+840ad
      BUCKET_ID: AV_netr28x+840ad
      ANALYSIS_SOURCE: KM
      FAILURE_ID_HASH_STRING: km:av_netr28x+840ad
      FAILURE_ID_HASH: {a1f86ced-f566-ac23-afeb-1aa88ea5ab8f}
      Followup: MachineOwner
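
    The IMAGE_NAME above, netr28x.sys, appears to be the Ralink RT2800-family wireless driver that ASUS ships for the PCE-N53, which is why updating, rolling back, or replacing that driver is the usual workaround. As a minimal sketch of how to confirm the faulting module from a crash dump (the dump path is an assumption; point it at your own .dmp file), the Debugging Tools for Windows can be scripted:

      :: open the crash dump, run the analysis, and show the driver's version info
      kd -z C:\Windows\MEMORY.DMP -c "!analyze -v; lmvm netr28x; q"

    The lmvm output includes the driver's build timestamp, which is handy when checking whether ASUS or Ralink has published a newer build than the one that crashed.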

  • Ubuntu server 10.04 doesn't boot into installed Gnome desktop automatically

    - by Tong Wang
    I've installed Ubuntu server 10.04 and then installed the Gnome desktop on top of it. Because I am new to Linux and its command line, I need the GUI desktop to help me get around. However, the problem is that the server doesn't boot into the GUI desktop when powered on. It boots into a shell like this:

      Gave up waiting for root device. Common problems:
       - Boot args (cat /proc/cmdline)
         - Check rootdelay= (did the system wait long enough?)
         - Check root= (did the system wait for the right device?)
       - Missing modules (cat /proc/modules; ls /dev)
      ALERT! /dev/mapper/cecdata-root does not exist. Dropping to a shell!
      BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
      Enter 'help' for a list of built-in commands.
      (initramfs)

    Result of cat /proc/cmdline:

      BOOT_IMAGE=/vmlinuz-2.6.32-28-server root=/dev/mapper/cecdata-root ro quiet

    Then I have to type "exit" to leave the shell, and it boots into Gnome. Any idea what's wrong?
    Edit: adding output for the following commands.

      wt@cecdata:~$ ls /dev/mapper/
      cecdata-root  cecdata-swap_1  control
      wt@cecdata:~$ fdisk -l
      wt@cecdata:~$
      wt@cecdata:~$ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid -o value -s UUID' to print the universally unique identifier
      # for a device; this may be used with UUID= as a more robust way to name
      # devices that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      /dev/mapper/cecdata-root / ext4 errors=remount-ro 0 1
      # /boot was on /dev/sda1 during installation
      UUID=1635be41-d025-405e-b4a3-6f0abedb7aab /boot ext2 defaults 0 2
      /dev/mapper/cecdata-swap_1 none swap sw 0 0
      wt@cecdata:~$

    Adding output for lsmod:

      wt@cecdata:~$ lsmod
      Module                 Size  Used by
      fbcon                 39270  71
      tileblit               2487  1 fbcon
      font                   8053  1 fbcon
      bitblit                5811  1 fbcon
      softcursor             1565  1 bitblit
      dell_wmi               2177  0
      dcdbas                 6918  0
      vga16fb               12757  1
      vgastate               9857  1 vga16fb
      psmouse               64576  0
      serio_raw              4950  0
      power_meter            9473  0
      bnx2                  72874  0
      lp                     9336  0
      parport               37160  1 lp
      mptsas                50592  2
      usbhid                41116  0
      mptscsih              37167  1 mptsas
      hid                   83568  1 usbhid
      mptbase               91674  2 mptsas,mptscsih
      scsi_transport_sas    33021  1 mptsas
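
    "Gave up waiting for root device" with an LVM root (/dev/mapper/...) usually means the initramfs scanned for volumes before the disk was ready, or lacks the LVM tools. A sketch of the usual remedies, assuming the default Ubuntu 10.04 GRUB 2 locations (the 60-second value is an arbitrary example):

      # 1) give the root device more time to appear: add rootdelay=60 to
      #    GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config
      sudo update-grub

      # 2) make sure the LVM tools are present in the initramfs and rebuild it
      sudo apt-get install lvm2
      sudo update-initramfs -u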

  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module... I can't get it to work; any ideas? Here's some of the output from my tinkering:

      # lspci | grep -i net
      00:0a.0 Ethernet controller: nVidia Corporation MCP67 Ethernet (reva2)
      03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)

      # lsusb
      ...
      Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN) Interface [Integrated Module]

      # ping -c 3 www.google.com
      ping: unknown host www.google.com

      # ping -c 3 8.8.8.8
      ping: network is unreachable

      # lspci -v
      03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)
      ...
      Kernel driver in use: ath5k
      Kernel modules: ath5k

      # dmesg | grep ath5k
      registered as phy0
      registered led device
      ath5k: atheros chip found
      PCI INT A disabled
      registered led device
      registered as phy1

      # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
      1: lo
      2: eth1
      3: eth0

      # ip link set <interface> up/down
      RTNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file so I can just copy and paste? Sorry, first time using a Linux distro...
    EDIT: So I just tried this (I actually did it twice; I can't tell which setting is on/off for my wireless adapter, and the lights are blue all the time now):

      # rfkill list
      0: hp-wifi: wireless lan   soft blocked: no   hard blocked: yes
      1: hp-bluetooth: bluetooth soft blocked: no   hard blocked: yes
      3: phy1: wireless lan      soft blocked: no   hard blocked: yes

      # rfkill list
      0: hp-wifi: wireless lan   soft blocked: no   hard blocked: no
      1: hp-bluetooth: bluetooth soft blocked: no   hard blocked: no
      3: phy1: wireless lan      soft blocked: no   hard blocked: yes
      7: hci0: bluetooth
      0: hp-wifi: wireless lan   soft blocked: no   hard blocked: no

    I've dug around some other articles and it seems like ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hard block (by turning it ON) but, as shown above, phy1 wireless LAN is STILL hard-blocked. What gives? Maybe I've made some more fundamental error in a basic config file?
    EDIT: I've fixed the hard block. I've tried pinging www.google.com, but to no avail. I get:

      ping: unknown host www.google.com

    In the Arch wiki: Edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf:

      127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux

    To my understanding, the hostname is just user-specified, based on preference(?). My /etc/rc.conf:

      HOSTNAME="gestalt"

    My /etc/hosts:

      127.0.0.1 localhost.localdomain localhost gestalt

    but should it be the following?

      127.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt
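
    On the embedded question about dumping command output to a text file: standard shell redirection covers this, no extra tools needed. A small sketch:

      # send stdout to a file (use >> instead of > to append)
      dmesg | grep ath5k > ath5k.txt

      # capture stdout and stderr together
      rfkill list > rfkill.txt 2>&1

    The script(1) utility is another option when you want to record an entire interactive session rather than one command.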

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. I'm not a stupid PC user asking you to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run a 'netstat -b' command, and explorer.exe is sending out the traffic. When I kill this process the traffic stops and, obviously, Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

      GNUTELLA CONNECT/0.6
      Listen-IP: x.x.x.x:8059
      Remote-IP: 76.164.224.103
      User-Agent: LimeWire/5.3.6
      X-Requeries: false
      X-Ultrapeer: True
      X-Degree: 32
      X-Query-Routing: 0.1
      X-Ultrapeer-Query-Routing: 0.1
      X-Max-TTL: 3
      X-Dynamic-Querying: 0.1
      X-Locale-Pref: en
      GGEP: 0.5
      Bye-Packet: 0.1

      GNUTELLA/0.6 200 OK
      Pong-Caching: 0.1
      X-Ultrapeer-Needed: false
      Accept-Encoding: deflate
      X-Requeries: false
      X-Locale-Pref: en
      X-Guess: 0.1
      X-Max-TTL: 3
      Vendor-Message: 0.2
      X-Ultrapeer-Query-Routing: 0.1
      X-Query-Routing: 0.1
      Listen-IP: 76.164.224.103:15649
      X-Ext-Probes: 0.1
      Remote-IP: x.x.x.x
      GGEP: 0.5
      X-Dynamic-Querying: 0.1
      X-Degree: 32
      User-Agent: LimeWire/4.18.7
      X-Ultrapeer: True
      X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

      GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc... My question is: what can I do to look into this deeper? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.
    Edit: Had a look at Dependency Walker and Process Explorer. Both great tools. Here is an image of the TCP connections for explorer.exe in Process Explorer: http://img210.imageshack.us/img210/3563/61930284.gif
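
    A Gnutella handshake coming out of explorer.exe usually means a DLL has been injected into, or loaded by, the Explorer process rather than explorer.exe itself being replaced. A quick, non-invasive way to start hunting for the module (a sketch; the module list and port will differ per machine, 8059 is taken from the capture above):

      :: list every DLL loaded into explorer.exe and look for unfamiliar names
      tasklist /m /fi "IMAGENAME eq explorer.exe"

      :: cross-check which process/PID owns the listening Gnutella port
      netstat -ano | findstr 8059

    Comparing the loaded-module list against a known-clean XP machine tends to surface the injected library faster than signature scanning does.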

  • FTP could not connect after applying local DNS (private DNS)

    - by Rahul
    I made a software router in CentOS Linux and, on it, set up a DNS server. I am using CentOS 6.4. For the DNS setup I applied the following steps: I changed the host name to abc.zoom.com and the domain name to zoom.com, then made changes in the named.rfc1912.zones file, renaming named.localhost to "forward" and named.loopback to "reverse". For the forward lookups I changed:

      zone "zoom.com" IN {
          type master;
          file "forward";
          allow-update { none; };

    and for the reverse lookups:

      zone "x.168.192.in-addr.arpa" IN {
          type master;
          file "reverse";
          allow-update { none; };

    and then made changes in the named.conf file:

      options {
          listen-on port 53 { 192.168.x.x; };
          listen-on-v6 port 53 { ::1; };
          directory "/var/named";
          dump-file "/var/named/data/cache_dump.db";
          statistics-file "/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/data/named_mem_stats.txt";
          allow-query { any; };
          recursion yes;

    192.168.x.x is my local DNS address. Then I copied the lookup files into /var/named and edited the file "forward":

      $TTL 1D
      @ IN SOA abc.zoom.com. rahul.abc.zoom.com. (
          0  ; serial
          1D ; refresh
          1H ; retry
          1W ; expire
          3H ) ; minimum
          NS abc.zoom.com.
      abc A 192.168.x.x

    and the file "reverse":

      $TTL 1D
      @ IN SOA abc.zoom.com. rahul.abc.zoom.com. (
          0  ; serial
          1D ; refresh
          1H ; retry
          1W ; expire
          3H ) ; minimum
          NS abc.zoom.com.
      x PTR abc.zoom.com.

    When I put the public IP details on eth0, they were automatically reflected in resolv.conf. When I checked with the dig command, the answer and query counts were all 1. My system is itself a software router; as the gateway for all my local machines I give my system's IP address, so my DNS and gateway IP are the same.
    Now the problem: I gave static IPs to all my local machines. When I point them at the DNS I made, i.e. 192.168.x.x, FTP does not connect in the FileZilla client. E.g.:

      host: pqr.zoom.com   ("zoom.com" is my local domain name)
      username: pqr
      password: pqr

    gives the error:

      Error: Connection timed out
      Error: Could not connect to server

    but if I give the public DNS address it connects. I want to solve this problem; please suggest a solution.
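
    One thing visibly absent from the zone data quoted above: the forward zone only defines an A record for abc, yet FileZilla is asked to resolve pqr.zoom.com, so the local DNS has nothing to answer with and the connection attempt times out. A hedged sketch of the kind of record that would be needed (192.168.x.y is a placeholder for the FTP server's actual address):

      ; in the "forward" zone file: give the FTP host its own A record,
      ; then increment the SOA serial
      pqr    IN    A    192.168.x.y

    After editing, reload and verify from a client:

      # rndc reload zoom.com
      # dig @192.168.x.x pqr.zoom.com A +short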

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.
    Pros:
    - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup.
    - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
    - The backups were normal NTFS filesystems; no special tool was needed to read them.
    - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills).
    - It copied all file attributes, including security.
    Cons:
    - It doesn't deal well with junction points, symbolic links, and hard links.
    - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
    - It didn't have open file handling, except for registry hives.
    I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as presenting much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.
    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would also be considered. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace Cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app, with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss. There are also no errors on any interfaces on our cloud or dedicated servers, or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis, even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.
    Network topology:

      RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
           (web nodes)                               (two-way NAT)       (db hosts running redis)

    Ping from db host to web host:

      64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
      64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
      64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
      --- web1.xxxxxx.com ping statistics ---
      1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
      rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

      64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
      64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
      64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
      --- data1.xxxxxx.com ping statistics ---
      1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
      rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

      daemonize yes
      pidfile /var/run/redis/6379/redis_6379.pid
      port 6379
      timeout 0
      loglevel debug
      logfile /var/lib/redis/log
      syslog-enabled yes
      syslog-ident redis-6379
      syslog-facility local0
      databases 16
      save 900 1
      save 300 10
      save 60 10000
      rdbcompression yes
      dbfilename dump-6379.rdb
      dir /var/lib/redis
      maxclients 10000
      maxmemory-policy volatile-lru
      maxmemory-samples 3
      appendfilename appendonly-6379.aof
      appendfsync everysec
      no-appendfsync-on-rewrite no
      auto-aof-rewrite-percentage 100
      auto-aof-rewrite-min-size 64mb
      slowlog-log-slower-than 10000
      slowlog-max-len 1024
      vm-enabled no
      vm-swap-file /tmp/redis.swap
      vm-max-memory 0
      vm-page-size 32
      vm-pages 134217728
      vm-max-threads 4
      hash-max-zipmap-entries 512
      hash-max-zipmap-value 64
      list-max-ziplist-entries 512
      list-max-ziplist-value 64
      set-max-intset-entries 512
      zset-max-ziplist-entries 128
      zset-max-ziplist-value 64
      activerehashing yes

    redis-cli info:

      redis_version:2.4.16
      redis_git_sha1:00000000
      redis_git_dirty:0
      arch_bits:64
      multiplexing_api:epoll
      gcc_version:4.4.6
      process_id:4174
      uptime_in_seconds:79346
      uptime_in_days:0
      lru_clock:1064644
      used_cpu_sys:13.08
      used_cpu_user:19.81
      used_cpu_sys_children:1.56
      used_cpu_user_children:7.69
      connected_clients:167
      connected_slaves:0
      client_longest_output_list:0
      client_biggest_input_buf:0
      blocked_clients:6
      used_memory:15060312
      used_memory_human:14.36M
      used_memory_rss:22061056
      used_memory_peak:15265928
      used_memory_peak_human:14.56M
      mem_fragmentation_ratio:1.46
      mem_allocator:jemalloc-3.0.0
      loading:0
      aof_enabled:0
      changes_since_last_save:166
      bgsave_in_progress:0
      last_save_time:1352823542
      bgrewriteaof_in_progress:0
      total_connections_received:286
      total_commands_processed:507254
      expired_keys:0
      evicted_keys:0
      keyspace_hits:1509
      keyspace_misses:65167
      pubsub_channels:0
      pubsub_patterns:0
      latest_fork_usec:690
      vm_enabled:0
      role:master
      db0:keys=6,expires=0

    edit 1: added redis-cli info output
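
    Since slowlog-log-slower-than is already set to 10000 microseconds in the config above, one low-effort check (a sketch, run on the db host) is to see whether slow commands line up with the Airbrake timeouts; blocked_clients:6 in the INFO output also hints at clients sitting in blocking calls such as BLPOP:

      redis-cli slowlog get 10    # ten most recent slow commands, with timestamps
      redis-cli slowlog len       # how many slow entries have accumulated
      redis-cli info clients      # watch blocked_clients while the app runs

    If the slowlog stays empty while timeouts occur, that points away from Redis itself and back toward the NAT/firewall path between the web and db tiers.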

  • Recover hard drive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the Blue Screen of Death upon starting) and whose hard drive makes weird cyclical clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged the drive in there. If I run fdisk I get:

      Disk /dev/sdb: 20.0 GB, 20003880960 bytes
      64 heads, 32 sectors/track, 19077 cylinders
      Units = cylinders of 2048 * 512 = 1048576 bytes
      Disk identifier: 0x64651a0a
      Disk /dev/sdb doesn't contain a valid partition table

    Fine, the partition table is messed up. However, if I run testdisk in an attempt to fix the table, it freezes at this point, making the same cyclical clicking noises:

      Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
      Analyse cylinder 158/19077: 00%

    I don't really care about the hard drive working again, just the data, so I ran gpart to figure out where the partitions used to be. I got this:

      dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
      * Warning: strange partition table magic 0x2A55.
      Primary partition(1)
        type: 222(0xDE)(UNKNOWN)
        size: 15mb #s(31429) s(63-31491)
        chs:  (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
        hex:  00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
      Primary partition(2)
        type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
        size: 19021mb #s(38956987) s(31492-38988478)
        chs:  (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
        hex:  80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02

    So I tried to mount just the old NTFS partition, but got an error:

      sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
      NTFS signature is missing.

    Ugh. Okay. But then I tried to get a raw data dump by running:

      dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987

    But the file only got up to 59885568 bytes, with the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there... if I view that 57MB file in TextPad, I can see raw data from files. How can I get my data back? Thanks for any suggestions.
    Solution: I was able to recover about 90% of my data:
    1. Froze the hard drive in the freezer.
    2. Used ddrescue to make a copy of the drive.
    3. Since ddrescue wasn't able to get enough of the drive for testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files.
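
    For reference, the ddrescue step in the solution is typically run in two passes so the tool grabs the easy sectors first and only then retries the bad ones, unlike plain dd, which stops at the first unreadable block. A sketch (device and output paths are assumptions):

      # pass 1: copy everything readable, skip bad areas quickly (-n = no scraping)
      ddrescue -n /dev/sdb /mnt/usb/drive.img /mnt/usb/rescue.log

      # pass 2: go back and retry the bad areas a few times
      ddrescue -r3 /dev/sdb /mnt/usb/drive.img /mnt/usb/rescue.log

    The log file lets the run resume where it left off, which matters on a drive that may die mid-copy.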

  • I am unable to connect to my netbook from any machine on my network until the netbook has pinged it

    - by Samuel Husky
    I have a rather strange issue with my netbook on my local network. When trying to connect to it in any way from a remote system, it cannot be found. However, if I get the netbook to ping the machine trying to connect, it mystically starts to work. Below is the ping test from my main PC to the netbook:

      C:\Users\Sam>ping 192.168.8.102
      Pinging 192.168.8.102 with 32 bytes of data:
      Reply from 192.168.8.100: Destination host unreachable.
      Reply from 192.168.8.100: Destination host unreachable.
      Reply from 192.168.8.100: Destination host unreachable.
      Reply from 192.168.8.100: Destination host unreachable.
      Ping statistics for 192.168.8.102:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Now a ping from the netbook to my main PC:

      sam@malamute ~ $ ping 192.168.8.100
      PING 192.168.8.100 (192.168.8.100) 56(84) bytes of data.
      64 bytes from 192.168.8.100: icmp_req=1 ttl=128 time=2.46 ms
      64 bytes from 192.168.8.100: icmp_req=2 ttl=128 time=0.835 ms
      64 bytes from 192.168.8.100: icmp_req=3 ttl=128 time=1.60 ms
      64 bytes from 192.168.8.100: icmp_req=4 ttl=128 time=1.32 ms
      64 bytes from 192.168.8.100: icmp_req=5 ttl=128 time=1.34 ms
      ^C
      --- 192.168.8.100 ping statistics ---
      5 packets transmitted, 5 received, 0% packet loss, time 4004ms
      rtt min/avg/max/mdev = 0.835/1.514/2.460/0.536 ms

    And the same ping again from the main PC after the netbook has made a connection to it:

      C:\Users\Sam>ping 192.168.8.102
      Pinging 192.168.8.102 with 32 bytes of data:
      Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
      Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
      Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
      Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
      Ping statistics for 192.168.8.102:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 1ms, Maximum = 1ms, Average = 1ms

    The netbook is running Gentoo and is currently connected via wireless. My main PC is running Windows 7; however, I get the same result no matter what PC I use on this network. Please see this example from a CentOS machine on the same network:

      [root@tiger ~]# ping 192.168.8.102
      PING 192.168.8.102 (192.168.8.102) 56(84) bytes of data.
      From 192.168.8.200 icmp_seq=2 Destination Host Unreachable
      From 192.168.8.200 icmp_seq=3 Destination Host Unreachable
      From 192.168.8.200 icmp_seq=4 Destination Host Unreachable
      --- 192.168.8.102 ping statistics ---
      6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5000ms, pipe 3

    If you need any more information, or require logs or config files, please let me know; any assistance is greatly appreciated.
    Additional info:
    - No responses in a TCP dump on the netbook.
    - Same result when booting into Ubuntu from a USB key.
    - No issue when using a wired Ethernet connection.
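
    "Destination host unreachable" reported by the sender's own address is the classic signature of failed ARP resolution, and the fact that traffic flows once the netbook speaks first suggests the wireless side is dropping inbound ARP requests until the netbook has populated the caches itself. A sketch of how this could be confirmed (the MAC address is a placeholder; run from an elevated prompt on the Windows PC):

      :: does an ARP entry for the netbook ever appear on its own?
      arp -a | findstr 192.168.8.102

      :: if not, a temporary static entry that makes the netbook reachable
      :: would confirm ARP (not routing) as the failure point
      arp -s 192.168.8.102 aa-bb-cc-dd-ee-ff

    If the static entry makes connections work immediately, the next place to look is wireless power-saving or AP client-isolation settings eating broadcast ARP frames.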

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10, using s3fs (FUSE over Amazon S3). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

      # /etc/fstab: static file system information.
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      LABEL=uec-rootfs / ext4 defaults 0 0
      /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
      /dev/sda3 none swap sw,comment=cloudconfig 0 0
      s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

      $ sudo umount /mnt/s3/mybucket
      $ sudo mount -a
      $ mountpoint /mnt/s3/mybucket
      /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

      mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

      def remount_s3fs():
          sudo("mount -a")

    Which does:

      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:

      $ mountpoint /mnt/s3/mybucket
      mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint is not connected" error. I've also tried copying and pasting the exact command run into my SSH session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
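
    One hedged line of investigation: when Fabric runs a remote command, the shell session (and any pty) it allocated goes away the moment the command returns, and a FUSE daemon spawned by mount -a inside that session can be killed along with it, leaving the half-established mount in the "Transport endpoint is not connected" state. It may be worth testing whether detaching the daemon from Fabric's session changes the behaviour, e.g. (a sketch using Fabric 1.x's pty keyword):

      # run the mount without a pty and fully detached from the session
      def remount_s3fs():
          sudo("umount /mnt/s3/mybucket || true")
          sudo("nohup mount -a >/dev/null 2>&1", pty=False)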

  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
    POSTMORTEM
    Anticlimax: the only thing that died was my VPN (OpenVPN) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting ntp back up everywhere. If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second he has a solution for phasing the time change over 24 hours using ntpd -x, to avoid the 1-second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem.
    Just today, Sat June 30th, starting soon after the start of the day GMT, we've had a handful of blades in different datacentres, managed by different teams, all go dark: not responding to pings, screen blank. They're all running Debian Squeeze, with everything from the stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510, and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said:

      [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
      [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger, and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash.
    Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres, bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours, running 3.2.21.
    THE WORKAROUND
    Ok people, here's how I worked around it:
    1. Disabled ntp: /etc/init.d/ntp stop
    2. Created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco; see blog posts in comments).
    3. Ran fixtime.pl without an argument to see that there was a leap second set.
    4. Ran fixtime.pl with an argument to remove the leap second.
    NOTE: this depends on adjtimex. I've put a copy of the Squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex; it will run without dependencies on a Squeeze 64-bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously, if you don't have Squeeze 64-bit... find your own.
    I'm going to start ntp again tomorrow. As an anonymous user suggested, an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leap-second counter.
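
    For completeness, the "set the time yourself" alternative mentioned at the end is usually written as a one-liner; a sketch (stop ntpd first so it doesn't immediately re-arm the leap second):

      /etc/init.d/ntp stop
      date -s "$(date)"   # re-setting the clock to its own value clears the pending leap second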

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

      /dev/sda1 - Windows 8 (ntfs)
      /dev/sda2 - Ubuntu / (ext4)
      /dev/sda3 - Ubuntu home (ext4)
      /dev/sda5 - swap
      /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged, exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.
    Edit -- commands output:

      ~~ lsblk
      NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
      sda        8:0    0 465.8G  0 disk
      +-sda1     8:1    0 165.1G  0 part
      +-sda2     8:2    0  21.3G  0 part /
      +-sda3     8:3    0  98.9G  0 part /home
      +-sda4     8:4    0     1K  0 part
      +-sda5     8:5    0   7.8G  0 part [SWAP]
      +-sda6     8:6    0 172.7G  0 part /mnt/shared_data

      ~~ /etc/fstab
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      # /dev/sda2
      UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
      # /dev/sda3
      UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
      # /dev/sda5
      UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
      # /dev/sda6
      UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

      ~~ /etc/mtab
      /dev/sda2 / ext4 rw,errors=remount-ro 0 0
      proc /proc proc rw,noexec,nosuid,nodev 0 0
      sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
      none /sys/fs/fuse/connections fusectl rw 0 0
      none /sys/kernel/debug debugfs rw 0 0
      none /sys/kernel/security securityfs rw 0 0
      udev /dev devtmpfs rw,mode=0755 0 0
      devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
      tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
      none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
      none /run/shm tmpfs rw,nosuid,nodev 0 0
      /dev/sda3 /home ext4 rw 0 0
      /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
      binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
      gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
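
    One explanation consistent with these symptoms, offered as a hedged guess: Windows 8's Fast Startup feature. With it enabled, "shutting down" Windows actually hibernates the kernel session, including cached file-system state for mounted volumes, so changes made from Ubuntu in between can appear invisible (or even get rolled back) on the next Windows boot. Disabling it from an elevated Windows prompt is a quick test:

      :: turning hibernation off also disables Fast Startup
      powercfg /h off

    Alternatively, Fast Startup alone can be unticked under Control Panel > Power Options > "Choose what the power buttons do", which keeps hibernation available.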

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's 'notify me when a contact comes online'), but it pronounces some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

      say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.
    Update - After some research, here's what I've learned:
    - Text-to-speech is split between turning the text into phonemes, after which the phonemes are turned into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code-signed by Apple - changing them may prevent some programs from working. The files are: PrefixDictionary, CartNames, CartLite, SymbolDictionary, Homophones.
    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.
    My guesses are:
    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.
    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

      #!/bin/sh
      echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

      # Apple is weird
      sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

      # Get too much information about what files are being opened
      sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

      # Just fun
      say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
      echo "scale=1000; 4*a(1)" | bc -l | say

  • What is the probable failure - no BSOD, no event log, monitors sleeping, force reboot required

    - by Tyler
    Every 3 to 15 days, my PC freezes. This typically happens when the computer is idle: I'm coming home from work, back from vacation, etc. It has never happened while I was using the computer.
    - The monitors are in power-save mode.
    - The Caps Lock light on the (wireless) keyboard doesn't work.
    - Ctrl-Alt-Del has no effect; the (wireless) mouse has no effect.
    - The hardware reset button and a single press of the power button have no effect.
    - The computer does not appear on the network.
    - No BSOD, no memory dump.
    - Event logs have no errors or indications of problems near the time of the crash; the only messages after reboot indicate that there was a reboot without a clean shutdown.
    - Windows is set to never put the computer to sleep (just the display).
    Here are the vital stats of the build:

      OS:           Windows 8 Pro 64-bit
      CPU:          Intel i5-2400
      Mobo:         Intel BOXDP67DE Micro ATX
      GPU:          MSI N460GTX Cyclone768D5/OC
      RAM:          CORSAIR XMS3 8GB (2 x 4GB) CMX8GX3M2A1333C9
      PSU:          SeaSonic X Series X650 Gold
      System drive: Samsung 840 Pro 256 GB SSD
      Data drive:   2 x Western Digital WD20EARS 2TB in hardware RAID 1
      Optical:      Lite-On DVD burner IHAS424-98

    And here is the story of how the problem developed and what I've done to diagnose it:
    - January 2011: system built with Windows 7 64-bit; runs great.
    - March 2011: Intel replaced the mobo because of the bad SATA controllers.
    - October 2012: upgrade to Windows 8 (problems start shortly after).
    - January 2013: system freezes and causes the network to fail for the whole house. Unplug the network cable and other devices and PCs can use the internet; plug it back in and the internet goes away for everyone. Reboot and everything is fine.
    - March 2013: install Intel Gigabit CT PCI-E NIC, disable mobo NIC in BIOS. Network strangeness goes away; freezes are less frequent. Memtest shows no problems (20 passes).
    - Early June 2013: replace Antec PSU with SeaSonic PSU.
    - Mid June 2013: replace OCZ Vertex 2 SSD with Samsung SSD.
    - Late June 2013: get frustrated and hope the community has some good ideas (I'm running out of budget to replace parts).
    My next plan of attack is setting "Turn off display" to Never and using a screen saver to see how that reacts on the next freeze. It makes me sad to waste power for up to 15 days, though. Has anyone out there seen a problem like this? Any ideas on what kind of malfunction would act this way? Ideas of other diagnostic steps to take?

  • Caching issue with CentOS forwarding DNS server

    - by Paddington
    I installed a forwarding DNS server on CentOS 5.10 and it is resolving addresses, e.g. google.com. When I stopped named (service named stop) and tried to dig (dig @localhost A google.com), there was a failure to resolve the address. I checked and see that the caching daemon nscd is running. Does this mean the server is not caching at all? How can I get it to cache?
    named.conf:

      options {
          // Those options should be used carefully because they disable port
          // randomization
          // query-source port 53;
          // query-source-v6 port 53;

          // Put files that named is allowed to write in the data/ directory:
          listen-on port 53 { 127.0.0.1; 10.0.0.4; };
          directory "/var/named"; // the default
          dump-file "/var/named/chroot/var/named/data/cache_dump.db";
          statistics-file "/var/named/chroot/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/chroot/var/named/data/named_mem_stats.txt";
          // allow-query { localhost; 192.168.0.0/24; 10.0.0.0/8; };
          recursion yes;
          // allow-query { localhost; 10.0.0.0/8; };
          allow-query { localhost; any; };
          allow-query-cache { localhost; any; };
          forward only;
          forwarders { 8.8.8.8; 8.8.4.4; };
          dnssec-enable yes;
          // dnssec-lookaside auto;
          /* Path to ISC DLV key */
          // bindkeys-file "/etc/named.iscdlv.key";
          // managed-keys-directory "/var/named/dynamic";
      };

      logging {
          channel default_debug {
              file "data/named.run";
              severity dynamic;
          };
      };
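
    On the caching question itself: nscd only caches lookups made through glibc's resolver by local processes, while dig talks directly to the DNS server, so with named stopped there is simply nothing listening to answer, cached or not. named does cache upstream answers while it runs; a hedged way to see that cache in action:

      # write the current cache to the dump-file configured above
      rndc dumpdb -cache

      # a repeat query served from cache shows a falling TTL and a fast query time
      dig @localhost google.com A +noall +answer +stats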

  • Setting Up My Server to Do DNS On openSUSE 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server, but I am having trouble getting the domain set up. Here is what I have so far:
    /var/lib/named/master/andydaykin.com:

      $TTL 2d
      @ IN SOA andydaykin.com. root.andydaykin.com. (
          2011011000 ; serial
          0 ; refresh
          0 ; retry
          0 ; expiry
          0 ) ; minimum
      andydaykin.com. IN NS ns1.andydaykin.com.
      andydaykin.com. IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. (
      @.andydaykin.com. IN NS ns1.andydaykin.com.
      ns1.andydaykin.com. IN A 204.12.227.33
      www.andydaykin.com. IN A 204.12.227.33

    /etc/resolv.conf:

      search andydaykin.com
      nameserver 204.12.227.33

    /etc/named.conf:

      options {
          # The directory statement defines the name server's working directory
          directory "/var/lib/named";
          dump-file "/var/log/named_dump.db";
          statistics-file "/var/log/named.stats";
          listen-on port 53 { 127.0.0.1; };
          listen-on-v6 { any; };
          notify no;
          disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
          include "/etc/named.d/forwarders.conf";
      };

      zone "." in {
          type hint;
          file "root.hint";
      };

      zone "localhost" in {
          type master;
          file "localhost.zone";
      };

      zone "0.0.127.in-addr.arpa" in {
          type master;
          file "127.0.0.zone";
      };

      # Include the meta include file generated by createNamedConfInclude.
      # This includes all files as configured in NAMED_CONF_INCLUDE_FILES
      # from /etc/sysconfig/named
      include "/etc/named.conf.include";

      zone "andydaykin.com" in {
          file "master/andydaykin.com";
          type master;
          allow-transfer { any; };
      };

      logging {
          category default { log_syslog; };
          channel log_syslog { syslog; };
      };

    What am I doing wrong?
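
    For reference, the zone file quoted above contains two SOA records, and the second one opens a parenthesis that never closes; BIND will refuse to load a zone like that, since a zone may have exactly one SOA. A hedged sketch of a minimal valid version of the same data (the serial is incremented and the zero timers replaced with conventional placeholder values):

      $TTL 2d
      @       IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. (
                  2011011001 ; serial
                  2d         ; refresh
                  4h         ; retry
                  6w         ; expiry
                  1h )       ; minimum
              IN NS  ns1.andydaykin.com.
      ns1     IN A   204.12.227.33
      www     IN A   204.12.227.33

    Running named-checkzone andydaykin.com /var/lib/named/master/andydaykin.com before reloading is a cheap way to catch this class of error.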

  • HP ProLiant DL380 G3 Running Windows Server 2000 has crashed between 6-7:30am for the past 5 days

    - by user109717
    I have an HP ProLiant DL380 G3 running Windows Server 2000 that has been crashing every day between 6:00 and 7:30 am. This started when I changed out a failing hard drive 6 days ago. I have looked at the scheduled tasks, which have nothing pertaining to this issue. Below are the only things I see in the system log and some of the dump files. Can this be a hardware issue if it happens within a certain time frame every day? Any help is greatly appreciated. Thanks.

      The previous system shutdown at 6:07:55 AM on 2/7/2012 was unexpected.

      System Information Agent: Health: The server is operational again. The server has previously been shutdown by the Automatic Server Recovery (ASR) feature and has just become operational again. [SNMP TRAP: 6025 in CPQHLTH.MIB]

      BugCheck 7A, {3, c0000005, 3400028, 0}
      Probably caused by : memory_corruption ( nt!MiMakeSystemAddressValidPfn+42 )
      Followup: MachineOwner

      0: kd> !analyze -v
      * Bugcheck Analysis *
      KERNEL_DATA_INPAGE_ERROR (7a)
      The requested page of kernel data could not be read in. Typically caused by a bad block in the paging file or disk controller error. Also see KERNEL_STACK_INPAGE_ERROR. If the error status is 0xC000000E, 0xC000009C, 0xC000009D or 0xC0000185, it means the disk subsystem has experienced a failure. If the error status is 0xC000009A, then it means the request failed because a filesystem failed to make forward progress.
      Arguments:
      Arg1: 00000003, lock type that was held (value 1,2,3, or PTE address)
      Arg2: c0000005, error status (normally i/o status code)
      Arg3: 03400028, current process (virtual address for lock type 3, or PTE)
      Arg4: 00000000, virtual address that could not be in-paged (or PTE contents if arg1 is a PTE address)
      MODULE_NAME: nt
      IMAGE_NAME: memory_corruption

      BugCheck A, {0, 2, 1, 804137d6}
      Probably caused by : ntkrnlmp.exe ( nt!CcGetVirtualAddress+ba )
      * Bugcheck Analysis *
      IRQL_NOT_LESS_OR_EQUAL (a)
      An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace.
      Arguments:
      Arg1: 00000000, memory referenced
      Arg2: 00000002, IRQL
      Arg3: 00000001, bitfield:
        bit 0 : value 0 = read operation, 1 = write operation
        bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
      Arg4: 804137d6, address which referenced memory
      MODULE_NAME: nt
      IMAGE_NAME: ntkrnlmp.exe

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a German hoster (virtualized system).

      # uname -a
      Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I want to migrate a web CMS system called Contao. It's not my first migration, but it is my first migration with MySQL connection issues. The migration went successfully; I have the same Contao version running (it's more or less just copy/paste). For the database behind it, I did:

      apt-get install mysql-server phpmyadmin

    I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. Data import via phpMyAdmin worked just fine. I can access the backend of the CMS (which needs to deal with the database already). If I try to access the frontend now, I get the following error:

      Fatal error: Uncaught exception Exception with message Query error: Lost connection to MySQL server during query (<query statement here, nothing special, just a select>) thrown in /var/www/system/libraries/Database.php on line 686

    (Keep in mind: I can access MySQL with phpMyAdmin and through the backend, working like a charm; it's just the frontend call causing errors.) If I spam F5 in my browser I can sometimes even kill the MySQL daemon. If I run

      # mysqld --log-warnings=2

    I get this:

      ...
      120921  7:57:31 [Note] mysqld: ready for connections.
      Version: '5.5.24-0ubuntu0.12.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
      05:57:37 UTC - mysqld got signal 4 ;
      This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=1
      max_threads=151
      thread_count=1
      connection_count=1
      It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      Thread pointer: 0x7f1485db3b20
      Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
      stack_bottom = 7f1480041e60 thread_stack 0x30000
      mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
      mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
      /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
      mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
      mysqld(+0x33116a)[0x7f148397916a]
      mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
      mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
      mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
      mysqld(+0x2f4524)[0x7f148393c524]
      mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
      mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
      mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
      mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
      mysqld(handle_one_connection+0x50)[0x7f14839ec830]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
      /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]
      Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
      Query (7f1464004b60): is an invalid pointer
      Connection ID (thread ID): 1
      Status: NOT_KILLED

    From /var/log/syslog:

      Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
      Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
      Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

    I'm using MyISAM tables all over, nothing with InnoDB. Starting/stopping MySQL is done via:

      sudo service mysql start
      sudo service mysql stop

    After using Google a little bit, I experimented with timeouts and the correct socket path in the /etc/mysql/my.cnf file, but nothing helped. There are some old (from 2008) Gentoo bugs where recompiling just solved the problem. I already reinstalled MySQL via:

      sudo apt-get remove mysql-server mysql-common
      sudo apt-get autoremove
      sudo apt-get install mysql-server

    without any results. This is the first time I'm running into this problem, and I'm not very experienced with this kind of MySQL 'administration'. So mainly, I want to know if anyone of you could help me out, please :) Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious 'use-tcp-connection-instead-of-socket-stuff-because-there-are-problems-on-virtualized-machines-with-sockets' problems? Or am I completely on the wrong track and I just misconfigured something?
    Remember: phpMyAdmin and access to the backend (which uses the database, too) are just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)
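
    A hedged observation on the crash itself: signal 4 is SIGILL (illegal instruction), and the frame just above the crash handler sits in libm. On virtualized hosts this pattern often indicates a binary using CPU instructions the hypervisor does not expose to the guest, rather than a data or configuration problem. Comparing what the guest CPU actually advertises is a cheap first check:

      # which SIMD instruction sets does the guest CPU expose?
      grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E 'sse|avx'

    If the flags look sparse for the VM's claimed CPU model, raising it with the hoster (or trying the Percona/upstream build of the same MySQL version) would be a reasonable next step.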

  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
    Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that would install a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled "Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE", which covers the delicate intricacies of doing this, particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke's post:
    - Searching for "datadude wix" didn't turn it up in the first page of search results, and hence it took me a long time to find it. And I knew that it existed. If someone else were after a post on using WiX with datadude, it's likely that they would never have come across Duke's post, and that would be a great shame because it's the definitive post on the matter.
    - It was written in October 2009 and had not been updated for Visual Studio 2010.
    Well, this blog post is an attempt to solve those problems. Hopefully I've solved the first one just by following a few of my blogging SEO tips while writing this blog post; in the rest of it I will explain how I took Duke's code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010, then you still need to follow Duke's blog post, so go and do that now. Below are the amendments that I made that enabled the project to get built in Visual Studio 2010:
    - In VS2010, datadude's output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke's code was referencing the old file name formats.
    - Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive project references: http://wix.sourceforge.net/manual-wix3/votive_project_references.htm
    - I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that, but it wasn't in the output from my project).
    - Similarly, I commented out the component MicrosoftSqlTypesDbschema in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been, so I am assuming/hoping it isn't needed.
    - Duke's ?define block to work out the appropriate SrcArchPath actually wasn't working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false), so I just took out the conditional stuff and declared the path explicitly to the "Program Files (x86)" path. The old code is still there though, if you need to put it back.
    - None of the <RegistrySearch> stuff is needed for VS2010 - so I commented it all out!
    - Changed to use the /manifest option rather than the /model option on the vsdbcmd.exe command line. Personal preference is all!
    - Added a new component in order to bundle along the vsdbcmd.exe.config file.
    - Made the install of the Custom Action dependent on the relevant feature being selected for install (see the sketch after this list). This one is actually really important: deselecting the database feature for installation does not, by default, stop the CustomAction from executing, and so would cause an error; that scenario needs to be catered for.
    I have made my amended solution available for download at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip - it contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table).
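
    For the feature-dependency amendment above, the usual WiX idiom is to condition the custom action on the feature's requested action state. A hedged sketch (the action and feature names here are invented for illustration; the download has the real ones):

      <!-- only run the deploy action when the Database feature is being installed locally -->
      <InstallExecuteSequence>
        <Custom Action="RunVsdbcmd" After="InstallFiles">
          <![CDATA[&FeatureDatabase=3]]>
        </Custom>
      </InstallExecuteSequence>

    In MSI condition syntax, &FeatureName evaluates to the feature's requested action, and 3 (INSTALLSTATE_LOCAL) means "being installed to run locally", so the action is skipped when the feature is deselected.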
    I have also made the .msi available although, in order that it gets through file blockers, I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_ – simply rename the file back once you have downloaded it from: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_. You can try it out for yourself – the only thing it does is dump the files into %Program Files%\MyDatabase and uses them to install a database onto a server of your choosing with a name of your choosing – no damaging side-effects. I will caveat this by saying "it works on my machine" and, not having access to a plethora of different machines, I haven't tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE, although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it's not possible to bundle the SQL Server CE installer along in the .msi because Windows will not allow you to call one installer from inside another – the recommended way to get around this problem is to build a bootstrapper to bundle the whole lot together, but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. Here are the screenshots from the installer: And once installed…. Hope this is useful! @jamiet
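    To illustrate the feature-gating amendment mentioned in the list above, here is a minimal, hypothetical WiX sketch of conditioning a CustomAction on a feature's install state. The element ids, feature name and vsdbcmd arguments are placeholders invented for illustration, not code from the downloadable solution; the condition syntax ("&DatabaseFeature=3", meaning "this feature is being installed locally") is standard Windows Installer.

        <Feature Id="DatabaseFeature" Title="MyDatabase" Level="1">
          <ComponentRef Id="DatabaseArtifacts" />
        </Feature>

        <!-- Run vsdbcmd.exe after the files have been laid down -->
        <CustomAction Id="RunVsdbcmd"
                      FileKey="VsdbcmdExe"
                      ExeCommand="/a:Deploy /dd+ /manifest:MyDatabase.deploymanifest"
                      Execute="deferred"
                      Return="check" />

        <InstallExecuteSequence>
          <!-- Only execute when DatabaseFeature is selected for local install -->
          <Custom Action="RunVsdbcmd" After="InstallFiles"><![CDATA[&DatabaseFeature=3]]></Custom>
        </InstallExecuteSequence>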

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations. Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging. Some of the data flows, however, are daily batch type transmissions. For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file. I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA – mostly informal stuff like that. And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy. Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).

    One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file. It sounded like a pretty straightforward task. As always, the devil is in the details. Configuring the DB2 connection manager took a bit of trial and error. As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through. That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on.

    Before you get started, you need to figure out how you're going to connect to DB2. From the research I did, it looks like there are a few options. IBM has both an OLE DB and .Net data provider which can be found here. I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect. I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out. The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2. This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here.

    As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it. I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM. However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.

    Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog. A couple of things to note here. First, the Initial catalog needs to be the same as your host name. Not sure why that is, but trust me, it just does. Second, for credentials, in my environment we're using what the client's iSeries people refer to as "profiles". I guess this is similar to SQL auth in the SQL Server world. In other words, they've given me a username and password for connecting to DB2, so I've entered it here. Next, click the Data Links button.
    On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority.

    Next, click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here.

    Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types. I had a few columns that showed all their data as "System.Byte[]". Checking Process binary as character resolved this.

    At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious; this just verifies you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries. (For convenience, a sketch of the equivalent connection string appears below.)

    This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I've started working with DB2 and SSIS. I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
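    For reference, the dialog settings described above roughly correspond to an OLE DB connection string like the sketch below. The property names are those documented for the Microsoft OLE DB Provider for DB2, but the values (host, port, profile) are invented placeholders – the Data Links dialog generates the exact string for you, so treat this purely as a reading aid. Note how Initial Catalog repeats the host name, as discussed above.

        Provider=DB2OLEDB;Network Transport Library=TCPIP;
        Network Address=MYISERIES;Network Port=446;
        Initial Catalog=MYISERIES;Package Collection=QGPL;
        User ID=MYPROFILE;Password=********;
        Process Binary As Character=True;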

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient, the file remains locked after the message has been sent. Oddly this particular issue hasn't cropped up before for me, although these routines are in use in a number of applications I've built.

    The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using Sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient, which holds a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma delimited list for files in an Attachments string property, which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The gist of the code is something like this:

        /// <summary>
        /// Fully self contained mail sending method. Sends an email message by connecting
        /// and disconnecting from the email server.
        /// </summary>
        /// <returns>true or false</returns>
        public bool SendMail()
        {
            if (!this.Connect())
                return false;

            try
            {
                // Create and configure the message
                MailMessage msg = this.GetMessage();
                smtp.Send(msg);
                this.OnSendComplete(this);
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
                if (ex.InnerException != null)
                    msg = ex.InnerException.Message;
                this.SetError(msg);
                this.OnSendError(this);
                return false;
            }
            finally
            {
                // close connection and clear out headers
                // SmtpClient instance nulled out
                this.Close();
            }

            return true;
        }

        /// <summary>
        /// Configures the message interface
        /// </summary>
        protected virtual MailMessage GetMessage()
        {
            MailMessage msg = new MailMessage();

            msg.Body = this.Message;
            msg.Subject = this.Subject;
            msg.From = new MailAddress(this.SenderEmail, this.SenderName);

            if (!string.IsNullOrEmpty(this.ReplyTo))
                msg.ReplyTo = new MailAddress(this.ReplyTo);

            // Send all the different recipients
            this.AssignMailAddresses(msg.To, this.Recipient);
            this.AssignMailAddresses(msg.CC, this.CC);
            this.AssignMailAddresses(msg.Bcc, this.BCC);

            if (!string.IsNullOrEmpty(this.Attachments))
            {
                string[] files = this.Attachments.Split(new char[2] { ',', ';' },
                                                        StringSplitOptions.RemoveEmptyEntries);
                foreach (string file in files)
                {
                    msg.Attachments.Add(new Attachment(file));
                }
            }

            if (this.ContentType.StartsWith("text/html"))
                msg.IsBodyHtml = true;
            else
                msg.IsBodyHtml = false;

            msg.BodyEncoding = this.Encoding;

            // ... additional code omitted

            return msg;
        }

    Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and, in GetMessage(), to the individual MailMessage properties. Specifically, notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it's written, however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly, but the local files are not immediately closed.
    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:

        // Create and configure the message
        using (MailMessage msg = this.GetMessage())
        {
            smtp.Send(msg);
            if (this.SendComplete != null)
                this.OnSendComplete(this);
            // or use an explicit msg.Dispose() here
        }

    The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario, however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic.

    Odd to me too is that SmtpClient() doesn't implement IDisposable – it's only the MailMessage (and Attachments) that implement it and require it to clean up for left over resources like open file handles. This means that you couldn't even use a using() statement around the SmtpClient code to resolve this – instead you'd have to wrap it around the message object, which again is rather unexpected. (An alternative stream-based approach is sketched at the end of this post.)

    Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid this same half an hour of hunting and searching.

    Resources:
    - Full code to SmtpClientNative (West Wind Web Toolkit Repository)
    - SmtpClient Documentation (MSDN)

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET
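    If you would rather not rely on callers remembering to dispose the message, another way to keep the file handle under your own control is to read the file yourself and attach from a stream. The sketch below is my own illustration, not code from the post; the method name and addresses are placeholders. Attachment(Stream, string) is a standard constructor, and SmtpClient only gained IDisposable in .NET 4.0.

        using System.IO;
        using System.Net.Mail;

        // Read the file into memory up front so the on-disk handle is released
        // as soon as File.ReadAllBytes() returns, regardless of disposal order.
        static void SendWithDetachedFile(SmtpClient smtp, string file)
        {
            using (var msg = new MailMessage("sender@example.com", "recipient@example.com",
                                             "Subject", "Body"))
            using (var ms = new MemoryStream(File.ReadAllBytes(file)))
            {
                msg.Attachments.Add(new Attachment(ms, Path.GetFileName(file)));
                smtp.Send(msg);
            } // disposing the message also disposes its attachments and the stream
        }

    The trade-off is that large attachments are buffered fully in memory, so the using()/Dispose() fix above remains the lighter option.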

    Read the article

  • WebLogic Server Performance and Tuning: Part I - Tuning JVM

    - by Gokhan Gungor
    Each WebLogic Server instance runs in its own dedicated Java Virtual Machine (JVM), which is its runtime environment. Every Admin Server in any domain executes within a JVM, and the same applies to Managed Servers. WebLogic Server can be used for a wide variety of applications and services, which all use the same runtime environment and resources. Oracle WebLogic ships with two different JVMs, HotSpot and JRockit, and you can choose which JVM you want to use.

    The JVM is designed to optimize itself; however, it also provides some startup options for making small changes. There are default values for its memory and garbage collection. In the real world you will not want to stick with the default values provided by the JVM; rather, you will want to customize these values based on your applications, which can produce large gains in performance from small changes to the JVM parameters. We can tell the garbage collector how to delete garbage and we can also tell the JVM how much space to allocate for each generation (of java Objects) or for the heap. Remember that during garbage collection no other work is executed within the JVM or runtime – this is called STOP THE WORLD – which can affect the overall throughput.

    Each JVM has its own memory segment called Heap Memory, which is the storage for java Objects. These objects can be grouped based on their age, like young generation (recently created objects) or old generation (surviving objects that have lived to some extent), etc. A java object is considered garbage when it can no longer be reached from anywhere in the running program. Each generation has its own memory segment within the heap. When this segment gets full, the garbage collector deletes all the objects that are marked as garbage to create space. When the old generation space gets full, the JVM performs a major collection to remove the unused objects and reclaim their space. A major garbage collection takes a significant amount of time and can affect system performance.

    When we create a managed server, either on the same machine or on a remote machine, it gets its initial startup parameters from the $DOMAIN_HOME/bin/setDomainEnv.sh/cmd file. By default two parameters are set:

    - Xms: the initial heap size
    - Xmx: the max heap size

    Try to set the initial and max heap sizes to the same value. The startup time can be a little longer, but for long running applications it will provide better performance. When we set -Xms512m -Xmx1024m, the physical heap size will be 512m. This means that there are pages of memory (beyond the 512m) that the JVM does not explicitly control; they will be controlled by the OS, which could reserve them for other tasks. In this case, it is an advantage if the JVM claims the entire memory at once so it does not have to spend time extending the heap when more memory is needed. You can also use the -XX:MaxPermSize (maximum size of the permanent generation) option for the Sun JVM. You should adjust this size accordingly if your application dynamically loads and unloads a lot of classes, in order to optimize performance.
    You can set the JVM options/heap size from the following places:

    - Through the Admin console, in the Server start tab
    - In the startManagedWebLogic script for the managed servers: $DOMAIN_HOME/bin/startManagedWebLogic.sh/cmd – JAVA_OPTIONS="-Xms1024m -Xmx1024m" ${JAVA_OPTIONS}
    - In the setDomainEnv script for the managed servers and admin server (domain wide): USER_MEM_ARGS="-Xms1024m -Xmx1024m"

    When there is free memory available in the heap but it is too fragmented and not contiguously located to store the object, or when there is actually insufficient memory, we can get java.lang.OutOfMemoryError. We should create a thread dump and analyze it, if that is possible, in case of such an error.

    The second option we can use to produce higher throughput is tuning garbage collection. We can roughly divide GC algorithms into 2 categories: parallel and concurrent. Parallel GC stops the execution of the application and performs the full GC; this generally provides better throughput but also higher latency, using all the CPU resources during GC. Concurrent GC, on the other hand, produces low latency but also lower throughput, since it performs GC while the application executes. The JRockit JVM provides some useful command-line parameters to control its GC scheme, like the -XgcPrio command-line parameter, which takes the following options:

    - XgcPrio:pausetime (to minimize latency; concurrent GC)
    - XgcPrio:throughput (to maximize throughput; parallel GC)
    - XgcPrio:deterministic (to guarantee a maximum pause time, for real time systems)

    The Sun JVM has similar parameters (like -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC) to control its GC scheme. We can add -verbose:gc -XX:+PrintGCDetails to monitor indications of a problem with garbage collection. Try configuring the JVMs of all managed servers to execute in -server mode to ensure that they are optimized for a server-side production environment. A combined sketch of these settings follows.
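    Pulling the options above together, here is a minimal sketch of what the domain-wide settings in setDomainEnv.sh might look like for a HotSpot JVM. The specific sizes and collector choice are illustrative assumptions, not recommendations from this post:

        # Equal initial/max heap plus a PermGen cap, applied domain wide
        USER_MEM_ARGS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m"
        export USER_MEM_ARGS

        # Pick a collector and log GC details to watch for collection problems
        JAVA_OPTIONS="${JAVA_OPTIONS} -server -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails"
        export JAVA_OPTIONS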

    Read the article

  • jQuery Datatable in MVC … extended.

    - by Steve Clements
    There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now; web forms are but a memory!!

    Grids / tables are my focus at the moment. I don't want to get into writing reams of CSS and HTML, but it's not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed header and perhaps filtering are expected behaviour. What isn't always required, though, is the massive functionality like editing etc. you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don't need it. That is where the jQuery DataTable plugin comes in. It doesn't have editing "out of the box" (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc. Take a look here… http://www.datatables.net

    I did in the first instance start looking at the Telerik MVC grid control – I'm a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free…nice! Their grid however is far more than I require. Note: using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won't cover that here though, mainly because I don't have a clear answer on the best way to solve it!

    One nice thing about the dataTable above is how easy it is to extend (http://www.datatables.net/examples/plug-ins/plugin_api.html) and there are some nifty examples on the site already… I however have a requirement that wasn't on the site… I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space, i.e. everything above the grid doesn't scroll as well. Now a CSS master may have a great solution to this… I'm not that master and so didn't! The content above the grid can vary, so any kind of fixed positioning is out. So I wrote a little extension for the DataTable, hooked that up to the document.ready event and window.resize event.

    Initialising my dataTable(s)…

        $(document).ready(function () {
            var dTable = $(".tdata").dataTable({
                "bPaginate": false,
                "bLengthChange": false,
                "bFilter": true,
                "bSort": true,
                "bInfo": false,
                "bAutoWidth": true,
                "sScrollY": "400px"
            });
        });

    My extension to the API to give me the resizing…

        // **********************************************************************
        // jQuery dataTable API extension to resize grid and adjust column sizes
        $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
            var id = oSettings.nTable.id;
            var dt = $("#" + id);
            var top = dt.position().top;
            var winHeight = $(document).height();
            var remain = (winHeight - top) - 83;
            dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
            this.fnAdjustColumnSizing();
        };

    This is very much in debug mode, so pretty verbose at the moment – I'll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed, and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div – I got that from some Firebug action and stick in my new height. The -83 is to give me the space at the bottom I require for a fixed footer!
    Finally I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got that in the open event of the tabs.

        $(document).ready(function () {
            var oTable;
            $("#tabs").tabs({
                "show": function (event, ui) {
                    oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
                    if (oTable.length > 0) {
                        oTable.fnSetHeightToBottom();
                    }
                }
            });
            $(window).bind("resize", function () {
                oTable.fnSetHeightToBottom();
            });
        });

    And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful. I've also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website. I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with column index as a param – I used data tags to simplify that one (see the sketch below).
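    For anyone after the column-specific filtering mentioned above, here is a minimal sketch of the data-tag approach using the 1.x API's fnFilter(search, columnIndex). The input id and data attribute are hypothetical placeholders, not from the post:

        // <input id="nameFilter" data-column="2" /> sits above the grid
        $("#nameFilter").bind("keyup", function () {
            var col = $(this).data("column");   // which column this input filters
            oTable.fnFilter(this.value, col);   // filter just that column
        });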

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting? An email exchange about this topic and JMS came up this week, and I've heard it come up once or twice before too. Sometimes it's interesting or helpful to know the list of JMS clients (IP addresses, JMS destinations, message counts) that are connected to a particular JMS server. This can be helpful for troubleshooting. Tom Barnes from the WebLogic Server JMS team provided some helpful advice:

    The JMS connection runtime mbean has "getHostAddress", which returns the host address of the connecting client JVM as a string. A connection runtime can contain session runtimes, which in turn can contain consumer runtimes. The consumer runtime, in turn, has a "getDestinationName" and "getMemberDestinationName". I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session's parent connection's host addresses. Note that the client runtime mbeans (connection, session, and consumer) won't necessarily be hosted on the same JVM as a destination that's in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster).

    Writing the Script

    So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this. It's always helpful to have the WebLogic Server MBean Reference handy for activities like this. This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers (a sketch of that is at the end of this post). I haven't tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea.

        # Better to use the Secure Config File approach for login, as shown here:
        # http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html
        connect('weblogic', 'welcome1', 't3://localhost:7001')

        # Navigate to the Server Runtime and get the Server Name
        serverRuntime()
        serverName = cmo.getName()

        # Multiple JMS Servers could be hosted by a single WLS server
        cd('JMSRuntime/' + serverName + '.jms')
        jmsServers = cmo.getJMSServers()

        # Build the list of all JMSServers for this server
        namesOfJMSServers = ''
        for jmsServer in jmsServers:
            namesOfJMSServers = namesOfJMSServers + jmsServer.getName() + ' '

        # Count the number of connections
        jmsConnections = cmo.getConnections()
        print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers

        # Recurse the MBean tree for each connection and pull out some information about consumers
        for jmsConnection in jmsConnections:
            try:
                print 'JMS Connection:'
                print '  Host Address = ' + jmsConnection.getHostAddress()
                print '  ClientID = ' + str(jmsConnection.getClientID())
                print '  Sessions Current = ' + str(jmsConnection.getSessionsCurrentCount())
                jmsSessions = jmsConnection.getSessions()
                for jmsSession in jmsSessions:
                    jmsConsumers = jmsSession.getConsumers()
                    for jmsConsumer in jmsConsumers:
                        print '  Consumer:'
                        print '    Name = ' + jmsConsumer.getName()
                        print '    Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount())
                        print '    Member Destination Name = ' + jmsConsumer.getMemberDestinationName()
            except:
                print 'Error retrieving JMS Consumer Information'
                dumpStack()

        # Cleanup
        disconnect()
        exit()

    Example Output

    I expect the output to look something like this, looping through all the connections (this is just the first one):

        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:
          Host Address = 127.0.0.1
          ClientID = None
          Sessions Current = 16
          Consumer:
            Name = consumer40
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue

    Notice that it has the IP address of the client. There are 16 Sessions open because I'm using an MDB, which defaults to 16 connections, so this matches what I expect. Let's see what the full output actually looks like:

        D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py

        Initializing WebLogic Scripting Tool (WLST) ...

        Welcome to WebLogic Server Administration Scripting Shell

        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.

        Warning: An insecure protocol was used to connect to the server.
        To ensure on-the-wire security, the SSL port or Admin port should be used instead.

        Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
        For more help, use help(serverRuntime)

        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:
          Host Address = 127.0.0.1
          ClientID = None
          Sessions Current = 16
          Consumer:
            Name = consumer40
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer34
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer37
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer16
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer46
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer49
            Messages Received = 2
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer43
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer55
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer25
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer22
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer19
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer52
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer31
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer58
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer28
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
          Consumer:
            Name = consumer61
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue

        Disconnected from weblogic server: AdminServer

        Exiting WebLogic Scripting Tool.

    Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems
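    As mentioned above, the script could be extended to report producers as well. Here is a hedged sketch of what that inner loop might look like; the attribute names follow the documented JMSSessionRuntimeMBean/JMSProducerRuntimeMBean interfaces, but I have not run this variant, so verify against the MBean Reference for your WLS version:

        # Inside the per-connection loop, alongside the consumer walk
        for jmsSession in jmsConnection.getSessions():
            for jmsProducer in jmsSession.getProducers():
                print '  Producer:'
                print '    Name = ' + jmsProducer.getName()
                print '    Messages Sent = ' + str(jmsProducer.getMessagesSentCount())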

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive; in other words, "Do this", not "Don't do that". Sometimes it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start.

    Start with the Data

    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or international?), the security requirements, whether it needs ACID compliance, how it will be searched, indexed and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements.

    Next Is Data Management

    With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or whether we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.

    Move to Data Transport

    How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.

    Finally, Data Processing

    Most RDBMS engines, NoSQL, and certainly Big Data engines not only store data, but can process and manipulate it as well. It's doubtful that you'll calculate that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.

    Sure, We Need A Front-End At Some Point

    Not all data is entered by human hands – in fact most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out. Instead, focus on the data, its transport and processing.
    Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best.

    References:
    - Microsoft Architecture Journal: http://msdn.microsoft.com/en-us/architecture/bb410935.aspx
    - Patterns and Practices: http://msdn.microsoft.com/en-us/library/ff921345.aspx
    - Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/
    - Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article
