Search Results

Search found 1347 results on 54 pages for 'amnesia 55'.


  • Setting default values for object properties in AS3

    - by Paul T
    I'm an ActionScript newbie, so please bear with me. Below is a function, and I am curious how to set default property values for objects that are being created in a loop. In the example below, these properties are the same for each object created in the loop: titleTextField.selectable, titleTextField.wordWrap, titleTextField.x. If you pull these properties out of the loop, they are null because the TextField objects have not been created yet, but it seems silly to have to set them each time. What is the correct way to do this? Thanks!

        var titleTextFormat:TextFormat = new TextFormat();
        titleTextFormat.size = 10;
        titleTextFormat.font = "Arial";
        titleTextFormat.color = 0xfff200;
        for (var i = 0; i < arrThumbPicList.length; i++) {
            var yPos = 55 * i;
            var titleTextField:TextField = new TextField();
            titleTextField.selectable = false;
            titleTextField.wordWrap = true;
            titleTextField.text = arrThumbTitles[i];
            titleTextField.x = 106;
            titleTextField.y = 331 + yPos;
            container.addChild(titleTextField);
            titleTextField.setTextFormat(titleTextFormat);
        }
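
    One way to avoid repeating the shared values is to move them into a small factory function and keep only the per-item settings in the loop. A minimal sketch, assuming the helper name createTitleField (it is not part of the original code):

        // Hypothetical helper: builds a TextField with the shared defaults already applied.
        function createTitleField():TextField {
            var tf:TextField = new TextField();
            tf.selectable = false; // shared default
            tf.wordWrap = true;    // shared default
            tf.x = 106;            // shared default
            return tf;
        }

        for (var i:int = 0; i < arrThumbPicList.length; i++) {
            var titleTextField:TextField = createTitleField();
            titleTextField.text = arrThumbTitles[i];
            titleTextField.y = 331 + 55 * i;
            container.addChild(titleTextField);
            titleTextField.setTextFormat(titleTextFormat);
        }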

    Read the article

  • Static IP on FEDORA12 from Virtualbox

    - by Krazy_Kaos
    I'm trying to give my FEDORA12 guest a static IP — inside VirtualBox — inside Ubuntu. Let me rephrase that: I have an Ubuntu 9.04 system with VirtualBox and a FEDORA12 VM on it, and I would like to give the Fedora guest a static IP (amahi needs it), but I'm getting stuck. I'm using NAT (if that's any help). I tried a few tutorials, but no go. I'm kind of new to the *nix world, but I'm old school on M$.

    Edit: Screenshots of UBUNTU 9.04 (the host that has the VM) and FEDORA (sorry, can't post pics... not enough rep).

    INFO: GUEST WITH STATIC IP — ifconfig:

        eth0  Link encap:Ethernet  HWaddr 08:00:27:35:CC:DE
              inet addr:192.168.1.55  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe35:ccde/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:7 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2764 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:574 (574.0 b)  TX bytes:127121 (124.1 KiB)
              Interrupt:11 Base address:0xc020

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1856 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1856 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:181587 (177.3 KiB)  TX bytes:181587 (177.3 KiB)

    netstat -nr:

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
        192.168.2.1     0.0.0.0         255.255.255.255  UH       0 0          0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0    U        0 0          0 eth0
        0.0.0.0         192.168.2.1     0.0.0.0          UG       0 0

    GUEST WITH DHCP — ifconfig:

        eth0  Link encap:Ethernet  HWaddr 08:00:27:35:CC:DE
              inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe35:ccde/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:105 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2966 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:49787 (48.6 KiB)  TX bytes:149969 (146.4 KiB)
              Interrupt:11 Base address:0xc020

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1903 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1903 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:185931 (181.5 KiB)  TX bytes:185931 (181.5 KiB)

    netstat -nr:

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
        10.0.2.0        0.0.0.0         255.255.255.0    U        0 0          0 eth0
        0.0.0.0         10.0.2.2        0.0.0.0          UG       0 0          0

    PS: I'm still trying to work out the sudoers file to be able to exec the iptables command.
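
    In NAT mode the guest sits on VirtualBox's private 10.0.2.0/24 network (gateway 10.0.2.2, DNS 10.0.2.3 by default), so a static 192.168.1.x address will not route. A hedged sketch of pinning the NAT address statically in /etc/sysconfig/network-scripts/ifcfg-eth0 on the Fedora guest — the values simply mirror the DHCP lease shown above and are assumptions, not a recommendation:

        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=10.0.2.15
        NETMASK=255.255.255.0
        GATEWAY=10.0.2.2
        DNS1=10.0.2.3

    Switching the VM's adapter to bridged networking is the other common route if the guest needs a 192.168.1.x address visible to the rest of the LAN.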

    Read the article

  • Replace dual-XP installs with single-XP install and repartition drive?

    - by caeious
    Hello,

    The Current Situation: I have a hard drive that is currently split up like so:

        Primary Partition  C:  9.77 GB NTFS  Healthy (System)  with XP Pro (in Polish) installed
        Extended Partition D: 39.82 GB NTFS  Healthy (Boot)    with XP Pro (in English) installed
                                6.30 GB Free space

    When I start my computer I get a black and white Windows Boot Manager dual-boot screen with 2 choices, both being Microsoft Windows XP. The first choice is the English version of XP and the second choice is the Polish version of XP. Images of my Computer Management window and dual-boot screen.

    The Mission: What I need to do is get rid of the entire extended partition (D: 39.82 GB & 6.30 GB free space) and just have the one primary C: drive, which I assume will be somewhere around 55 GB big. So in the end I just want XP Pro in English running on my C: drive and no black and white boot screen showing up when starting my laptop.

    The Question: How do I go about successfully completing The Mission without making my computer a useless pile of silicon, plastic and metal?

    UPDATE: So I went ahead and tried to follow Neal's suggestion but hit a wall. I got to a Windows XP Pro install screen that had the 3 following options as well as my drive data:

        To set up Windows XP on the selected item, press Enter
        To create a partition in the unpartitioned space, press C
        To delete the selected partition, press D

        57232 MB Disk 0 at Id 0 on bus 0 on atapi [MBR]
        C: Partition1 [NTFS]    10001 MB (  4642 MB free )
           Unpartitioned space   6448 MB
        D: Partition2 [NTFS]    40774 MB ( 26225 MB free )
           Unpartitioned space      8 MB

    I figured I would go with the first choice (To set up Windows XP on the selected item, press Enter) because I just wanted to set up Windows XP on C: Partition1 (which was preselected), so I pressed Enter, which brought me to a screen displaying this message:

        You chose to install Windows XP on a partition that contains another operating system. Installing Windows XP on this partition might cause the other operating system to function improperly.
        CAUTION: Installing multiple operating systems on a single partition is not recommended.

    So this leads me to 2 new questions:

    1. How do I get rid of the Windows XP (Polish language) install on C: Partition1 so that I can cleanly and safely install Windows XP (English language) on it? Neal, is this what you meant by me possibly having to delete the partition that the Windows XP (Polish language) install was located on?

    2. Since I have the option to delete partitions with the 3rd choice (To delete the selected partition, press D), should I do that on this screen or wait until I have Windows XP (English language) safely installed on C: Partition1?

    I have to ask these questions because I have read that it is possibly dangerous to delete hard drive partitions. Just being cautious.

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a few reading about jvm memory (here, here, here, others I forgot...), I am expecting the resident set size of my java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying, it seems to be roughly equal to the max heap space capacity: Resident set size: # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc 11507912 # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};' Number of processes = 1 Memory usage per process = 11237.8 MB Total memory usage = 11237.8 MB Java heap # jmap -heap 1 Attaching to process ID 1, please wait... Debugger attached successfully. Server compiler detected. JVM version is 24.55-b03 using thread-local object allocation. Garbage-First (G1) GC with 18 thread(s) Heap Configuration: MinHeapFreeRatio = 10 MaxHeapFreeRatio = 20 MaxHeapSize = 10737418240 (10240.0MB) NewSize = 1363144 (1.2999954223632812MB) MaxNewSize = 17592186044415 MB OldSize = 5452592 (5.1999969482421875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 20971520 (20.0MB) MaxPermSize = 85983232 (82.0MB) G1HeapRegionSize = 2097152 (2.0MB) Heap Usage: G1 Heap: regions = 2560 capacity = 5368709120 (5120.0MB) used = 1672045416 (1594.586769104004MB) free = 3696663704 (3525.413230895996MB) 31.144272834062576% used G1 Young Generation: Eden Space: regions = 627 capacity = 3279945728 (3128.0MB) used = 1314914304 (1254.0MB) free = 1965031424 (1874.0MB) 40.089514066496164% used Survivor Space: regions = 49 capacity = 102760448 (98.0MB) used = 102760448 (98.0MB) free = 0 (0.0MB) 100.0% used G1 Old Generation: regions = 147 capacity = 1986002944 (1894.0MB) used = 252273512 (240.5867691040039MB) free = 1733729432 (1653.413230895996MB) 12.702574926293766% used Perm Generation: capacity = 39845888 (38.0MB) used = 38884120 (37.082786560058594MB) free = 961768 (0.9172134399414062MB) 97.58628042120682% used 14654 interned Strings occupying 2188928 bytes. Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow Full GC), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? Linux 3.13.0-32-generic x86_64 java version "1.7.0_55" Running in Docker version 1.1.2 Java is running elasticsearch 1.2.0: /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch There actually are 5 elasticsearch nodes, each in a different docker container. All have about the same memory usage. 
Some stats about the index: size: 9.71Gi (19.4Gi) docs: 3,925,398 (4,052,694)
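
    With G1 on this JVM the heap is committed up toward -Xmx fairly eagerly and rarely handed back to the OS, so RSS tracking the max heap rather than the current usage is not unusual. A hedged way to confirm is to watch committed vs used capacity over time and compare it with RSS (pid 1, as in the jmap call above):

        # Heap capacities and usage (KB), every 5 seconds, 20 samples
        jstat -gc 1 5s 20

        # Committed vs maximum generation capacities
        jstat -gccapacity 1

    If the committed figures sit near 10 GB while usage stays around 1.5 GB, lowering -Xmx (or -Xms) is the main lever; MaxHeapFreeRatio only shrinks the committed heap after full GCs, which G1 tries to avoid.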

    Read the article

  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2, but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything was working as expected except for WDS. Our network is split up into several different vlans. Now, depending on which vlan the client computer is on, it's behaving differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x vlan, which, due to the nature of it, has no DHCP server active in it. The first computer we plugged in happened to be in the 10.99.x.x vlan, which is supposed to be reserved for network management devices (i.e. switches), but we've been using it occasionally otherwise. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x vlan (for general purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which vlan a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side. Everything is configured exactly the same way it was on my WDS and DHCP servers as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the vlan, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with switches, etc? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP Procurve 5406. We also have 3x HP Procurve 4208 switches. The ESX Server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adaptor, but we've also tried the E1000 adaptor.
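
    Given that the behaviour changed per VLAN after the outage, the usual suspect is the DHCP/PXE relay configuration on the routing switch. A hedged sketch of what that looks like on a ProCurve 5406 — vlan 19 stands for a client VLAN, and 10.55.0.10 / 10.55.0.20 are placeholders for the DHCP and WDS server addresses, not values from this network:

        vlan 19
           ip helper-address 10.55.0.10
           ip helper-address 10.55.0.20
           exit

    Checking show running-config on the 5406 would confirm whether those helper lines survived the power loss; a PXE-E32 (TFTP) failure after DHCP succeeds on the 10.19 VLAN points the same way, since the client got an address but cannot reach the boot server.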

    Read the article

  • Windows Server 2008: specifying the default IP address when NIC has multiple addresses

    - by Cédric Boivin
    I have a Windows Server which has ~10 IP addresses statically bound. The problem is I don't know how to specify the default IP address. Sometimes when I assign a new address to the NIC, the default IP address changes with the last IP entered in the advanced IP configuration on the NIC. This has the effect (since I use NAT) that the outgoing public IP changes too. Even though this problem is currently on Windows Server 2008 How can you set the default IP address on a NIC when it has multiple IP addresses bound? There is more explication on my probleme. Here is the ipconfig DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.99.49(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.51(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.52(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.53(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.54(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.55(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.56(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.58(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.59(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.60(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.61(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.62(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.64(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.65(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.66(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.67(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.68(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.70(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.71(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.100(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.108(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.109(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.112(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.63(Duplicate) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . 
: 192.168.99.1 If i do a pathping there is the answer, the first up is the 99.49, also if my default ip is 99.100 Tracing route to www.l.google.com [72.14.204.99] over a maximum of 30 hops: 0 Machine [192.168.99.49] There is the routing table on the machine Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.99.1 192.168.99.49 261 10.10.10.0 255.255.255.0 On-link 10.10.10.10 261 10.10.10.10 255.255.255.255 On-link 10.10.10.10 261 10.10.10.255 255.255.255.255 On-link 10.10.10.10 261 192.168.99.0 255.255.255.0 On-link 192.168.99.49 261 192.168.99.49 255.255.255.255 On-link 192.168.99.49 261 192.168.99.51 255.255.255.255 On-link 192.168.99.49 261 192.168.99.52 255.255.255.255 On-link 192.168.99.49 261 192.168.99.53 255.255.255.255 On-link 192.168.99.49 261 192.168.99.54 255.255.255.255 On-link 192.168.99.49 261 192.168.99.55 255.255.255.255 On-link 192.168.99.49 261 192.168.99.56 255.255.255.255 On-link 192.168.99.49 261 192.168.99.57 255.255.255.255 On-link 192.168.99.49 261 192.168.99.58 255.255.255.255 On-link 192.168.99.49 261 192.168.99.59 255.255.255.255 On-link 192.168.99.49 261 192.168.99.60 255.255.255.255 On-link 192.168.99.49 261 192.168.99.61 255.255.255.255 On-link 192.168.99.49 261 192.168.99.62 255.255.255.255 On-link 192.168.99.49 261 192.168.99.64 255.255.255.255 On-link 192.168.99.49 261 192.168.99.65 255.255.255.255 On-link 192.168.99.49 261 192.168.99.66 255.255.255.255 On-link 192.168.99.49 261 192.168.99.67 255.255.255.255 On-link 192.168.99.49 261 192.168.99.68 255.255.255.255 On-link 192.168.99.49 261 192.168.99.70 255.255.255.255 On-link 192.168.99.49 261 192.168.99.71 255.255.255.255 On-link 192.168.99.49 261 192.168.99.100 255.255.255.255 On-link 192.168.99.49 261 192.168.99.108 255.255.255.255 On-link 192.168.99.49 261 192.168.99.109 255.255.255.255 On-link 192.168.99.49 261 192.168.99.112 255.255.255.255 On-link 192.168.99.49 261 192.168.99.255 255.255.255.255 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 10.10.10.10 261 255.255.255.255 255.255.255.255 On-link 192.168.99.49 261 255.255.255.255 255.255.255.255 On-link 10.10.10.10 261 How i can be sure the ip use in the image ( suppose to be the default ip address ) will be use by my server as the default address ?
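
    On Windows the "default" source address is simply whichever bound address the stack picks, so the practical fix is usually to re-add the extra addresses flagged so they are never chosen as a source. A hedged sketch — skipassource is available natively on Server 2008 R2/Windows 7 and only via a Microsoft hotfix on Server 2008, and the interface name and address below are placeholders:

        rem Remove and re-add a secondary address so it is never chosen as the source
        netsh interface ipv4 delete address "Local Area Connection" 192.168.99.51
        netsh interface ipv4 add address "Local Area Connection" 192.168.99.51 255.255.255.0 skipassource=true

        rem Verify which addresses carry the skip-as-source flag
        netsh interface ipv4 show ipaddresses level=verbose

    With every address except 192.168.99.100 marked skip-as-source, outgoing connections (and therefore the NAT'd public IP) should use .100.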

    Read the article

  • Disk performance below expectations

    - by paulH
    this is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks. We now have two identical servers built, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'.

    Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup, 6Gbps disks):  D: Read 63 MB/s,  D: Write 170 MB/s,  E: Read 68 MB/s,   E: Write 320 MB/s
        Server B (original setup, 3Gbps disks):  D: Read 52 MB/s,  D: Write 88 MB/s,   E: Read 112 MB/s,  E: Write 130 MB/s
        Server A (new setup, 3Gbps disks):       D: Read 55 MB/s,  D: Write 85 MB/s,   E: Read 67 MB/s,   E: Write 180 MB/s
        Server B (new setup, 6Gbps disks):       D: Read 61 MB/s,  D: Write 95 MB/s,   E: Read 69 MB/s,   E: Write 180 MB/s

    Can anybody suggest any ideas what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
        Hitachi Hus153030vls300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS
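
    One thing that can swamp per-disk differences on a PERC H700 is the virtual disk cache policy (write-back vs write-through, read-ahead, physical disk cache). A hedged way to compare the two servers, assuming Dell OpenManage Server Administrator is installed — the controller and vdisk numbers are placeholders:

        # Show read/write/disk-cache policy for every virtual disk on controller 0
        omreport storage vdisk controller=0

        # Example of switching a virtual disk to write-back (only if the battery is healthy)
        omconfig storage vdisk action=changepolicy controller=0 vdisk=1 writepolicy=wb

    If the policies already match, the next usual suspects are stripe size and the BBU state, since a failed or learning battery silently forces write-through.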

    Read the article

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes. This is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes it goes through the motions, but never actually transfers anything. It displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison when I sync other iPhones, using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers, it seems to be my iPhone that is the common factor. I have tried restoring the phone from a backup, which makes no difference. This started happening months ago and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists. Originally it was just a minor irritation, but I made and attempt to fix it which has made things worse. I deleted all the apps from with iTunes and then did "Transfer purchases" in the hope that it might fix something. It didn't fix anything. Also, I cannot now sync at all. If I do sync iTunes now does "transferring purchases", fails to transfer and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else. I can't sync anything else, because I can't temporarily turn off app syncing because then iTunes warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising. What can I do to get app syncing working again ? P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However I don't really want to embark on doing that for 55+ apps and re-entering login details etc for the apps that need them, especially as I might then find out it didn't solve the problem. Update: The latest update to iTunes 9 has improved things in one key aspect. If I let a sync run to completion iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps. Resolved: See my answer to the question for how I finally resolved the problem.

    Read the article

  • custom route not working on windows

    - by Michael Closson
    My Windows laptop is directly connected to 192.168.1.0/24 (wireless LAN). I access 10.21.0.0/16 through a router that is connected to both networks. The routing works fine with this configuration. I have a VPN that connects to 10.0.0.0/8. The VPN network doesn't actually use any IPs in the 10.21.0.0/16 range, so I should be able to configure my routing table to route all the 10.21.0.0/16 IPs through the wireless LAN and all other 10.0.0.0/8 traffic through the VPN. My understanding is that I can do this if the metric for the 10.21.0.0 route is lower than that of the 10.0.0.0 route. The VPN (10.0.0.0) is automatically assigned metric 20. I have manually assigned the WLAN a metric of 1. I manually add an entry to the routing table with this command:

        route add 10.21.0.0 mask 255.255.0.0 192.168.1.201 metric 1

    The route is then assigned a metric of 2 (which is expected). The problem is that it doesn't work. I can't ping any machine on the 10.21.0.0 network, but I can access other stuff on 10.0.0.0, and I can also access stuff on 192.168.1.0. To debug this I've done the following:

        - Run tcpdump on the router (192.168.1.201). I can verify that no packets for 10.21.0.0 arrive on that interface.
        - Disable iptables on the router.
        - Disable the Windows firewall.
        - Run wireshark on my laptop, to try and see which interface the ping requests go to. But I can't see them go anywhere!!

    The ping command doesn't receive any 'destination unreachable' messages. Here is the relevant section of the routing table:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination      Netmask          Gateway        Interface     Metric
        0.0.0.0                  0.0.0.0          192.168.1.201  192.168.1.18       2
        10.0.0.0                 255.0.0.0        On-link        10.55.44.203      20
        10.21.0.0                255.255.0.0      192.168.1.201  192.168.1.18       2
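
    A hedged next step is to pin the route to the wireless adapter explicitly and make it persistent; the interface index 11 below is a placeholder read from the "Interface List" block of route print:

        rem List interface indexes (the Interface List block at the top)
        route print -4

        rem Add the route bound to the WLAN adapter's index (11 is a placeholder) and keep it across reboots
        route add -p 10.21.0.0 mask 255.255.0.0 192.168.1.201 metric 1 if 11

    If the packets still vanish, it is worth checking whether the VPN client enforces its own routing (no split tunneling), which would override any static route regardless of metric.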

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in Full Screen mode, any applications on my second monitor (such as youtube videos, videos, not app specific) drop their frame-rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280FPS when the framerate is uncapped. If i cap it at 60FPS using the in-game option, it has no affect on the performance of the background app. Summary Operating System Windows 8 Pro 64-bit CPU Intel Core i7 3820 @ 3.60GHz 42 °C Sandy Bridge-E 32nm Technology RAM 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20) Motherboard Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0) 37 °C Graphics DELL U2713HM (2560x1440@59Hz) DELL U2713HM (2560x1440@59Hz) 1280MB NVIDIA GeForce GTX 570 (Gigabyte) 58 °C Hard Drives 212GB Volume0 (RAID) 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 36 °C 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 34 °C Optical Drives No optical disk drives detected Audio ASUS Xonar Essence STX Audio Device Operating System Windows 8 Pro 64-bit Computer type: Desktop Graphics Monitor 1 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Secondary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY4\Monitor0 Monitor 2 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Primary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY5\Monitor0 NVIDIA GeForce GTX 570 Manufacturer NVIDIA Model GeForce GTX 570 GPU GF110 Device ID 10DE-1086 Revision A2 Subvendor Gigabyte (1458) Series GeForce GTX 500 Current Performance Level Level 3 Current GPU Clock 845 MHz Current Memory Clock 1900 MHz Current Shader Clock 1690 MHz Voltage 0.988 V Technology 40 nm Die Size 520 mm² Release Date Dec 07, 2010 DirectX Support 11.0 OpenGL Support 5.0 Bus Interface PCI Express x16 Temperature 57 °C Driver version 9.18.13.2018 BIOS Version 70.10.55.00.01 ROPs 40 Shaders 512 unified Memory Type GDDR5 Memory 1280 MB Bus Width 64x5 (320 bit) Filtering Modes 16x Anisotropic Noise Level Moderate Max Power Draw 219 Watts Count of performance levels : 3 Level 1 - "Default" GPU Clock 50 MHz Memory Clock 135 MHz Shader Clock 101 MHz Level 2 - "2D Desktop" GPU Clock 405 MHz Memory Clock 324 MHz Shader Clock 810 MHz Level 3 - "3D Applications" GPU Clock 845 MHz Memory Clock 1900 MHz Shader Clock 1690 MHz Things I've tried: 1) Updating the graphics driver 2) Setting windows power mode to High Performance 3) Reset Nvidia Global Performance settings to default

    Read the article

  • How to set 2 conditions / criterias for VLOOKUP / LOOKUP / etc in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2), but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armors, 1 for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armors available in the game, 1 item per row. Example:

        SHOP         ITEM TYPE       ITEM NAME        ATK  DEF  PRICE   EQUIPPED?
        Marketplace  Weapon Light    Konrad Knife      16    5   5.500
        Marketplace  Weapon Light    Ice Queen         19    6   8.200
        Marketplace  Armor Body Up   Layered Polym      0   31   8.600
        Marketplace  Armor Body Up   Full Shield        7   42  17.650
        Marketplace  Weapon Heavy    Konrad Bullpup    53   25  24.500
        Marketplace  Weapon Heavy    Full Moon Blow    73   12  24.500   x
        Marketplace  Armor Body Low  Knee Pads         17   26  14.200   x
        Marketplace  Armor Body Low  Army Boots        15   55  24.500
        Bone Yard    Weapon Light    Bone Launcher     41    2   9.400   x
        Neon Strip   Vehicle Ground  Supercharged      41   34  24.500
        Dead End     Weapon Heavy    Sharp Sickle      21    5  24.500
        Dead End     Armor Body Low  Unholy Boots       5   36  15.000
        Dead End     Armor Head      Hockey Mask        5   18  15.900   x

    The last column is an indication of the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns info related to the item of that kind that I am using. That would be:

        ITEM TYPE       SHOP NAME    ITEM NAME        ATK  DEF  PRICE
        Weapon Light    Bone Yard    Bone Launcher     41    2   9.400
        Weapon Heavy    Marketplace  Full Moon Blow    73   12  24.500
        Weapon Special  --           --                --   --  --
        Armor Body Up   --           --                --   --  --
        Armor Body Low  Marketplace  Knee Pads         17   26  14.200
        Armor Head      Dead End     Hockey Mask        5   18  15.900
        Vehicle Ground  --           --                --   --  --
        Vehicle Water   --           --                --   --  --
        Vehicle Air     --           --                --   --  --

    The item types are fixed, so they can be hard-coded, one row per item type. So, for the 1st result line, it would return data from the row where both the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the x marker. Question is: HOW?

    I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as it only uses regular formulas — no VBScript, Macros, VBA, etc.). Last but not least, suggestions/solutions for rearranging the data so it makes this problem easier to solve are also welcome. Thanks!
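
    One standard way to do a two-criteria lookup without helper columns is an array formula that multiplies the two conditions. A minimal sketch, assuming the list sits in columns A–G (Shop, Item Type, Item Name, ATK, DEF, Price, Equipped) starting at row 2, and the item type for the current summary row is in I2 — those ranges and cells are assumptions about the layout, so adjust them:

        {=INDEX($C$2:$C$200; MATCH(1; ($B$2:$B$200=$I2)*($G$2:$G$200="x"); 0))}

    Entered with Ctrl+Shift+Enter so Calc treats it as an array formula (Excel works the same way, with commas instead of semicolons). Swap $C$2:$C$200 for the Shop, ATK, DEF or Price column to fill the other cells, and wrap the whole thing in IF(ISNA(...);"--";...) for slots with nothing equipped.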

    Read the article

  • Why did I fail to build vsftpd?

    - by hugemeow
    make fails with the following message. The main point is "/lib/libcap.so.1: could not read symbols: File in wrong format", which is confusing...

        gcc -c readwrite.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c opts.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ssl.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sslslave.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ptracesandbox.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ftppolicy.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sysutil.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sysdeputil.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -o vsftpd main.o utility.o prelogin.o ftpcmdio.o postlogin.o privsock.o tunables.o ftpdataio.o secbuf.o ls.o postprivparent.o logging.o str.o netstr.o sysstr.o strlist.o banner.o filestr.o parseconf.o secutil.o ascii.o oneprocess.o twoprocess.o privops.o standalone.o hash.o tcpwrap.o ipaddrparse.o access.o features.o readwrite.o opts.o ssl.o sslslave.o ptracesandbox.o ftppolicy.o sysutil.o sysdeputil.o -Wl,-s `./vsf_findlibs.sh`
        /lib/libcap.so.1: could not read symbols: File in wrong format
        collect2: ld returned 1 exit status
        make: *** [vsftpd] Error 1

        [mirror@hugemeow vsftpd]$ ls /lib/libc
        libc-2.5.so  libcap.so.1.10  libcidn.so.1    libcom_err.so.2.1  libcrypto.so.0.9.8e  libcrypt.so.1
        libcap.so.1  libcidn-2.5.so  libcom_err.so.2 libcrypt-2.5.so    libcrypto.so.6       libc.so.6
        [mirror@hugemeow vsftpd]$ ls /lib/libcap.so.1 -lh
        lrwxrwxrwx 1 root root 14 Mar 20  2012 /lib/libcap.so.1 -> libcap.so.1.10
        [mirror@hugemeow vsftpd]$ ls /lib/libcap.so.1 -lhL
        -rwxr-xr-x 1 root root 12K Mar 15  2007 /lib/libcap.so.1

    This may have something to do with it being a 64-bit system, so I made a modification to vsf_findlibs.sh:

        48 # Look for libcap (capabilities)
        49 if locate_library /lib64/libcap.so.1; then
        50   echo "/lib/libcap.so.1";
        51 elif locate_library /lib64/libcap.so.2; then
        52   echo "/lib/libcap.so.2";
        53 else
        54   # locate_library /usr/lib/libcap.so && echo "-lcap";
        55   # locate_library /lib/libcap.so && echo "-lcap";
        56   locate_library /lib64/libcap.so.1 && echo "-lcap";
        57 fi

    but make failed with the same error. Why?
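
    The error means the linker is still being handed the 32-bit /lib/libcap.so.1 on a 64-bit toolchain, and in the edited stanza above the test looks at /lib64 while the echo still prints the /lib path. A hedged sketch of what the stanza would look like if it echoed the 64-bit paths as well (this is an assumption about the intent, not the upstream script):

        # Look for libcap (capabilities) -- 64-bit paths in both the test and the echo
        if locate_library /lib64/libcap.so.1; then
          echo "/lib64/libcap.so.1";
        elif locate_library /lib64/libcap.so.2; then
          echo "/lib64/libcap.so.2";
        else
          locate_library /usr/lib64/libcap.so && echo "-lcap";
        fi

    Installing the 64-bit libcap/libcap-devel packages is the usual way to make sure a 64-bit libcap.so.1 exists under /lib64 in the first place.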

    Read the article

  • Memcached Debuging/Server Logs Monitor the Memcached Servers?

    - by user1179459
    I have chat engine which is based on the Memcached variables, putting them into arrays and reading them in other end via jquery, which works fine 95% of the times, however when the server load is high memcached (presume its the memcached) the crash and browser gets stucks up. I dont think its jquery issue since this only happens when the server load is very high. I need a way to monitor the memcached servers or somehow write a log file into where the fails/errors comes in... Any idea on how i can do this ? or any idea why memcached servers fails ? I run the memcached as follows $GLOBALS['MemCached'] = FALSE; $GLOBALS['MemCached'] = new Memcache; $GLOBALS['MemCached']->pconnect('localhost', 11211); My memcached config is as follows #! /bin/sh # # chkconfig: - 55 45 # description: The memcached daemon is a network memory cache service. # processname: memcached # config: /etc/sysconfig/memcached # pidfile: /var/run/memcached/memcached.pid # Standard LSB functions #. /lib/lsb/init-functions # Source function library. . /etc/init.d/functions PORT=11211 USER=memcached MAXCONN=1024 CACHESIZE=128 OPTIONS="" if [ -f /etc/sysconfig/memcached ];then . /etc/sysconfig/memcached fi # Check that networking is up. . /etc/sysconfig/network if [ "$NETWORKING" = "no" ] then exit 0 fi RETVAL=0 prog="memcached" pidfile=${PIDFILE-/var/run/memcached/memcached.pid} lockfile=${LOCKFILE-/var/lock/subsys/memcached} start () { echo -n $"Starting $prog: " # Ensure that /var/run/memcached has proper permissions if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then chown $USER /var/run/memcached fi daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS RETVAL=$? echo [ $RETVAL -eq 0 ] && touch ${lockfile} } stop () { echo -n $"Stopping $prog: " killproc -p ${pidfile} /usr/bin/memcached RETVAL=$? echo if [ $RETVAL -eq 0 ] ; then rm -f ${lockfile} ${pidfile} fi } restart () { stop start } # See how we were called. case "$1" in start) start ;; stop) stop ;; status) status -p ${pidfile} memcached RETVAL=$? ;; restart|reload|force-reload) restart ;; condrestart|try-restart) [ -f ${lockfile} ] && restart || : ;; *) echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}" RETVAL=2 ;; esac exit $RETVAL
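
    For the monitoring side, memcached keeps its own counters that can be polled without touching the init script. A hedged sketch of two ways to watch them, using the same localhost:11211 endpoint as the pconnect call above:

        # Raw stats over the text protocol -- watch evictions, curr_connections and listen_disabled_num
        echo stats | nc localhost 11211

        # Same data via the helper script that ships with memcached, if it is installed
        memcached-tool localhost:11211 stats

    A climbing listen_disabled_num usually means the MAXCONN=1024 limit in the init script is being hit under load; adding -vv to OPTIONS makes memcached log every request, though with this init script that output has to be redirected somewhere to be useful.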

    Read the article

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I would like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a passthrough of the SMART command to a single drive in the array, I was not able to get the interesting things out of the drive's output. In this case the value of interest is especially the wear level indicator (Attr. ID 233), but this is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED  RAW_VALUE
          3 Spin_Up_Time            0x0000  100   000   000    Old_age  Offline  In_the_past  0
          4 Start_Stop_Count        0x0000  100   000   000    Old_age  Offline  In_the_past  0
          5 Reallocated_Sector_Ct   0x0002  100   100   000    Old_age  Always   -            0
          9 Power_On_Hours          0x0002  100   100   000    Old_age  Always   -            8561
         12 Power_Cycle_Count       0x0002  100   100   000    Old_age  Always   -            55
        192 Power-Off_Retract_Count 0x0002  100   100   000    Old_age  Always   -            29
        232 Unknown_Attribute       0x0003  100   100   010    Pre-fail Always   -            0
        233 Unknown_Attribute       0x0002  088   088   000    Old_age  Always   -            0
        225 Load_Cycle_Count        0x0000  198   198   000    Old_age  Offline  -            508509
        226 Load-in_Time            0x0002  255   000   000    Old_age  Always   In_the_past  0
        227 Torq-amp_Count          0x0002  000   000   000    Old_age  Always   FAILING_NOW  0
        228 Power-off_Retract_Count 0x0002  000   000   000    Old_age  Always   FAILING_NOW  0

    smartctl on a P410-connected SSD (right, the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410 SMART command passthrough?
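
    The empty output happens because the cciss passthrough treats the SSD as a SCSI device, so the ATA attribute table never gets queried. Newer smartmontools builds add a combined device type for SATA drives behind cciss controllers, which may help here; a hedged sketch, assuming a release recent enough to know that type:

        # SAT (ATA pass-through) to drive 0 behind the cciss controller
        smartctl -a -d sat+cciss,0 /dev/cciss/c1d0

    If the 5.39.1 build in use rejects that device type, compiling a current smartmontools from source is the usual workaround.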

    Read the article

  • keyboard mappings are totally screwed after updating to kde4

    - by zeonglow
    I recently upgraded from KDE 3.5 to KDE 4, and I have been having weird issues with my keyboard. In one of the virtual consoles e.g. when I press ctrl + alt 1 , I can type perfectly, but in KDE, several of the number keys don't work, the left and right arrows don't work either. When I press the right arrow key in xev I get this: KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 903459, (111,55), root:(115,836), state 0x10, keycode 114 (keysym 0x1008ff11, XF86AudioLowerVolume), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False When I press the '3' key it toggles my Bookmarks toolbar in Firefox, in xev I get this: KeyPress event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 999968, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 1000032, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False As this is deeper down, changing the type of keyboard in the KDE meun's has no effect. I'm slowly beginning to wade through the mountains of documentation about the X keyboard model, but there has to be a better way. Does anyone no what it is? Edit: 1234567890 ! after deleting the entire .kde folder. but only until I change the Keyboard settings from the "system settings" applet, then its hosed full time. Regardless of what I set the settings too. (restore to default settings doesn't) 2nd Edit: I'm using Gentoo AMD64, I was upgrading from KDE 3.5 KDE 4.2. I think I had manual settings before, although I didn't change anything. I was originally running KDE without HAL until that stop working a year or so ago. The only customisation I made was to set the multimedia keys to control Amarok. 3rd Edit $ grep xkb /var/log/Xorg.0.log (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "evdev" (**) Option "xkb_layout" "us" (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "evdev" (**) Option "xkb_layout" "us" Xorg.0.log has this to say: (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. (WW) Disabling Mouse1 (WW) Disabling Keyboard1 My Xorg.conf has this in it. Identifier "Keyboard1" Driver "kbd" Option "AutoRepeat" "500 30" # Specify which keyboard LEDs can be user-controlled (eg, with xset(1)) Option "XkbRules" "xorg" Option "XkbModel" "pc105" Option "XkbLayout" "gb"

    Read the article

  • Postmaster uses excessive CPU and Disk Writes

    - by wolfcastle
    using PostgreSQL 9.1.2 I'm seeing excessive CPU usage and large amounts of writes to disk from postmaster tasks. This happens even while my application is doing almost nothing (10s of inserts per MINUTE). There are a reasonable number of connections open however. I've been trying to determine what in my application is causing this. I'm pretty newb with postgresql, and haven't gotten anywhere so far. I've turned on some logging options in my config file, and looked at connections in the pg_stat_activity table, but they are all idle. Yet each connection consumes ~ 50% CPU, and is writing ~15M/s to disk (reading nothing). I'm basically using the stock postgresql.conf with very little tweaks. I'd appreciate any advice or pointers on what I can do to track this down. Here is a sample of what top/iotop is showing me: Cpu(s): 18.9%us, 14.4%sy, 0.0%ni, 53.4%id, 11.8%wa, 0.0%hi, 1.5%si, 0.0%st Mem: 32865916k total, 7263720k used, 25602196k free, 575608k buffers Swap: 16777208k total, 0k used, 16777208k free, 4464212k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17057 postgres 20 0 236m 33m 13m R 45.0 0.1 73:48.78 postmaster 17188 postgres 20 0 219m 15m 11m R 42.3 0.0 61:45.57 postmaster 17963 postgres 20 0 219m 16m 11m R 42.3 0.1 27:15.01 postmaster 17084 postgres 20 0 219m 15m 11m S 41.7 0.0 63:13.64 postmaster 17964 postgres 20 0 219m 17m 12m R 41.7 0.1 27:23.28 postmaster 18688 postgres 20 0 219m 15m 11m R 41.3 0.0 63:46.81 postmaster 17088 postgres 20 0 226m 24m 12m R 41.0 0.1 64:39.63 postmaster 24767 postgres 20 0 219m 17m 12m R 41.0 0.1 24:39.24 postmaster 18660 postgres 20 0 219m 14m 9.9m S 40.7 0.0 60:51.52 postmaster 18664 postgres 20 0 218m 15m 11m S 40.7 0.0 61:39.61 postmaster 17962 postgres 20 0 222m 19m 11m S 40.3 0.1 11:48.79 postmaster 18671 postgres 20 0 219m 14m 9m S 39.4 0.0 60:53.21 postmaster 26168 postgres 20 0 219m 15m 10m S 38.4 0.0 59:04.55 postmaster Total DISK READ: 0.00 B/s | Total DISK WRITE: 195.97 M/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 17962 be/4 postgres 0.00 B/s 14.83 M/s 0.00 % 0.25 % postgres: aggw aggw [local] idle 17084 be/4 postgres 0.00 B/s 15.53 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17963 be/4 postgres 0.00 B/s 15.00 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17188 be/4 postgres 0.00 B/s 14.80 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17964 be/4 postgres 0.00 B/s 15.50 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 18664 be/4 postgres 0.00 B/s 15.13 M/s 0.00 % 0.23 % postgres: aggw aggw [local] idle 17088 be/4 postgres 0.00 B/s 14.71 M/s 0.00 % 0.13 % postgres: aggw aggw [local] idle 18688 be/4 postgres 0.00 B/s 14.72 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 24767 be/4 postgres 0.00 B/s 14.93 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 18671 be/4 postgres 0.00 B/s 16.14 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 17057 be/4 postgres 0.00 B/s 13.58 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 26168 be/4 postgres 0.00 B/s 15.50 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 18660 be/4 postgres 0.00 B/s 15.85 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle

    Read the article

  • Perl missing while installing nginx on centos

    - by Ahoura Ghotbi
    I am trying to install nginx on my server, however it keeps returning "./configure: error: perl 5.6.1 or higher is required" eventhough I have perl v5.8.8!!!! I have already downloaded perl and trying to configure it using the following command : ./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming here is the output : [root@fst nginx-0.8.55]# ./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming checking for OS + Linux 2.6.18-308.el5 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-52) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found configuring additional modules adding module in nginx_mod_h264_streaming + ngx_http_h264_streaming_module was configured checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... not found checking for OpenSSL md5 crypto library ... found checking for zlib library ... found checking for perl + perl version: v5.8.8 built for x86_64-linux-thread-multi ./configure: error: perl 5.6.1 or higher is required
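
    nginx's configure script prints "perl 5.6.1 or higher is required" not only for an old perl but also when its embedded-perl probe (which relies on ExtUtils::Embed) fails, so the 5.8.8 version string can be a red herring. A hedged check and fix for CentOS, assuming the module or headers are simply missing:

        # Does the probe that ./configure relies on work at all?
        perl -MExtUtils::Embed -e ccopts

        # If that errors out, install the perl development pieces
        yum install perl-devel perl-ExtUtils-Embed

    On older CentOS releases ExtUtils::Embed ships inside the main perl/perl-devel packages, so the exact package name may differ.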

    Read the article

  • nginx connection timeout issue on some IPs

    - by sheldon
    I have recently shifted my server to nginx and php-fpm, getting rid of Apache. This has helped improve the speed of my website. Everything seemed to work fine until I came across this issue: I noticed that nginx keeps throwing connection timed out errors for only certain IPs. One of the IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config:

        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 300

    Since I have a server with 4 cores and 2 GB RAM, this is my nginx setup:

        worker_processes 4;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept off;
        }

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 55;
        recursive_error_pages on;
        server_name_in_redirect off;
        server_tokens off;
        client_header_timeout 3m;
        client_body_timeout 3m;
        send_timeout 3m;
        connection_pool_size 256;
        client_header_buffer_size 8k;
        large_client_header_buffers 4 32k;
        request_pool_size 4k;
        output_buffers 4 32k;
        postpone_output 1460;
        proxy_buffer_size 32k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        fastcgi_connect_timeout 120;
        fastcgi_send_timeout 120;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;

    Where am I going wrong with the config? I have tried various settings but the issue still persists. These are the errors I keep getting:

        2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"
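
    The "upstream timed out ... while reading response header" line points at php-fpm rather than nginx, so besides the nginx timeouts the pool's own limits are worth a look. A hedged sketch of pool directives that often matter here — the values are illustrative, not a recommendation:

        ; php-fpm pool file -- illustrative values
        listen.backlog = 1024            ; socket queue before new connections start timing out
        request_terminate_timeout = 120s
        slowlog = /var/log/php-fpm/www-slow.log
        request_slowlog_timeout = 10s    ; dump a trace of scripts that hang this long

    The php-fpm log itself ("max children reached" warnings) usually shows whether the 50-worker ceiling is being hit when the office traffic stalls.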

    Read the article

  • ISC DHCP - Force clients to get a new IP address, instead of the being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of workstations are configured to have fixed-addresses between .3 and .40 (the motivation behind that choice is mostly management/political much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty). When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure. .200 will remain designated for servers. The addressing space in between should be available to clients and IPs should be dynamically allocated (Edit: instead of automatic as originally mentioned) by the server. My Address Pool on the Windows Server looks like this: 192.168.0.1 192.168.0.254 (Address range for distribution) 192.168.0.1 192.168.0.10 (IP addresses excluded from distribution) 192.168.0.200 192.168.0.254 (IP addresses excluded from distribution) Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in the .100 - .175 (although there are lots devices that are powered off that still have expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP server how can I prevent clients from receiving a lease with an IP address that is currently being held by client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to be 192.168.0.10 - 192.168.0.199 is there a way to force clients to not re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server be authoritiative like the ISC implementation? The dhcpd.conf from the Debian server: ddns-update-style none; authoritative; default-lease-time 43200; #12 hours max-lease-time 86400; #24 hours subnet 192.168.0.0 netmask 255.255.255.0 { option routers 192.168.0.1; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; range 192.168.0.100 192.168.0.175; } host workstation-1 { hardware ethernet 00:11:22:33:44:55; fixed-address 192.168.0.3; } ... and so on until 192.168.0.40
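
    Two knobs that might help during the overlap window, hedged because neither one replaces a shared lease database: ISC dhcpd can ping an address before offering it, and the Windows DHCP role has a comparable conflict-detection setting.

        # dhcpd.conf on the Debian server -- probe an address before handing it out
        ping-check true;
        ping-timeout 1;

    On the Windows side the equivalent is Conflict Detection Attempts (IPv4 server properties, Advanced tab), which makes the new server ping candidate addresses before offering them, so it skips addresses still held by clients of the old server.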

    Read the article

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). On the email containing links to download the certificate, it mentioned: Certificate Details: SSL Type : InCommon SSL Server : OTHER I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just got signed by InCommon (not working). $ openssl x509 -in sso-current.cer -text shows, with irrelevant information ommitted. Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected] Validity Not Before: Oct 1 00:00:00 2009 GMT Not After : Nov 28 23:59:59 2012 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 CRL Distribution Points: Full Name: URI:http://crl.thawte.com/ThawteServerPremiumCA.crl X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication Authority Information Access: OCSP - URI:http://ocsp.thawte.com Signature Algorithm: sha1WithRSAEncryption and $ openssl x509 -in sso-new.cer -text shows Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA Validity Not Before: Nov 8 00:00:00 2012 GMT Not After : Nov 8 23:59:59 2014 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Authority Key Identifier: keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D X509v3 Subject Key Identifier: 18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: critical CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.5923.1.4.3.1.1 CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf X509v3 CRL Distribution Points: Full Name: URI:http://crl.incommon.org/InCommonServerCA.crl Authority Information Access: CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt OCSP - URI:http://ocsp.incommon.org Nothing jumps out at me as the reason one would not work so I don't have a specific request for the signer for what to do differently when re-signing.
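
    Assuming the 11g install keeps its certificates in an Oracle Wallet (the usual arrangement), the "server type" on the CA's form mostly affects which chain files you are handed, and the import typically fails when the InCommon intermediate is not already trusted in the wallet. A hedged sketch with orapki — the wallet path, file names and password are placeholders:

        # Add the intermediate/root chain first, then the server certificate
        orapki wallet add -wallet /path/to/wallet -trusted_cert -cert InCommonServerCA.crt -pwd WalletPassword
        orapki wallet add -wallet /path/to/wallet -user_cert -cert sso-new.cer -pwd WalletPassword

        # Confirm what ended up in the wallet
        orapki wallet display -wallet /path/to/wallet -pwd WalletPassword

    If the user_cert import complains about the issuer, that usually confirms the intermediate chain is the missing piece rather than anything about the server type.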

    Read the article

  • Are random packets normal?

    - by TheLQ
    About a month ago on one of my servers I started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS Gateway which comes with Snort and SnortSam. I enabled it, checked There is a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However checking some logs of this server I found that the attack is resuming. I found this ... Accepting connection from '75.166.155.122' [Auth] got unknown packet from '75.166.155.122' Accepting connection from '98.164.154.93' [Auth] got unknown packet from '98.164.154.93' Ping MySQL to keep connection alive Accepting connection from '70.241.195.129' [Auth] got unknown packet from '70.241.195.129' Accepting connection from '67.182.229.169' [Auth] got unknown packet from '67.182.229.169' Accepting connection from '69.137.140.38' [Auth] got unknown packet from '69.137.140.38' Accepting connection from '76.31.72.55' [Auth] got unknown packet from '76.31.72.55' Accepting connection from '97.88.139.39' [Auth] got unknown packet from '97.88.139.39' Accepting connection from '173.35.62.112' [Auth] got unknown packet from '173.35.62.112' Accepting connection from '187.15.10.73' [Auth] got unknown packet from '187.15.10.73' Accepting connection from '66.66.94.124' [Auth] got unknown packet from '66.66.94.124' Accepting connection from '75.159.219.124' [Auth] got unknown packet from '75.159.219.124' Accepting connection from '99.102.100.82' [Auth] got unknown packet from '99.102.100.82' Accepting connection from '24.128.240.45' [Auth] got unknown packet from '24.128.240.45' Accepting connection from '99.231.7.39' [Auth] got unknown packet from '99.231.7.39' Accepting connection from '206.255.79.56' [Auth] got unknown packet from '206.255.79.56' Accepting connection from '68.97.106.235' [Auth] got unknown packet from '68.97.106.235' Accepting connection from '69.134.67.251' [Auth] got unknown packet from '69.134.67.251' Accepting connection from '63.228.138.186' [Auth] got unknown packet from '63.228.138.186' Accepting connection from '184.39.146.193' [Auth] got unknown packet from '184.39.146.193' Accepting connection from '69.171.161.102' [Auth] got unknown packet from '69.171.161.102' Accepting connection from '76.0.47.228' [Auth] got unknown packet from '76.0.47.228' Ping MySQL to keep connection alive Accepting connection from '126.112.201.14' [Auth] got unknown packet from '126.112.201.14' Ping MySQL to keep connection alive Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what normally would these packets contain? Is this something I should be worried about? How can I stop this?

    Read the article

  • How is DNS used by individual processes?

    - by atroon
    When resolving FQDNs or machine names to IP addresses on my local network (mycompany.internal) I can use dig on the command line (Linux/Mac) or nslookup (Windows) to query the configured server and get a response. But trying to enter the FQDN or even just the machine name in a ping command or in a web browser results in 'Unknown Host' or DNS errors. Here's a sample, this one from the Mac:
        mac:~ atroon$ dig server.mycompany.internal

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> server.mycompany.internal
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5219
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;server.mycompany.internal.    IN    A

        ;; ANSWER SECTION:
        server.mycompany.internal.    1200    IN    A    172.16.254.36

        ;; Query time: 0 msec
        ;; SERVER: 172.16.254.8#53(172.16.254.8)
        ;; WHEN: Wed Dec 16 11:39:15 2009
        ;; MSG SIZE  rcvd: 55

        mac:~ atroon$ ping server.mycompany.internal
        ping: cannot resolve server.mycompany.internal: Unknown host
    I cannot for the life of me figure this one out. The DNS server is an SBS 2003 box which handles AD, some file/print, etc. for a small company network. This issue happens to me about three times a week, even when I'm connected to the local network directly, on the same switch as the server. I can make any connection I want with IP addresses, I just can't make DNS work. Additionally, at the same time I'm experiencing this, other users are fine, which makes me think it's a problem on my Mac. But what sort of problem? How can dig send a query and get a reply, and ping say 'unknown host'? I'm posting here vs. serverfault because I think this is a local problem, not a server problem... but if anyone can point me at the server, I guess we'll head down the street a domain or two.
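    Worth noting: dig and nslookup send their queries straight to the configured DNS server, while ping and the browser go through the system resolver (mDNSResponder on the Mac), so the two can disagree when the resolver's own configuration or cache is unhealthy. One way to compare the two views, as a sketch:
        # Ask the same question through the system resolver that ping uses
        dscacheutil -q host -a name server.mycompany.internal
        # Show which resolvers and search domains the system resolver currently has
        scutil --dns
        # Flush the resolver cache in case a negative answer got cached
        dscacheutil -flushcache
    If dscacheutil fails while dig succeeds, the problem lies in the Mac's resolver state (scoped resolvers, search domains, a stale cache) rather than on the SBS box, which would match the observation that other users are unaffected.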

    Read the article

  • Server 2008/Windows 7/Samba Unspecified error 80004005

    - by ancillary
    I have a Samba share on a LAN with 2008 PDC/DNS. Smb authenticates with AD and I have several Win7 machines that can connect fine. I recently added a couple of new computers to the LAN which were imaged the same way (same software, etc.; different hardware so different drivers) as the other machines, and they have the same policies set. I cannot get the new machines to connect to the Samba share no matter what; I am always met with either Unspecified Error 0x80004005 or Network Path not found. I've turned off the firewall; set LANMAN auth to respond to NTLM only/send LM & NTLM responses/use NTLM session security if negotiated in Local Security Policy, Security Options; and tried both IP and hostname to connect. The SMB log shows that authentication succeeds, but then the connection is immediately killed by the client. tcpdump shows nothing remarkable except that when trying to connect from the client via hostname there is an unknown packet type error: ack 201 win 255 NBT Session Packet: Unknown packet type 0xAB Data: (41 bytes). Here's a couple of lines from that error:
        11:18:37.964991 IP 001-client.domain.local.49372 > smb.domain.local.netbios-ssn: P 1670:2146(476) ack 201 win 255 NBT Session Packet: Unknown packet type 0xAB Data: (41 bytes)
        [000] AA 46 96 FA D5 99 33 75 0C C4 20 CE 26 42 F3 61   \252F\226\372\325\2313u \014\304 \316&B\363a
        [010] F0 8C FB 65 18 17 40 A5 DB 42 BB 94 37 53 92 EC   \360\214\373e\030\027@\245 \333B\273\2247S\222\354
        [020] 55 98 7F C4 AE 3D 6B 10 C4                        U\230\177\304\256=k\020 \304
        11:18:37.964998 IP smb.domain.local.netbios-ssn > 001-client.domain.local.49372: . ack 2146 win 100
    Here's smb.conf just in case (though I don't see how it could be the cause if the other machines are working fine):
        [global]
            workgroup = MYDOMAIN
            realm = MYDOMAIN.LOCAL
            server string = domain|smb share
            interfaces = eth1
            security = ADS
            password server = 192.168.1.3
            log level = 2
            log file = /var/log/samba/%m.log
            smb ports = 139
            strict locking = no
            load printers = No
            local master = No
            domain master = No
            wins server = 192.168.1.3
            wins support = Yes
            idmap uid = 500-10000000
            idmap gid = 500-10000000
            winbind separator = +
            winbind enum users = Yes
            winbind enum groups = Yes
            winbind use default domain = Yes

        [samba-share1]
            comment = SMB Share
            path = /home/share/smb/
            valid users = @"MYDOMAIN+Domain Users"
            admin users = @"MYDOMAIN+Domain Admins"
            guest ok = no
            read only = No
            create mask = 0765
            force directory mode = 0777
    Any ideas what else I could try or look for? Or what might be the problem? Thanks.
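    Since the new machines were imaged like the working ones, it may help to first confirm the server side still authenticates those credentials at all, and then raise logging only while reproducing the failure from a new client. A rough sketch, run on the Samba box ('someuser' is a placeholder for the account used on the failing machine):
        # Confirm the config parses and see what smbd actually loaded
        testparm -s
        # Check the domain trust and try the same AD credentials locally
        wbinfo -t
        wbinfo -a 'someuser%password'
        smbclient //localhost/samba-share1 -U 'someuser'
        # Temporarily set "log level = 3" under [global], then reload so the per-machine log captures more detail
        smbcontrol smbd reload-config
    With "winbind use default domain = Yes" the bare username should be accepted. If the local test works, comparing /var/log/samba/001-client.log at log level 3 against a working client's log usually shows where the session is torn down.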

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and the previous version of Ubuntu (9.04) was 1152 x 864, but Ubuntu 10.04 only gives me the options 1024 x 768 and 1360 x 768. I have somehow managed to add a 1152 x 864 resolution by using the xrandr command:
        searock@searock-desktop:~$ cvt 1152 864
        1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
        Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync

        searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync

        searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
        xrandr: cannot find output "S-video"

        searock@searock-desktop:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1360x768       59.8
           1024x768       60.0*
           800x600        60.3  56.2
           848x480        60.0
           640x480        59.9  59.9
          1152x864_60.00 (0x124)  81.0MHz
                h: width 1152 start 1216 end 1336 total 1520 skew 0 clock 53.3KHz
                v: height 864 start 867 end 871 total 897 clock 59.4Hz

        searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00
    But the problem is that whenever I restart my computer I get this message:
        Could not apply the stored configuration for the monitors.
        Could not find a suitable configuration of screens.
    And then it comes back to 1024 x 768. My graphic card details: Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.
    Edit 1: rumtscho has suggested that I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is an Acer v173. Here's my specification. So what should HorizSync and VertRefresh be?
    Edit 2: I have edited my xorg.conf file as follows:
        Section "Monitor"
            Identifier "Configured Monitor"
            HorizSync 30-80
            VertRefresh 55-75
        EndSection
    Then I added the resolution and restarted my computer, and I am still facing the same problem. Is there something that I am missing?
    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line initctl -q emit login-session-start DISPLAY_MANAGER=gdm
        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00
    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg file properly.
    Edit 4: Instead of adding this to the gdm startup script I have created a shell script and added it to startup (System - Preferences - Startup Applications):
        #!/bin/bash
        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00
    And don't forget to add execution rights (Right Click - Properties - Permissions - Allow executing file as program).
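    To avoid the startup-script workaround entirely, the same mode can be declared in the existing Monitor section of xorg.conf so the X server picks it up at start. This is only a sketch, reusing the Modeline that cvt printed above and the sync ranges from Edit 2; whether the intel driver honors PreferredMode on this particular 82945G chipset is untested here:
        Section "Monitor"
            Identifier  "Configured Monitor"
            HorizSync   30-80
            VertRefresh 55-75
            Modeline    "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync
            Option      "PreferredMode" "1152x864_60.00"
        EndSection
    If the mode still disappears after a reboot, grepping /var/log/Xorg.0.log for 1152x864 should show whether the driver rejected the mode and why.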

    Read the article

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts, I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "Hello" packet it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:
        AMT Operation Worker: Wakes up to process instruction files  7/29/2009 10:59:29 AM  2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds...  7/29/2009 10:59:29 AM  2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Description: A security error occurred  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it  7/29/2009 10:59:31 AM  3884 (0x0F2C)
    After investigation, I've seen that the problem already occurs during the 2nd stage of provisioning:
        Start 2nd stage provision on AMT device amt4.domaindemo.com.  8/2/2009 4:55:12 PM  2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001  8/2/2009 4:55:12 PM  2944 (0x0B80)
        Delete existing ACLs...  8/2/2009 4:55:12 PM  2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Description: A security error occurred  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries.  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)  8/2/2009 4:55:14 PM  2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0  8/2/2009 4:55:14 PM  2944 (0x0B80)
    This error is consistent across all the other 2nd stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!
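    Given that the issued certificates name the site server, it may be worth exporting one of them from the CA and checking whose name is really in the subject before re-provisioning. A sketch only; the exported file name is a placeholder:
        # Export the certificate the CA issued during 2nd-stage provisioning, then inspect it
        certutil -dump amt4-issued.cer
    The Subject CN (and any Subject Alternative Name) should carry the AMT host, amt4.domaindemo.com. If it shows the site server instead, a likely cause is the AMT web server certificate template building its subject from Active Directory rather than being configured to "Supply in the request", which the out of band service point needs so it can request certificates on behalf of each AMT client.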

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >