Search Results

Search found 11380 results on 456 pages for 'cpu speed'.


  • Passing parameters to a web service method in C#

    - by wafa.cs1
    Hello, I'm developing an Android application which sends location data to a web service that stores it in the server database. In Java I've used a REST-style call, so the request is: HttpPost request = new HttpPost("http://trafficmapsa.com/GService.asmx/GPSdata? lon="+Lon+"&Lat="+Lat+"&speed="+speed); The ASP.NET (C#) web service is: [WebMethod] public CountryName GPSdata(Double Lon, Double Lat, Double speed) { ... } After sending the data from Android, nothing comes back and no data appears in the database. Is there a library missing in the CS file? I cannot figure out what the problem is.

    Read the article

  • AES acceleration for Java

    - by chris_l
    I want to encrypt/decrypt lots of small (2-10 kB) pieces of data. The performance is OK for now: on a Core2Duo, I get about 90 MBytes/s AES256 (when using 2 threads). But I may need to improve that in the future, or at least reduce the impact on the CPU. Is it possible to use dedicated AES encryption hardware with Java (using JCE, or maybe a different API)? Would Java take advantage of special CPU features (SSE5?!) if I get a better CPU? Or are there faster JCE providers? (I tried SunJCE and BouncyCastle - no big difference.) Other possibilities?

    Read the article

  • C# animation - move object from A to B or by angle

    - by Nullstr1ng
    Hi, I'm just doing a little animation which moves an object from point A to point B, or by an angle in radians. What I currently have is this: Point CalcMove(Point pt, double angle, int speed) { Point ret = pt; ret.X = (int)(ret.X + speed * Math.Sin(DegToRad(angle))); ret.Y = (int)(ret.Y + speed * Math.Cos(DegToRad(angle))); return ret; } but it doesn't look like what I expected. Please help? Update: oh, and I am using NETCF (the .NET Compact Framework).
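
    One likely cause of jerky movement is that the position is truncated to an integer on every step, so the fractional part of each move is thrown away. A minimal, language-neutral sketch of the idea (written in Python here purely for brevity, not as NETCF code): keep the position in floats between frames and round only when drawing; whether X uses Cos or Sin simply depends on which axis the angle is measured from.

        import math

        def calc_move(x, y, angle_deg, speed):
            # The position stays in floats, so sub-pixel movement accumulates.
            rad = math.radians(angle_deg)
            return x + speed * math.cos(rad), y + speed * math.sin(rad)

        x, y = 0.0, 0.0
        for _ in range(10):                  # ten animation steps
            x, y = calc_move(x, y, 30.0, 1.5)
        print(round(x), round(y))            # round only when drawing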

    Read the article

  • How to prevent duplicates, macro or something?

    - by blez
    Well, the problem is that I've got a lot of code like this for each event passed to the GUI; how can I shorten it? Macros won't do the job, I guess. Is there a more generic way to do something like a 'template'? private delegate void DownloadProgressDelegate(object sender, DownloaderProgressArgs e); void DownloadProgress(object sender, DownloaderProgressArgs e) { if (this.InvokeRequired) { this.BeginInvoke(new DownloadProgressDelegate(DownloadProgress), new object[] { sender, e }); return; } label2.Text = d.speedOutput.ToString(); } private delegate void DownloadSpeedDelegate(object sender, DownloaderProgressArgs e); void DownloadSpeed(object sender, DownloaderProgressArgs e) { if (this.InvokeRequired) { this.BeginInvoke(new DownloadSpeedDelegate(DownloadSpeed), new object[] { sender, e }); return; } string speed = ""; speed = (e.DownloadSpeed / 1024).ToString() + "kb/s"; label3.Text = speed; }

    Read the article

  • How to printf a time_t variable as a floating point number?

    - by soneangel
    Hi guys, I'm using a time_t variable in C (OpenMP environment) to keep CPU execution time. I define a float value sum_tot_time to sum the time over all CPUs; that is, sum_tot_time is the sum of the CPUs' time_t values. The problem is that when I print sum_tot_time it appears as an integer or long, i.e. without its decimal part! I tried these combinations: printing sum_tot_time as a double when it is declared as a double; as a float when declared as a float; as a double when declared as time_t; and as a float when declared as time_t. Please help me!!

    Read the article

  • Google App Engine - Is this just a fluke, or could changing the version of an app improve cold-start

    - by Spines
    Here is the situation: I had an app with a cold start time of about 4 seconds. I was trying to improve the cold start time by removing a bunch of libraries and code I didn't really need. After doing that the cold start time was about 3 seconds latency, and 3 seconds CPU time used. I changed the version number in appengine-web.xml, and nothing else. And now I have two versions of my app that have the exact same code, up and running. For cold starts, the newer version uses 1800ms to 1900ms in CPU time, and has 1800ms to 2300ms in latency. For cold starts, the older version uses 2800ms to 3000ms in CPU time, and has 2300ms to 3600ms in latency. So far I have sampled 4 cold starts for each version.

    Read the article

  • My JavaScript code works in Internet Explorer but not in Mozilla

    - by goutham
    function buildMenu() { speed=35; topdistance=100; items=6; y=-50; ob=1; if (navigator.appName == "Netscape") { v=".top=",dS="document.",sD=""; } else { v=".pixelTop=",dS="",sD=".style"; } } function scrollItems() { if (ob<items+1) { objectX="object"+ob; y+=10; eval(dS + objectX + sD + v + y); if (y<topdistance) setTimeout("scrollItems()",speed); else y=-50, topdistance+=40, ob+=1, setTimeout("scrollItems()",speed); } }

    Read the article

  • Is code clearness killing application performance?

    - by Jorge Córdoba
    As today's code is getting more complex by the minute, it needs to be designed to be maintainable - meaning easy to read and easy to understand. That being said, I can't help but remember the programs that ran a couple of years ago, such as Winamp or some games, where you needed a high-performance program because your 486 at 100 MHz wouldn't play mp3s with that beautiful mp3 player which consumed all of your CPU cycles. Now I run Media Player (or whatever), start playing an mp3, and it eats up 25-30% of one of my four cores. Come on!! If a 486 can do it, how can the playback take up so much processor to do the same thing? I'm a developer myself, and I always used to advise: keep your code simple, don't prematurely optimize for performance. It seems that we've gone from "trying to get it to use the least amount of CPU possible" to "if it doesn't take too much CPU, it's all right". So, do you think we are killing performance by ignoring optimizations?

    Read the article

  • What Use are Threads Outside of Parallel Problems on Multi-Core Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system, if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance-related problems justify threading? Edit: I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
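
    As a concrete illustration of that sweet spot, here is a small sketch (Python, with placeholder URLs) where threads are used purely to overlap network waiting rather than for CPU parallelism; each thread spends most of its time blocked on the socket, so several transfers proceed at once even on a single core.

        import threading
        import urllib.request

        def fetch(url, sizes, i):
            # The thread blocks on the network, not the CPU.
            with urllib.request.urlopen(url, timeout=10) as resp:
                sizes[i] = len(resp.read())

        urls = ["http://example.com/"] * 4          # placeholder URLs
        sizes = [0] * len(urls)
        threads = [threading.Thread(target=fetch, args=(u, sizes, i))
                   for i, u in enumerate(urls)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(sizes)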

    Read the article

  • Definition of variables/fields type within a constructor, how is it done?

    - by elementz
    I just had a look at Sun's Java tutorial and found something that totally confused me. Given the following example: public Bicycle(int startCadence, int startSpeed, int startGear) { gear = startGear; cadence = startCadence; speed = startSpeed; } Why is it that the types of the variables (fields?) gear, cadence and speed do not need to be defined? I would have written it as follows: public Bicycle(int startCadence, int startSpeed, int startGear) { int gear = startGear; int cadence = startCadence; int speed = startSpeed; } What would be the actual difference?

    Read the article

  • Not able to kill bad kernel running on NVIDIA GPU

    - by arvindkgs
    Hi, I am in a real fix, please help; it's urgent. I have a host process that spawns multiple host (CPU) threads. These threads in turn call CUDA kernels. These CUDA kernels are written by external users, so they might be bad kernels that enter an infinite loop. To overcome this I have put in a time-out of 2 minutes that will kill the corresponding CPU thread. Will killing the CPU thread also kill the kernel running on the GPU? As far as I have tested, it doesn't. How can I also kill all the threads currently running on the GPU? Thanks, Arvind

    Read the article

  • malloc hangs in Linux

    - by Rahul
    I am using Linux on a 16 GB machine with 2 quad-core CPUs. There are 8 processes doing some work (CPU intensive/network I/O), of which 4 have a memory leak (these are test conditions, so there is no problem in having leaks here). The total space occupied by all processes is around 15.4 GB; only 200 MB is free on the system. Things are fine for some hours, but after that malloc hangs (in a process which doesn't have a memory leak). It's stuck for more than 4 minutes (note that CPU is not at 100%, but I/O has gone up significantly). There is no problem in the hung process itself (it has not corrupted memory). What is malloc doing? (Is it trying to defragment or building up swap space?) I am using SUSE 10. Any pointers?

    Read the article

  • Using Python to read specific (column, row) values from a txt file

    - by user2955708
    I would like to read values from a text file for as long as they are float values. Let's say I have the following file:
        14:53:55 Load 300 Speed 200
        Distance,m Fz Fx Speed
        0.0000 249 4 6.22
        0.0002 247 33 16.29
        0.0004 246 49 21.02
        0.2492 240 115 26.97
        0.2494 241 112 21.78
        0.2496 240 109 13.09
        0.2498 169 79 0.27
        Distance,m Fz Fx Speed
        0.0000 249 4 7.22
        0.0002 247 33 1.29
        0.0004 246 49 271.02
        0.2492 240 115 26.97
        0.2494 241 112 215.78
        0.2496 240 109 13.09
        0.2498 169 79 0.27
    I need only the values under the first Distance column. So something like: skip the first few rows, then read values from the first column while they are floats. Thanks for your help in advance.
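
    A minimal sketch of the "read the first column while it is a float" approach, assuming the data sits in a file called data.txt: it starts collecting after the first Distance,m header and stops at the second header or at the first row whose first field is not a number.

        distances = []
        with open("data.txt") as f:              # file name is an assumption
            collecting = False
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0].startswith("Distance"):
                    if collecting:               # second header: first block done
                        break
                    collecting = True
                    continue
                if collecting:
                    try:
                        distances.append(float(parts[0]))
                    except ValueError:
                        break                    # stop at the first non-float row
        print(distances)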

    Read the article

  • C++/g++: Concurrent program

    - by phimuemue
    Hi, I got a C++ program (source) that is said to work in parallel. However, if I compile it (I am using Ubuntu 10.04 and g++ 4.4.3) with g++ and run it, one of my two CPU cores gets full load while the other is doing "nothing". So I spoke to the one who gave me the program. I was told that I had to set specific flags for g++ in order to get the program compiled for 2 CPU cores. However, if I look at the code I'm not able to find any lines that point to parallelism. So I have two questions: Are there any C++-intrinsics for multithreaded applications, i.e. is it possible to write parallel code without any extra libraries (because I did not find any non-standard libraries included)? Is it true that there are indeed flags for g++ that tell the compiler to compile the program for 2 CPU cores and to compile it so it runs in parallel (and if: what are they)?

    Read the article

  • Fastest method of merging the two: dicts vs lists

    - by tipu
    I'm doing some indexing; memory is sufficient but CPU isn't. So I have one huge dictionary and a smaller dictionary I'm merging into the bigger one: big_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1}} smaller_dict = {"the" : {"6" : 1, "7" : 1}} #after merging resulting_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1, "6" : 1, "7" : 1}} My question is: for the values in both dicts, should I use a dict (as displayed above) or a list (as displayed below) when my priority is to spend memory in order to get the most out of my CPU? For clarification, using a list would look like: big_dict = {"the" : [1, 2, 3, 4, 5]} smaller_dict = {"the" : [6,7]} #after merging resulting_dict = {"the" : [1, 2, 3, 4, 5, 6, 7]} Side note: the reason I'm using a dict nested in a dict rather than a set nested in a dict is that JSON won't let me do json.dumps, because a set isn't key/value pairs; as far as the JSON library is concerned it's {"a", "series", "of", "keys"}. Also, after choosing between a dict and a list, how would I go about implementing the most CPU-efficient method of merging them? I appreciate the help.
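
    For the merge itself, a minimal sketch of the dict-of-dicts variant: dict.update does the per-word merge in C and collapses duplicate document IDs for free, which the list variant would not; the sample data below just mirrors the shape shown above.

        big_dict = {"the": {"1": 1, "2": 1, "3": 1, "4": 1, "5": 1}}
        smaller_dict = {"the": {"6": 1, "7": 1}, "cat": {"8": 1}}

        for word, postings in smaller_dict.items():
            if word in big_dict:
                big_dict[word].update(postings)   # C-level merge, duplicates collapse
            else:
                big_dict[word] = postings         # new term: reuse the small dict
        print(big_dict["the"])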

    Read the article

  • Secure way to run other people's code (sandbox) on my server?

    - by amikazmi
    I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, and ensure they won't be able to connect to other parts of my server (DB, main webserver, etc.). What's the best way to do it? Run VMware/VirtualBox: (+) I guess it's as secure as it gets; even if someone manages to "hack", they only hack the guest machine (+) can limit the CPU and memory the process uses (+) easy to set up, just create the VM (-) harder to "connect" the sandbox directory from the host to the guest (-) wastes extra memory and CPU managing the VM. Run as an underprivileged user: (+) doesn't waste extra resources (+) the sandbox directory is just a plain directory (?) can't limit CPU and memory? (?) don't know if it's secure enough... Any other way? The server runs Fedora Core 8, and the "other" code is written in Java & C++.
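
    As an illustration of the "underprivileged user" route, here is a minimal sketch (not a complete sandbox) of launching the untrusted program with CPU-time and memory caps through the POSIX rlimit mechanism, shown via Python's standard library; the binary path and directory are placeholders, and dropping privileges, chroot/containers and filesystem restrictions would still be needed on top of this.

        import resource
        import subprocess

        def limit_child():
            # Runs in the child just before exec: cap CPU seconds and address space.
            resource.setrlimit(resource.RLIMIT_CPU, (10, 10))                    # 10 s of CPU
            resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))   # 256 MB

        proc = subprocess.Popen(
            ["./untrusted_binary"],        # placeholder: the submitted program
            cwd="/srv/sandbox",            # placeholder: the sandbox directory
            preexec_fn=limit_child)
        print("exit status:", proc.wait())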

    Read the article

  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on problem where my machine started to drops packets with no sign of ANY system load or high interrupt usage after an upgrade to Ubuntu 12.04. My server is a network monitoring sensor, running Ubuntu LTS 12.04, it passively collects packets from 5 interfaces doing network intrusion type stuff. Before the upgrade I managed to collect 200+GB of packets a day while writing them to disk with around 0% packet loss depending on the day with the help of CPU affinity and NIC IRQ to CPU bindings. Now I lose a great deal of packets with none of my applications running and at very low PPS rate which a modern workstation NIC would have no trouble with. Specs: x64 Xeon 4 cores 3.2 Ghz 16 GB RAM NICs: 5 Intel Pro NICs using the e1000 driver (NAPI). [1] eth0 and eth1 are integrated NICs (in the motherboard) There are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet, the others are not because they're attached to hubs. Specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm uptime 17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05 # uname -a Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on. # lspci -t -vv -[0000:00]-+-00.0 Intel Corporation E7520 Memory Controller Hub +-02.0-[01-03]--+-00.0-[02]----0e.0 Dell PowerEdge Expandable RAID controller 4 | \-00.2-[03]-- +-04.0-[04]-- +-05.0-[05-07]--+-00.0-[06]----07.0 Intel Corporation 82541GI Gigabit Ethernet Controller | \-00.2-[07]----08.0 Intel Corporation 82541GI Gigabit Ethernet Controller +-06.0-[08-0a]--+-00.0-[09]--+-04.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | | \-04.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-00.2-[0a]--+-02.0 Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V | +-03.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-03.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) +-1d.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 +-1d.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 +-1d.2 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 +-1d.7 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller +-1e.0-[0b]----0d.0 Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE] +-1f.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge \-1f.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller I believe the NIC nor the NIC drivers are dropping the packets because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up this is where the drops would be. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values, I'm currently using the uncommented ones. # cat /etc/sysctl.conf # high net.core.netdev_max_backlog = 3000000 net.core.rmem_max = 16000000 net.core.rmem_default = 8000000 # defaults #net.core.netdev_max_backlog = 1000 #net.core.rmem_max = 131071 #net.core.rmem_default = 163480 # moderate #net.core.netdev_max_backlog = 10000 #net.core.rmem_max = 33554432 #net.core.rmem_default = 33554432 Here's an example of an interface stats report with ethtool. 
They are all the same, nothing is out of the ordinary ( I think ), so I'm only going to show one: ethtool -S eth2 NIC statistics: rx_packets: 7498 tx_packets: 0 rx_bytes: 2722585 tx_bytes: 0 rx_broadcast: 327 tx_broadcast: 0 rx_multicast: 1504 tx_multicast: 0 rx_errors: 0 tx_errors: 0 tx_dropped: 0 multicast: 1504 collisions: 0 rx_length_errors: 0 rx_over_errors: 0 rx_crc_errors: 0 rx_frame_errors: 0 rx_no_buffer_count: 0 rx_missed_errors: 0 tx_aborted_errors: 0 tx_carrier_errors: 0 tx_fifo_errors: 0 tx_heartbeat_errors: 0 tx_window_errors: 0 tx_abort_late_coll: 0 tx_deferred_ok: 0 tx_single_coll_ok: 0 tx_multi_coll_ok: 0 tx_timeout_count: 0 tx_restart_queue: 0 rx_long_length_errors: 0 rx_short_length_errors: 0 rx_align_errors: 0 tx_tcp_seg_good: 0 tx_tcp_seg_failed: 0 rx_flow_control_xon: 0 rx_flow_control_xoff: 0 tx_flow_control_xon: 0 tx_flow_control_xoff: 0 rx_long_byte_count: 2722585 rx_csum_offload_good: 0 rx_csum_offload_errors: 0 alloc_rx_buff_failed: 0 tx_smbus: 0 rx_smbus: 0 dropped_smbus: 01 # ifconfig eth0 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:373348 errors:16 dropped:95 overruns:0 frame:16 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:356830572 (356.8 MB) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8d UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:13616 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8690528 (8.6 MB) TX bytes:0 (0.0 B) eth2 Link encap:Ethernet HWaddr 00:04:23:e1:77:6a UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:7750 errors:0 dropped:471 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2780935 (2.7 MB) TX bytes:0 (0.0 B) eth3 Link encap:Ethernet HWaddr 00:04:23:e1:77:6b UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:5112 errors:0 dropped:206 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:639472 (639.4 KB) TX bytes:0 (0.0 B) eth4 Link encap:Ethernet HWaddr 00:04:23:b6:35:6c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:961467 errors:0 dropped:935 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:958561305 (958.5 MB) TX bytes:0 (0.0 B) eth5 Link encap:Ethernet HWaddr 00:04:23:b6:35:6d inet addr:192.168.1.6 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4264 errors:0 dropped:16 overruns:0 frame:0 TX packets:699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:572228 (572.2 KB) TX bytes:124456 (124.4 KB) I tried the defaults, then started to play around with settings. I wasn't using any flow control and I increased the RxDescriptor count to 4096 before the upgrade as well without any problems. # cat /etc/modprobe.d/e1000.conf options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16 Here's my network configuration file, I turned off checksumming and various offloading mechanisms along with setting CPU affinity with heavy use interfaces getting an entire CPU and light use interfaces sharing a CPU. I used these settings prior to the upgrade without problems. 
# cat /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet manual pre-up /sbin/ethtool -G eth0 rx 4096 tx 0 pre-up /sbin/ethtool -K eth0 gro off gso off rx off pre-up /sbin/ethtool -A eth0 rx off autoneg off up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/48/smp_affinity down ifconfig eth0 down post-down /sbin/ethtool -G eth0 rx 256 tx 256 post-down /sbin/ethtool -K eth0 gro on gso on rx on post-down /sbin/ethtool -A eth0 rx on autoneg on auto eth1 iface eth1 inet manual pre-up /sbin/ethtool -G eth1 rx 4096 tx 0 pre-up /sbin/ethtool -K eth1 gro off gso off rx off pre-up /sbin/ethtool -A eth1 rx off autoneg off up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/49/smp_affinity down ifconfig eth1 down post-down /sbin/ethtool -G eth1 rx 256 tx 256 post-down /sbin/ethtool -K eth1 gro on gso on rx on post-down /sbin/ethtool -A eth1 rx on autoneg on auto eth2 iface eth2 inet manual pre-up /sbin/ethtool -G eth2 rx 4096 tx 0 pre-up /sbin/ethtool -K eth2 gro off gso off rx off pre-up /sbin/ethtool -A eth2 rx off autoneg off up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "1" > /proc/irq/82/smp_affinity down ifconfig eth2 down post-down /sbin/ethtool -G eth2 rx 256 tx 256 post-down /sbin/ethtool -K eth2 gro on gso on rx on post-down /sbin/ethtool -A eth2 rx on autoneg on auto eth3 iface eth3 inet manual pre-up /sbin/ethtool -G eth3 rx 4096 tx 0 pre-up /sbin/ethtool -K eth3 gro off gso off rx off pre-up /sbin/ethtool -A eth3 rx off autoneg off up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "2" > /proc/irq/83/smp_affinity down ifconfig eth3 down post-down /sbin/ethtool -G eth3 rx 256 tx 256 post-down /sbin/ethtool -K eth3 gro on gso on rx on post-down /sbin/ethtool -A eth3 rx on autoneg on auto eth4 iface eth4 inet manual pre-up /sbin/ethtool -G eth4 rx 4096 tx 0 pre-up /sbin/ethtool -K eth4 gro off gso off rx off pre-up /sbin/ethtool -A eth4 rx off autoneg off up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/77/smp_affinity down ifconfig eth4 down post-down /sbin/ethtool -G eth4 rx 256 tx 256 post-down /sbin/ethtool -K eth4 gro on gso on rx on post-down /sbin/ethtool -A eth4 rx on autoneg on auto eth5 iface eth5 inet static pre-up /etc/fw.conf address 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.1 dns-nameservers 192.168.1.2 192.168.1.3 up ifconfig eth5 up post-up echo "8" > /proc/irq/77/smp_affinity down ifconfig eth5 down Here's a few examples of packet drops, i ran one after another, probabling totaling 3 or 4 seconds. You can see increases in the drops from the 1st and 3rd. This was a non-busy time, very little traffic. # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 505 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 507 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 227 lo: 0 eth2: 512 eth1: 0 eth5: 17 eth0: 105 eth4: 1039 I tried the pci=noacpi options. With and without, it's the same. 
This is what my interrupt stats looked like before the upgrade, after, with ACPI on PCI it showed multiple NICs bound to an interrupt and shared with other devices such as USB drives which I didn't like so I think i'm going to keep it with ACPI off as it's easier to designate sole purpose interrupts. Is there any advantage I would have using the default i.e. ACPI w/ PCI. ? # cat /etc/default/grub | grep CMD_LINE GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi" GRUB_CMDLINE_LINUX="" # cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 0: 45 0 0 16 IO-APIC-edge timer 1: 1 0 0 7936 IO-APIC-edge i8042 2: 0 0 0 0 XT-PIC-XT-PIC cascade 6: 0 0 0 3 IO-APIC-edge floppy 8: 0 0 0 1 IO-APIC-edge rtc0 9: 0 0 0 0 IO-APIC-edge acpi 12: 0 0 0 1809 IO-APIC-edge i8042 14: 1 0 0 4498 IO-APIC-edge ata_piix 15: 0 0 0 0 IO-APIC-edge ata_piix 16: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb2 18: 0 0 0 1350 IO-APIC-fasteoi uhci_hcd:usb4, radeon 19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3 23: 0 0 0 4099 IO-APIC-fasteoi ehci_hcd:usb1 38: 0 0 0 61963 IO-APIC-fasteoi megaraid 48: 0 0 1002319 4 IO-APIC-fasteoi eth0 49: 0 0 38772 3 IO-APIC-fasteoi eth1 77: 0 0 130076 432159 IO-APIC-fasteoi eth4 78: 0 0 0 23917 IO-APIC-fasteoi eth5 82: 1329033 0 0 4 IO-APIC-fasteoi eth2 83: 0 4886525 0 6 IO-APIC-fasteoi eth3 NMI: 5 6 4 5 Non-maskable interrupts LOC: 61409 57076 64257 114764 Local timer interrupts SPU: 0 0 0 0 Spurious interrupts IWI: 0 0 0 0 IRQ work interrupts RES: 17956 25333 13436 14789 Rescheduling interrupts CAL: 22436 607 539 478 Function call interrupts TLB: 1525 1458 4600 4151 TLB shootdowns TRM: 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 Machine check exceptions MCP: 16 16 16 16 Machine check polls ERR: 0 MIS: 0 Here's sample output of vmstat, showing the system. Barebones system right now. root@nms:~# vmstat -S m 1 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 14992 192 1029 0 0 56 2 419 29 1 0 99 0 0 0 0 14992 192 1029 0 0 0 0 922 27 0 0 100 0 0 0 0 14991 192 1029 0 0 0 36 763 50 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 646 35 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 722 54 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 793 27 0 0 100 0 ^C Here's dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66Mhz. Wouldn't they all drop to PCI:66Mhz? If your integrated NICs are PCI, as labeled below (eth0,eth1), then wouldn't all devices on your bus speed drop down to that slower bus speed? If not, I still don't know why only one of my NICs ( each has two ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds are is it showing that it's capable? # dmesg | grep e1000 [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation. 
[ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48 [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096 [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49 [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096 [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82 [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096 [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83 [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096 [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77 [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096 [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78 [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX At first I thought it was the NIC drivers but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated as I'm struggling with this. If you need more information just ask. Thanks! [1]http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8 [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm
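
    For watching the drop counters over time, here is a small stdlib-only Python sketch (an alternative to the awk one-liner above) that polls /proc/net/dev once a second and prints only the interfaces whose RX drop counter moved, which makes it easier to correlate drops with traffic bursts.

        import time

        def rx_drops():
            drops = {}
            with open("/proc/net/dev") as f:
                for line in f.readlines()[2:]:            # skip the two header lines
                    name, counters = line.split(":", 1)
                    drops[name.strip()] = int(counters.split()[3])  # 4th field = RX drops
            return drops

        prev = rx_drops()
        while True:
            time.sleep(1)
            cur = rx_drops()
            changed = {i: cur[i] - prev[i] for i in cur if cur[i] != prev[i]}
            if changed:
                print(time.strftime("%H:%M:%S"), changed)
            prev = cur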

    Read the article

  • Sphere entering into the cube (Unity)

    - by Parthi
    I am trying the Roll-a-Ball Unity tutorial. Everything is fine, but when I roll the ball it moves through the cube instead of picking it up. My player class is:

        using UnityEngine;
        using System.Collections;

        public class player : MonoBehaviour {
            public float speed;

            // Use this for initialization
            // Update is called once per frame
            void Update () {
                float h = Input.GetAxis("Horizontal");
                float v = Input.GetAxis("Vertical");
                Vector3 move = new Vector3(h, 0, v);
                rigidbody.AddForce(move * speed * Time.deltaTime);
            }

            void OnTriggerEnter(Collider other) {
                if (other.gameObject.tag == "Pick up") {
                    other.gameObject.SetActive(false);
                }
            }
        }

    Read the article

  • SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS

    - by pinaldave
    SQL Server Management Studio 2008 is a wonderful tool with many different features. Many times, an average user does not use them simply because they are not aware of them. Today, we will learn one such feature. SSMS comes with many built-in performance and activity reports, but we do not use them to their full potential. Let us see how we can access these standard reports. Connect to the SQL Server node >> right-click on it >> go to Reports >> click on Standard Reports >> pick any report. You can see that many reports an average user needs right away are available there. Let me list all the reports available: Server Dashboard, Configuration Changes History, Schema Changes History, Scheduler Health, Memory Consumption, Activity – All Blocking Transactions, Activity – All Cursors, Activity – All Sessions, Activity – Top Sessions, Activity – Dormant Sessions, Activity – Top Connections, Top Transactions by Age, Top Transactions by Blocked Transactions Count, Top Transactions by Locks Count, Performance – Batch Execution Statistics, Performance – Object Execution Statistics, Performance – Top Queries by Average CPU Time, Performance – Top Queries by Average IO, Performance – Top Queries by Total CPU Time, Performance – Top Queries by Total IO, Service Broker Statistics, Transactions, and Log Shipping Status. In fact, when you look at the above list, it is fairly clear that these are well thought out and commonly needed reports available in SQL Server 2008. Let us run a couple of them, for example Performance – Top Queries by Total CPU Time and Memory Consumption, and observe their results. There are options for custom reports as well, which we can configure; we will learn about them in some other post. Additionally, you can right-click on the reports and export to Excel or PDF. I think this tool can really help those who are just looking for some quick details. Do any of you use this feature, or does it have limitations and you would like to see more features? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Approximating walking physics via simpler sliding physics

    - by Dave
    I am modeling walking insects. I implement them as cuboids and use forces (including friction and drag) to control motion. However, the movement characteristics of this 'sliding box' physics don't match those of a legged creature. For example, legged creatures accelerate to their top speed near-instantly, whereas applying a force to a box takes time to accelerate it. The applied force can be increased along with the counteracting drag, giving much quicker acceleration (via force) up to a max speed (via drag). However, this also means the force the creatures can exert when pushing on other objects is increased. Does anyone know of any techniques for using a physics engine to cheaply model walking creatures?
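
    One common workaround is to drive locomotion kinematically: set the body's velocity directly (clamped to a top speed) and let the physics engine handle only collisions, so creatures reach top speed instantly without exaggerating the forces they can exert on other objects. A minimal sketch of such a kinematic step, with placeholder constants:

        import math

        MAX_SPEED = 2.0                    # assumed top walking speed, units/s

        def step(pos, target, dt):
            # Jump straight to the desired velocity instead of accelerating via forces.
            dx, dy = target[0] - pos[0], target[1] - pos[1]
            dist = math.hypot(dx, dy)
            if dist < 1e-9:
                return pos
            v = min(MAX_SPEED, dist / dt)  # clamp so the creature never overshoots
            return (pos[0] + dx / dist * v * dt,
                    pos[1] + dy / dist * v * dt)

        pos = (0.0, 0.0)
        for _ in range(5):
            pos = step(pos, (3.0, 4.0), 1.0 / 60.0)
        print(pos)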

    Read the article

  • Internet Protocol Suite: Transmission Control Protocol (TCP) vs. User Datagram Protocol (UDP)

    How do we communicate over the Internet? How is data transferred from one machine to another? These activities are handled by one of two transport protocols: the Internet Protocol suite offers the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols are used to send data between two network endpoints, but they have very distinct ways of transporting data from one endpoint to another. If reliability is the primary concern when transferring data between two network endpoints, then TCP is the proper choice. When a device sends data to another endpoint using TCP, it first establishes a connection between the two devices and maintains it until the transmission has completed. That connection gives the transmission its reliability: segments are acknowledged by the receiver and retransmitted if they are lost. Because both devices have to maintain and monitor the connection until the transmission has completed, more resources are needed to perform the transfer. An example of this type of direct communication can be seen when a teacher tells students to do their homework. The teacher talks directly to the students to communicate that the homework needs to be done, and the students can ask questions about the assignment to make sure they received the proper instructions. UDP is a less resource-intensive approach to sending data between two network endpoints. When a device uses UDP, the data is broken up into datagrams stamped with the destination address. The sending device then releases the datagrams onto the network, but it cannot ensure when, or whether, the receiving device will actually get them; it receives no acknowledgement that the transmission arrived. This type of transmission is less resource intensive because no connection has to be set up or tracked, but it should not be used for data with strict delivery requirements, since the sender cannot confirm that the transmission was received. An example of this type of communication can be seen when a teacher tells a student that they would like to speak with the student's parents. The teacher relies on the student to relay the message, and has no guarantee that the student will actually inform the parents about the request. Both TCP and UDP are invaluable when sending data across a network, but depending on the situation one protocol may be better than the other. Before deciding which protocol to use, evaluate the requirements for transmission speed, reliability, latency, and overhead in order to choose the best protocol for the situation.
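
    To make the contrast concrete, here is a minimal sketch of both styles of sending using Python's standard socket module; the address is a placeholder, and the TCP connect will only succeed if something is actually listening there.

        import socket

        ADDR = ("127.0.0.1", 9999)        # placeholder address

        # TCP: a connection is set up first; delivery and ordering are guaranteed.
        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.connect(ADDR)                 # raises an error unless a listener exists
        tcp.sendall(b"homework for everyone")
        tcp.close()

        # UDP: no connection; the datagram is handed to the network and may be lost.
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.sendto(b"please tell your parents", ADDR)
        udp.close()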

    Read the article

  • ISO Files to USB – The Cheap and Easy Way

    - by RonGarlit
    (DISCLAIMER: Yes there are lots of more elegant ISO software beside the free Microsoft one I’m about to show. But free is free and it has been tested and works for me for making advance bootable USB drives. That is another story. Look up Windows 8 Developer Preview for that one on BING.) For those of use that work with new technology all the time we accumulate a lot of ISO files and have to burn them to CD/DVD’s quite often. But we now have machines without burner in the corporate environment. We have personally Netbooks and light wait highly mobile laptops that do not have DVD burner. USB ports are all the rage and now we have USB 3.0 which is way faster than the 2.0 we are used to. Just looking at the technology, space saving and the cost issues alone is a reason to buy these answer to the DVD’s. So what is special about USB 2.0 and USB 3.0? USB 2 has a maximum speed of 480 Mbps... (That is Megabits per SECOND!!) Now look at the storage that we have with USB thumb drives that are now up to 64 GB in size, cell phone and PDAs that have a lots of internal storage built in well above the 16 Gig range. At the MAX USB 2.0 speed of 480 Mbps a full transfer of data in between devices can take a long time. Time is money right. Every back up a iPhone? Don’t get me started. So at least the engineers have been planning ahead with USB 3.0 which offers a maximum transfer speed of 4.8 Gbps... (That is Giga bits per SECOND!!) That speed is almost 10 times faster than USB 2.0 …. We don’t need to do the math on that one do we? But for now I'm thrilled with USB 2.0 and the fact I can get these little 4 Gig USB drives for $4.00 each at Staples on sale. Well that is a no brainer don’t you think. But what can you do with them to replace that DVD. Simply and cheaply put………. THIS! First let’s get an ISO file like the Visual Studio 2010 Ultimate DVD ISO from MSDN to demonstrate with. I develop on several computers so this is a good choice for me. So we downloaded the ISO file and put it in a folder somewhere like this. Next we go download to the Windows 7 USB/DVD Download Tool site and read about the tool. http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool And click this like to get the tool and install it. Once it is installed you go to the Start, Programs menu, Windows 7 USB DVD Download Tool folder. And then click the tool to open it up. As you will see it is a sweet, simple tool that was originally designed to put the ISO for Windows 7 which is designed to be bootable on a USB or DVD for us geeks to play with. It is now being used for the Windows 8 Developer Preview by many developers for that for the same purpose it was built for in the past. But for now we will use it to put a NON Bootable ISO on a USB. Hey it does the job and I’m reusing a left over program. Why buy the fancy one or a free trial and clutter up my machine. We will click the BROWSE button and navigate to where we put our ISO file we want to put on the USB drive. Obviously we are going to click NEXT and continue to select a USB Device (you can guess what the DVD button is for). Next we select the USB that we have plugged into one of our laptops USB ports. Then we click the BEGIN COPYING button and the first thing the program does is format our USB drive. Then it starts copying out files out of the ISO and constructing the USB as if it was a DVD. So now that the files are copying to the drive I’m going to warn you. We will error out here. This program was design for bootable ISO’s of which this one is NOT. 
No problem because what fails it the writing of the bootable data to the drive that isn’t there. No biggie…. Forget the STARTOVER button is even there and click the dialog’s CLOSE button and exit the program. Now go to Windows Explorer and navigate to the USB Device. You can now access everything and even add stuff to the drive. But for me I want to keep this drive for one purpose and that is to install VS2010 on various machines. So the only stuff I’ll add to this is a folder of notes on things on visual studio that I might want to put on other machines I’m installing VS2010 on to. So that is it. Have a nice day! The Ron

    Read the article

  • ~/.xsession-errors is 2.7 GB (and growing) on a fresh install, caused by gnome-settings-daemon errors

    - by Alex Black
    I've just installed Ubuntu 10.10 x64, activated the recommended Nvidia drivers, and I noticed my hard disk space is disappearing, I narrowed the culprit down to this: alex@alex-home:~$ ls -la .x* -rw------- 1 alex alex 4436076400 2010-11-19 22:35 .xsession-errors -rw------- 1 alex alex 10495 2010-11-19 21:46 .xsession-errors.old Any idea what this file is, why its so big, and why its growing? A few seconds later: alex@alex-home:~$ ls -la .x* -rw------- 1 alex alex 5143604317 2010-11-19 22:36 .xsession-errors -rw------- 1 alex alex 10495 2010-11-19 21:46 .xsession-errors.old tailing it: alex@alex-home:~$ tail .xsession-errors (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed Also, the process "gnome-settings" seems to be using 100% cpu: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1514 alex 20 0 268m 10m 7044 R 100 0.1 7:06.10 gnome-settings-

    Read the article

  • What is causing the unusually high load average and IOwait?

    - by James
    I noticed on Tuesday night of last week, the load average went up sharply and it seemed abnormal since the traffic is small. Usually, the numbers usually average around .40 or lower and my server stuff (mysql, php and apache) are optimized. I noticed that the IOWait is unusually high even though the processes is barely using any CPU. top - 01:44:39 up 1 day, 21:13, 1 user, load average: 1.41, 1.09, 0.86 Tasks: 60 total, 1 running, 59 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu4 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu6 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu7 : 0.0%us, 0.0%sy, 0.0%ni, 91.5%id, 8.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1048576k total, 331944k used, 716632k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2468 1376 1140 S 0 0.1 0:00.92 init 1656 root 15 0 13652 5212 664 S 0 0.5 0:00.00 apache2 9323 root 18 0 13652 5212 664 S 0 0.5 0:00.00 apache2 10079 root 18 0 3972 1248 972 S 0 0.1 0:00.00 su 10080 root 15 0 4612 1956 1448 S 0 0.2 0:00.01 bash 11298 root 15 0 13652 5212 664 S 0 0.5 0:00.00 apache2 11778 chikorit 15 0 2344 1092 884 S 0 0.1 0:00.05 top 15384 root 18 0 17544 13m 1568 S 0 1.3 0:02.28 miniserv.pl 15585 root 15 0 8280 2736 2168 S 0 0.3 0:00.02 sshd 15608 chikorit 15 0 8280 1436 860 S 0 0.1 0:00.02 sshd Here is the VMStat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 0 768644 0 0 0 0 14 23 0 10 1 0 99 0 IOStat - Nothing unusal Total DISK READ: 67.13 K/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND 19496 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19501 be/4 mysql 3.95 K/s 0.00 B/s 0.00 % 0.00 % mysqld 19568 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19569 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19570 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19571 be/4 chikorit 7.90 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19573 be/4 chikorit 7.90 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init 11778 be/4 chikorit 0.00 B/s 0.00 B/s 0.00 % 0.00 % top 19470 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld Load Average Chart - http://i.stack.imgur.com/kYsD0.png I want to be sure if this is not a MySQL problem before making sure. Also, this is a Ubuntu 10.04 LTS Server on OpenVZ. 
Edit: This will probably give a good picture on the IO Wait top - 22:12:22 up 17:41, 1 user, load average: 1.10, 1.09, 0.93 Tasks: 33 total, 1 running, 32 sleeping, 0 stopped, 0 zombie Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 89.0%id, 10.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1048576k total, 260708k used, 787868k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2468 1376 1140 S 0 0.1 0:00.88 init 5849 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 8063 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 9732 root 16 0 8280 2728 2168 S 0 0.3 0:00.02 sshd 9746 chikorit 18 0 8412 1444 864 S 0 0.1 0:01.10 sshd 9747 chikorit 18 0 4576 1960 1488 S 0 0.2 0:00.24 bash 13706 chikorit 15 0 2344 1088 884 R 0 0.1 0:00.03 top 15745 chikorit 15 0 12968 5108 1280 S 0 0.5 0:00.00 apache2 15751 chikorit 15 0 72184 25m 18m S 0 2.5 0:00.37 php5-fpm 15790 chikorit 18 0 12472 4640 1192 S 0 0.4 0:00.00 apache2 15797 chikorit 15 0 72888 23m 16m S 0 2.3 0:00.06 php5-fpm 16038 root 15 0 67772 2848 592 D 0 0.3 0:00.00 php5-fpm 16309 syslog 18 0 24084 1316 992 S 0 0.1 0:00.07 rsyslogd 16316 root 15 0 5472 908 500 S 0 0.1 0:00.00 sshd 16326 root 15 0 2304 908 712 S 0 0.1 0:00.02 cron 17464 root 15 0 10252 7560 856 D 0 0.7 0:01.88 psad 17466 root 18 0 1684 276 208 S 0 0.0 0:00.31 psadwatchd 17559 root 18 0 11444 2020 732 S 0 0.2 0:00.47 sendmail-mta 17688 root 15 0 10252 5388 1136 S 0 0.5 0:03.81 python 17752 teamspea 19 0 44648 7308 4676 S 0 0.7 1:09.70 ts3server_linux 18098 root 15 0 12336 6380 3032 S 0 0.6 0:00.47 apache2 18099 chikorit 18 0 10368 2536 464 S 0 0.2 0:00.00 apache2 18120 ntp 15 0 4336 1316 984 S 0 0.1 0:00.87 ntpd 18379 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 18387 mysql 15 0 62796 36m 5864 S 0 3.6 1:43.26 mysqld 19584 root 15 0 12336 4028 668 S 0 0.4 0:00.02 apache2 22498 root 16 0 12336 4028 668 S 0 0.4 0:00.00 apache2 24260 root 15 0 67772 3612 1356 S 0 0.3 0:00.22 php5-fpm 27712 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 27730 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 30343 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 30366 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2
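
    To confirm how much CPU time really goes to iowait (the %wa column in top), here is a small sketch that samples the aggregate cpu line of /proc/stat twice and reports the iowait share over the interval; the 5-second window is arbitrary.

        import time

        def cpu_times():
            with open("/proc/stat") as f:
                return [int(x) for x in f.readline().split()[1:]]   # aggregate "cpu" line

        a = cpu_times()
        time.sleep(5)
        b = cpu_times()
        delta = [y - x for x, y in zip(a, b)]
        iowait = delta[4]                    # 5th field of the cpu line is iowait
        print("iowait over the interval: %.1f%%" % (100.0 * iowait / sum(delta)))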

    Read the article

  • Solaris 11 pkg fix is my new friend

    - by user12611829
    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox. Here's what I did. # rm -rf /install/* # rm -rf /var/ai # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 So far so good. Then comes an oops..... setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try. # pkg fix installadm Verifying: pkg://solaris/install/installadm ERROR dir: var/ai Group: 'root (0)' should be 'sys (3)' dir: var/ai/ai-webserver Missing: directory does not exist dir: var/ai/ai-webserver/compatibility-configuration Missing: directory does not exist dir: var/ai/ai-webserver/conf.d Missing: directory does not exist dir: var/ai/image-server Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/cgi-bin Missing: directory does not exist dir: var/ai/image-server/images Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/logs Missing: directory does not exist dir: var/ai/profile Missing: directory does not exist dir: var/ai/service Group: 'root (0)' should be 'sys (3)' dir: var/ai/service/.conf-templ Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_data Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_files Missing: directory does not exist file: var/ai/ai-webserver/ai-httpd-templ.conf Missing: regular file does not exist file: var/ai/service/.conf-templ/AI.db Missing: regular file does not exist file: var/ai/image-server/cgi-bin/cgi_get_manifest.py Missing: regular file does not exist Created ZFS snapshot: 2012-12-11-21:09:53 Repairing: pkg://solaris/install/installadm Creating Plan (Evaluating mediators): | DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 3/3 0.0/0.0 0B/s PHASE ITEMS Updating modified actions 16/16 Updating image state Done Creating fast lookup database Done In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well. # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. 
Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 Refreshing install services Warning: mDNS registry of service solaris11-x86 could not be verified. Creating default-i386 alias Setting the default PXE bootfile(s) in the local DHCP configuration to: bios clients (arch 00:00): default-i386/boot/grub/pxegrub Refreshing install services Warning: mDNS registry of service default-i386 could not be verified. # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 514/514 292.3/292.3 0B/s PHASE ITEMS Installing new actions 661/661 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11u1-x86 Image path: /install/solaris11u1-x86 Refreshing install services Warning: mDNS registry of service solaris11u1-x86 could not be verified. # installadm list Service Name Alias Of Status Arch Image Path ------------ -------- ------ ---- ---------- default-i386 solaris11-x86 on i386 /install/solaris11-x86 solaris11-x86 - on i386 /install/solaris11-x86 solaris11u1-x86 - on i386 /install/solaris11u1-x86 This is way way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.

    Read the article
