Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • Is there a way to reliably detect the total number of CPU cores?

    - by John Sheares
    I need a reliable way to detect how many CPU cores are in a computer. I am creating a numerically intense simulation in C# and want to create as many running threads as there are cores. I have tried many of the methods suggested around the internet, such as Environment.ProcessorCount, WMI, and this code: http://blogs.adamsoftware.net/Engine/DeterminingthenumberofphysicalCPUsonWindows.aspx. None of them seem to think an AMD X2 has two cores. Any ideas?
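
    For reference, a minimal sketch of the WMI route (assuming a reference to System.Management; Win32_Processor.NumberOfCores is missing on some older Windows releases), which separates physical cores from the logical-processor count that Environment.ProcessorCount reports:

        // Sketch: sum physical cores across all CPU packages via WMI.
        // Environment.ProcessorCount returns logical processors, which can differ on hyper-threaded CPUs.
        using System;
        using System.Management;

        class CoreCount
        {
            static void Main()
            {
                int physicalCores = 0;
                var searcher = new ManagementObjectSearcher("SELECT NumberOfCores FROM Win32_Processor");
                foreach (ManagementObject cpu in searcher.Get())
                {
                    physicalCores += Convert.ToInt32(cpu["NumberOfCores"]);
                }
                Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
                Console.WriteLine("Physical cores:     " + physicalCores);
            }
        }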

    Read the article

  • How to open many tabs in Chromium but unload/disable inactive ones, releasing memory and CPU?

    - by Aquarius Power
    So I have 50 tabs open in Chromium, but that is using too much memory and some of the CPU. How can I keep all those concurrent researches I am doing open but not clog my machine? I think there should be a way that only the active tab is loaded in memory and running, and all the others stay closed/unloaded from memory until I want to look at them... Can any extension do something like that?

    Read the article

  • What does "cpuid level" means ? Asking just for curiosity

    - by ogzylz
    For example, I am pasting the info for just 2 cores of a 16-core machine. What does the "cpuid level : 6" line mean? If you can provide info about the lines "bogomips : 5992.10" and "clflush size : 64" I would appreciate it. processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 6 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 8 cpu MHz : 2992.689 cache size : 4096 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 6 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm bogomips : 5992.10 clflush size : 64 cache_alignment : 128 address sizes : 40 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 15 model : 6 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 8 cpu MHz : 2992.689 cache size : 4096 KB physical id : 1 siblings : 4 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 6 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm bogomips : 5985.23 clflush size : 64 cache_alignment : 128 address sizes : 40 bits physical, 48 bits virtual power management:

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system. http://create.msdn.com/en-US/education/catalog/sample/particle_3d I have a function such that when I press Space, all the particles have their positions and velocities set to 0. for (int i = 0; i < particles.GetLength(0); i++) { particles[i].Position = Vector3.Zero; particles[i].Velocity = Vector3.Zero; } However, when I press space, the particles are still moving. If I go to FireParticleSystem.cs I can turn settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0). As I understand it, the problem lies in the fact that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and multiplying by their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.
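
    As a hedged illustration of the CPU-side approach described above (the Particle struct, names and forces below are assumptions, not part of the MSDN sample), the idea is to keep positions in ordinary arrays, integrate them yourself every Update, and push the results into a dynamic vertex buffer before drawing:

        // Hypothetical CPU-side particle update (not the sample's GPU/vertex-shader path).
        // Positions live in ordinary CPU memory, so they can be reset or pushed around at any time.
        using Microsoft.Xna.Framework;

        struct Particle
        {
            public Vector3 Position;
            public Vector3 Velocity;
        }

        static class ParticleCpuUpdate
        {
            public static void Update(Particle[] particles, Vector3 wind, Vector3 gravity, float dt)
            {
                for (int i = 0; i < particles.Length; i++)
                {
                    particles[i].Velocity += (gravity + wind) * dt;       // apply forces on the CPU
                    particles[i].Position += particles[i].Velocity * dt;  // integrate position
                }
                // After this, copy the updated positions into a DynamicVertexBuffer (SetData)
                // each frame instead of letting the shader derive positions from particle age.
            }
        }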

    Read the article

  • Check what process was causing the problem of high CPU load

    - by linuxk
    I'm running an nginx WordPress server in KVM on 12.04 server x86. It was running very well for about 4 months until 2 hours ago, when I found that my website was down with no ping response. Virt-manager logged high CPU load (please see the picture below) before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log files make me think my server was attacked. Any suggestions and help would be appreciated. kern.log and syslog showed me the same output. Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0 /nginx/access.log showed me 119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)" my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com" The server turned on here. 119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"

    Read the article

  • XNA: after mouse click, CPU usage goes to 100%

    - by kosnkov
    Hi, I have the following code, and it is enough just to click on the blue window for the CPU to go to 100% for at least a minute, even with my 4-core i7. I checked with an empty project and it is the same! public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; private Texture2D cursorTex; private Vector2 cursorPos; GraphicsDevice device; float xPosition; float yPosition; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { Viewport vp = GraphicsDevice.Viewport; xPosition = vp.X + (vp.Width / 2); yPosition = vp.Y + (vp.Height / 2); device = graphics.GraphicsDevice; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); cursorTex = Content.Load<Texture2D>("strzalka"); } protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(cursorTex, cursorPos, Color.White); spriteBatch.End(); base.Draw(gameTime); } }

    Read the article

  • Identify the high-CPU-consuming thread in a Java app

    - by Vincent Ma
    The following Java code emulates a busy and an idle thread and starts them: import java.util.concurrent.*; import java.lang.*; public class ThreadTest { public static void main(String[] args) { new Thread(new Idle(), "Idle").start(); new Thread(new Busy(), "Busy").start(); } } class Idle implements Runnable { @Override public void run() { try { TimeUnit.HOURS.sleep(1); } catch (InterruptedException e) { } } } class Busy implements Runnable { @Override public void run() { while(true) { "Test".matches("T.*"); } } } Use Process Explorer on the busy Java process to find the thread ID that is costing a lot of CPU; see the following screenshot. Converting 4044 to hexadecimal gives 0xFCC. Use VisualVM to take a thread dump and find that thread; see the following screenshot. On Linux you can use top -H to get thread information. That's it! Any questions, let me know. Thanks

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMw (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short and at the bottom you will find the following entry: <dumpConfiguration>     <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/> </dumpConfiguration> The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.
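
    For example, to stop only the periodic dumps while leaving runtime collection and the /dms/Spy display untouched, that same entry would be changed to:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>
        </dumpConfiguration>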

    Read the article

  • C++ Program Flow: Sockets in an Object and the Main Function

    - by jfm429
    I have a rather tricky problem regarding C++ program flow using sockets. Basically what I have is this: a simple command-line socket server program that listens on a socket and accepts one connection at a time. When that connection is lost it opens up for further connections. That socket communication system is contained in a class. The class is fully capable of receiving the connections and mirroring the data received to the client. However, the class uses UNIX sockets, which are not object-oriented. My problem is that in my main() function, I have one line - the one that creates an instance of that object. The object then initializes and waits. But as soon as a connection is gained, the object's initialization function returns, and when that happens, the program quits. How do I somehow wait until this object is deleted before the program quits? Summary: main() creates instance of object Object listens Connection received Object's initialization function returns main() exits (!) What I want is for main() to somehow delay until that object is finished with what it's doing (aka it will delete itself) before it quits. Any thoughts?

    Read the article

  • Sockets and COBOL

    - by kati
    I have received a job at a hospital which still uses COBOL for all organizational work; the whole (now 20 terabyte) database (which was a homebrew in, guess what, COBOL) is filled with the data of every patient from the last 45 (or so) years. So that was my story. Now to my question: currently, all sockets were (from what I've seen) implemented by COBOL programs writing their data into files. These files were then read out by C++ programs (that was an additional module added in the late 1980s) and sent to the database using C++ sockets. Now this solution has stopped working as they are moving the database from COBOL to COBOL; yes - they didn't use MySQL or the like - they implemented a new database, again in COBOL. I asked the guy that worked there before me (he's around 70 now) why the hell someone would do that, and he told me that he is so good at COBOL that he doesn't want to write it in any other language. So far so good; now my question: how can I implement socket connections in COBOL? I need to create an interface to the external COBOL database located at, for example, 192.168.1.23:283.

    Read the article

  • Fan always on and overheating problems on HP G62

    - by Dave
    I have an HP G62 with an i5 processor, 4 GB RAM and an ATI HD 5470. I started using Ubuntu 4 months ago, first 11.10 and then 12.04. Everything worked perfectly until some weeks ago, when my CPU fan went "crazy" - by which I mean always on, noisy, and with overheating problems; for sure if I put a glass of water on it, in a few minutes it will boil (and I mean it). Could there be a bug in the latest kernel or any other official Ubuntu update? And by the way, the CPU fan is cleaned like a newborn baby; I used Jupiter and all that stuff (lm-sensors, psensor...) - same problem. Don't ask me to install the ATI drivers because they work like sh#t (the interface is laggy as the HD 5470 isn't supported by Ubuntu) and I can't even open ATI Catalyst (error when I try to open it); more than this, I always used the open source drivers and, as I said, they worked perfectly (3D, effects). Right now I'm back on Windows and everything is working perfectly, but my main problem is that I got used to Ubuntu and it is kinda weird to use Windows again. I want my nice and almost perfect Ubuntu back. Thanks, Dave

    Read the article

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts and would like to look at the possibility of a software application of that concept. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations using the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept. By using a simulation of Japanese multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU's arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result via fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
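
    As a rough sketch of the kind of benchmark harness this would need (the names and the addition-only multiply below are illustrative assumptions, not an implementation of the full lattice method), one could time the built-in operator against multiplication simulated with repeated addition:

        // Illustrative benchmark: built-in ALU multiply vs. multiplication simulated by repeated addition,
        // which is the arithmetic core left over once the "count the intersections" step is simulated.
        using System;
        using System.Diagnostics;

        class MultiplyBenchmark
        {
            static long MultiplyByAddition(long a, long b)
            {
                long result = 0;
                for (long i = 0; i < b; i++)   // b repeated additions of a
                    result += a;
                return result;
            }

            static void Main()
            {
                long a = 123456789, b = 100000;

                var sw = Stopwatch.StartNew();
                long viaAdd = MultiplyByAddition(a, b);
                sw.Stop();
                Console.WriteLine("Repeated addition: {0} in {1} ticks", viaAdd, sw.ElapsedTicks);

                sw = Stopwatch.StartNew();
                long viaMul = a * b;
                sw.Stop();
                Console.WriteLine("ALU multiply:      {0} in {1} ticks", viaMul, sw.ElapsedTicks);
            }
        }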

    Read the article

  • Can certain system-hungry modules be disabled in Ubuntu?

    - by Ole Thomsen Buus
    Hi, let me add some context: I am currently using Ubuntu 9.10 64-bit (Desktop) on a relatively powerful stationary PC (Intel Core i7 920, 12 GB RAM). My purpose is high-speed imaging with a Point Grey Grasshopper machine-vision camera (for research, a PhD project). This camera is capable of 200 fps at full VGA (640x480) resolution. The camera is connected using FireWire (1394b) and the drivers and software from Point Grey work great. I have developed a console C++ application that can grab a certain number of frames to preallocated memory and afterwards save the grabbed frames to the hard drive. Currently it works fine, but sometimes I observe a few frame drops (1-3). When this happens I reset the experiment and repeat the recording, and usually I am lucky the second time with no frame drops (the camera driver has an internal frame counter that I am using). Question: I usually go to tty1 and use "sudo service gdm stop" to disable the graphical frontend. It seems to release some memory, though that is not my main concern. My concern is CPU resources. Are there other system-hungry modules that can be disabled temporarily so that the CPU gets less busy on Ubuntu 9.10? At some point in the future I will update to 10.10. Should I perhaps opt for the server edition instead? Thanks.

    Read the article

  • AIX network parameters to close TCP sockets of unplugged devices

    - by ADD Geek
    Hi there. We have an AIX box running what we call in banking an "ATM Switch" - not the ATM networking switch, but the bank ATM driver - where we have some ATM machines connected to two server processes. The problem is, when we disconnect any of these machines, the netstat -na | grep <port number> command shows that the socket established for this disconnected device is still established; we have to manually send a command from the software to make the socket aware that it is not live anymore. Is there a parameter at the TCP level to make this connection aware, within a minute or two, that this device is not connected anymore? We had the following parameters set with root privileges: no -o tcp_keepidle=1000 no -o tcp_keepcnt=2 no -o tcp_keepintvl=150 no -o tcp_finwait2=100 They originally had the default values, but even after we changed these parameters and restarted the server processes, the problem was still there.

    Read the article

  • Oracle Virtual Server OEL VM fails to start - kernel panic on CPU identify

    - by Towndrunk
    I am in the process of following a guide to setup various oracle vm templates, so far I have installed OVS 2. 2 and got the OVM Manager working, imported the template for OEL5U5 and created a vm from it.. the problem comes when starting that vm. The log in the OVMM console shows the following; Update VM Status - Running Configure CPU Cap Set CPU Cap: failed:<Exception: failed:<Exception: ['xm', 'sched-credit', '-d', '32_EM11g_OVM', '-c', '0'] => Error: Domain '32_EM11g_OVM' does not exist. StackTrace: File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2531, in xen_set_cpu_cap run_cmd(args=['xm', File "/opt/ovs-agent-2.3/OVSCommons.py", line 92, in run_cmd raise Exception('%s => %s' % (args, err)) The xend.log shows; [2012-11-12 16:42:01 7581] DEBUG (DevController:139) Waiting for devices vtpm [2012-11-12 16:42:01 7581] INFO (XendDomain:1180) Domain 32_EM11g_OVM (3) unpaused. [2012-11-12 16:42:03 7581] WARNING (XendDomainInfo:1907) Domain has crashed: name=32_EM11g_OVM id=3. [2012-11-12 16:42:03 7581] ERROR (XendDomainInfo:2041) VM 32_EM11g_OVM restarting too fast (Elapsed time: 11.377262 seconds). Refusing to restart to avoid loops .> [2012-11-12 16:42:03 7581] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=3 [2012-11-12 16:42:12 7581] DEBUG (XendDomainInfo:2230) Destroying device model [2012-11-12 16:42:12 7581] INFO (image:553) 32_EM11g_OVM device model terminated I have set_on_crash="preserve" in the vm.cfg and have then run xm create -c to get the console screen while booting and this is the log of what happens.. Started domain 32_EM11g_OVM (id=4) Bootdata ok (command line is ro root=LABEL=/ ) Linux version 2.6.18-194.0.0.0.3.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Mar 29 18:27:00 EDT 2010 BIOS-provided physical RAM map: Xen: 0000000000000000 - 0000000180800000 (usable)> No mptable found. Built 1 zonelists. Total pages: 1574912 Kernel command line: ro root=LABEL=/ Initializing CPU#0 PID hash table entries: 4096 (order: 12, 32768 bytes) Xen reported: 1600.008 MHz processor. Console: colour dummy device 80x25 Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) Software IO TLB disabled Memory: 6155256k/6299648k available (2514k kernel code, 135548k reserved, 1394k data, 184k init) Calibrating delay using timer specific routine.. 4006.42 BogoMIPS (lpj=8012858) Security Framework v1.0.0 initialized SELinux: Initializing. 
selinux_register_security: Registering secondary module capability Capability LSM initialized as secondary Mount-cache hash table entries: 256 CPU: L1 I Cache: 64K (64 bytes/line), D cache 16K (64 bytes/line) CPU: L2 Cache: 2048K (64 bytes/line) general protection fault: 0000 [1] SMP last sysfs file: CPU 0 Modules linked in: Pid: 0, comm: swapper Not tainted 2.6.18-194.0.0.0.3.el5xen #1 RIP: e030:[ffffffff80271280] [ffffffff80271280] identify_cpu+0x210/0x494 RSP: e02b:ffffffff80643f70 EFLAGS: 00010212 RAX: 0040401000810008 RBX: 0000000000000000 RCX: 00000000c001001f RDX: 0000000000404010 RSI: 0000000000000001 RDI: 0000000000000005 RBP: ffffffff8063e980 R08: 0000000000000025 R09: ffff8800019d1000 R10: 0000000000000026 R11: ffff88000102c400 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffffffff805d2000(0000) knlGS:0000000000000000 CS: e033 DS: 0000 ES: 0000 Process swapper (pid: 0, threadinfo ffffffff80642000, task ffffffff804f4b80) Stack: 0000000000000000 ffffffff802d09bb ffffffff804f4b80 0000000000000000 0000000021100800 0000000000000000 0000000000000000 ffffffff8064cb00 0000000000000000 0000000000000000 Call Trace: [ffffffff802d09bb] kmem_cache_zalloc+0x62/0x80 [ffffffff8064cb00] start_kernel+0x210/0x224 [ffffffff8064c1e5] _sinittext+0x1e5/0x1eb Code: 0f 30 b8 73 00 00 00 f0 0f ab 45 08 e9 f0 00 00 00 48 89 ef RIP [ffffffff80271280] identify_cpu+0x210/0x494 RSP ffffffff80643f70 0 Kernel panic - not syncing: Fatal exception clear as mud to me. are there any other logs that will help me? I have now deployed another vm from the same template and used the default vm settings rather than adding more memory etc - I get exactly the same error.

    Read the article

  • Transparent proxying leaves sockets with SYN_RCVD in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. Short version is this. In one window, start something that listens on port 12300: $ while :; do nc -l 12300; done Now enable proxying: # sysctl -w net.inet.ip.forwarding=1 # sysctl -w net.inet.ip.fw.enable=1 # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any And now test it out: $ telnet localhost 9999 # any port number will do # this works; type stuff and you'll see it in the nc window $ telnet google.com 80 # any host/port will do # this *doesn't* work! After the latter experiment, I see lines like this in netstat: $ netstat -tn | grep ^tcp4 tcp4 0 0 66.249.91.104.80 192.168.1.130.61072 SYN_RCVD tcp4 0 0 192.168.1.130.61072 66.249.91.104.80 SYN_SENT The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this: $ telnet google.com 80 Trying 66.249.81.104... telnet: connect to address 66.249.81.104: Connection refused telnet: Unable to connect to remote host ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, ie. connection refused. My uname says this: $ uname -a Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386 Does anybody see any different results? (I'm especially interested in Snow Leopard vs Leopard results, as there seem to be some internet rumours that transproxy is broken in Snow Leopard version) Any advice for how to fix?

    Read the article

  • connection to apache server switches sockets connection

    - by Newben
    I have just posted a question, but I am posting another one because the problem is not the one I had in mind when asking the first. So, I am running a Rails app on OS X; when I run rails s, everything works fine. If I shut down the Apache server (MAMP) and run rails s again, I get this message: Can't connect to local MySQL server through socket '/Applications/MAMP/tmp/mysql/mysql.sock', which for sure is normal. For info, my MAMP server is running, and the connection must pass through /Applications/MAMP/Library/bin/mysql, so I aliased it by setting in my bash profile: alias mysql="/Applications/MAMP/Library/bin/mysql" Now, when I launch a rails generate type of command, I get this message: /$root/vendor/bundle/ruby/2.0.0/gems/mysql2-0.3.11/lib/mysql2/client.rb:44:in `connect': Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (Mysql2::Error) So how can that be?
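
    For what it's worth, the mysql2 gem takes its socket path from config/database.yml rather than from a shell alias; a hedged sketch of the kind of development entry that points Rails at MAMP's socket (the credentials and database name below are assumptions, not taken from the question):

        # config/database.yml (development section) - illustrative values only
        development:
          adapter: mysql2
          database: myapp_development
          username: root
          password: root
          socket: /Applications/MAMP/tmp/mysql/mysql.sock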

    Read the article

  • WinAPI window taking 50% of CPU when idle

    - by henryprescott
    I'm currently working on a game that creates a window using WindowsAPI. However, at the moment the process is taking up 50% of my CPU. All I am doing is creating the window and looping using the code found below: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { MSG message = {0}; WNDCLASSEX wcl = {0}; wcl.cbSize = sizeof(wcl); wcl.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW; wcl.lpfnWndProc = WindowProc; wcl.cbClsExtra = 0; wcl.cbWndExtra = 0; wcl.hInstance = hInstance = hInstance; wcl.hIcon = LoadIcon(0, IDI_APPLICATION); wcl.hCursor = LoadCursor(0, IDC_ARROW); wcl.hbrBackground = 0; wcl.lpszMenuName = 0; wcl.lpszClassName = "GL2WindowClass"; wcl.hIconSm = 0; if (!RegisterClassEx(&wcl)) return 0; hWnd = CreateAppWindow(wcl, "Application"); if (hWnd) { if (Init()) { ShowWindow(hWnd, nShowCmd); UpdateWindow(hWnd); while (true) { while (PeekMessage(&message, 0, 0, 0, PM_REMOVE)) { if (message.message == WM_QUIT) break; TranslateMessage(&message); DispatchMessage(&message); } if (message.message == WM_QUIT) break; if (hasFocus) { elapsedTime = GetElapsedTimeInSeconds(); lastEarth += elapsedTime; lastUpdate += elapsedTime; lastFrame += elapsedTime; lastParticle += elapsedTime; if(lastUpdate >= (1.0f / 100.0f)) { Update(lastUpdate); lastUpdate = 0; } if(lastFrame >= (1.0f / 60.0f)) { UpdateFrameRate(lastFrame); lastFrame = 0; Render(); SwapBuffers(hDC); } if(lastEarth >= (1.0f / 10.0f)) { UpdateEarthAnimation(); lastEarth = 0; } if(lastParticle >= (1.0f / 30.0f)) { particleManager->rightBooster->Update(); particleManager->rightBoosterSmoke->Update(); particleManager->leftBooster->Update(); particleManager->leftBoosterSmoke->Update(); particleManager->breakUp->Update(); lastParticle = 0; } } else { WaitMessage(); } } } Cleanup(); UnregisterClass(wcl.lpszClassName, hInstance); } return static_cast<int>(message.wParam); } So even when I am not drawing anything when the window has focus it still takes up 50%. I don't understand how this is taking up so much system resources. Am I doing something wrong? Any help would be much appreciated, thank you!

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a raid 5 (software) with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4-partition on top. In the beginning opening and mounting the raid took less than 10 seconds, now (for a few weeks) mounting alone takes 1:30 minutes and the cpu stays around 93% the whole time: The output of "time sudo mount /dev/mapper/8000 /media/8000" is: real 1m31.952s user 0m0.008s sys 1m25.229s At the same time only one line is added to /var/log/syslog: kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null) My Ubuntu-version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, but it says that all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid-bitmap (as it was suggested in some forum) but it did not change the behaviour. sudo mdadm --grow /dev/md0 -b internal and sudo mdadm --grow /dev/md0 -b none I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spit out values between 62 and 159 MB/sec: Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec Timing buffered disk reads: 190 MB in 3.03 seconds = 62.65 MB/sec Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behavior, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000": Timing buffered disk reads: 56 MB in 3.02 seconds = 18.54 MB/sec Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec The output of a verbose mount "mount -vvv /dev/mapper/8000 /media/8000" does not help much: mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "/dev/mapper/8000" mount: node: "/media/8000" mount: types: "(null)" mount: opts: "(null)" mount: you didn't specify a filesystem type for /dev/mapper/8000 I will try type ext4 mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null) Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • Do programmable Ethernet devices (think onboard CPU) really exist?

    - by PeterM
    I've heard from various people that programmable Ethernet cards exist and are easily available. However, I have yet to be able to track down one of these mythical devices, so I'm wondering if they're just that - a myth. Such a programmable card has a gigabit Ethernet interface, has a programmable CPU, and connects to the host system via PCI Express. The problem area these cards address is low-latency network applications, where the card itself does the work and "reports back" to the operating system. Basically the card acts as a co-processor and handles all the low-latency requirements on the card, thus avoiding the issues of writing low-latency code in user-land - think 0.4ms - 0.5ms response times. So my question is: do these cards really exist, and if so, where can I get my hands on one?

    Read the article

  • Does a CPU assign a value atomically to memory?

    - by Poni
    Hi! A quick question I've been wondering about for some time: does the CPU assign values atomically, or is it bit by bit (say, for example, a 32-bit integer)? If it's bit by bit, could another thread accessing this exact location get a "part" of the to-be-assigned value? Think of this: I have two threads and one shared "unsigned int" variable (call it "g_uiVal"). Both threads loop. One is printing "g_uiVal" with printf("%u\n", g_uiVal). The second just increases this number. Will the printing thread ever print something that is not, or is only part of, "g_uiVal"'s value? In code: unsigned int g_uiVal; void thread_writer() { g_uiVal++; } void thread_reader() { while(1) printf("%u\n", g_uiVal); }
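
    Purely as an illustrative aside, in C# rather than the C snippet above (the names and patterns are hypothetical): aligned word-sized writes are atomic on mainstream CPUs, so a 32-bit value is never seen half-written, but a wider value can be, which a sketch like this can show by alternating two 64-bit patterns and watching for a mix of the two:

        // Torn-read sketch: a 64-bit field written with two alternating patterns.
        // On a 32-bit process the reader may observe a value that was never written
        // (one half from each pattern); 32-bit aligned writes do not tear this way.
        // Note: this is a demonstration only; an optimizing JIT may cache the read.
        using System;
        using System.Threading;

        class TornRead
        {
            static long shared;   // deliberately not volatile/Interlocked
            const long A = 0x00000000FFFFFFFF;
            const long B = unchecked((long)0xFFFFFFFF00000000);

            static void Writer()
            {
                while (true) { shared = A; shared = B; }
            }

            static void Main()
            {
                new Thread(Writer) { IsBackground = true }.Start();
                for (int i = 0; i < 100000000; i++)
                {
                    long v = shared;
                    if (v != A && v != B)   // halves from two different writes
                    {
                        Console.WriteLine("Torn read: 0x{0:X16}", v);
                        return;
                    }
                }
                Console.WriteLine("No torn read observed (likely a 64-bit process).");
            }
        }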

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned ie. unrotated, depth and parallax are as expected.) I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based for now until I figure out how things work exactly -- fow now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem. Screenshot #1: correct depth when the camera is still strictly axis-aligned, ie. un-rotated. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5 rendering" when the camera rotates... fish-eyey, bit like in Google Streetview: even though I have a volumetric world representation, it seems --no matter what I try-- like I would first create a rendering from the "front view", then rotate that flat rendering according to camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks: Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"! Now of course I'm aware of this: in a simple axis-aligned-no-rotation-setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- ie I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical-divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of my many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations really get this part right. 
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode: fx and fy: pixel positions x and y rayPos: vec3 for the ray starting position in world-space (calculated as below) rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal rayStep: a temporary vec3 camPos: vec3 for the camera position in world space camRad: vec3 for camera rotation in radians pmat: typical perspective projection matrix The algorithm / pseudocode: // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin" rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0 // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0 rayPos.MultMat(num.NewDmat4RotationY(camRad.Y)) // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen" rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y)) // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z // perspective projection rayDir.Normalize() rayDir.MultMat(pmat) // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative rayPos.Add(camPos) I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
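
    As a hedged sketch of one common redesign direction (an assumption, not the existing server's code): give each client a single pending asynchronous receive with BeginReceive/EndReceive (available in .NET 2.0) instead of a dedicated blocked thread, so 150 clients no longer means 150 blocked threads competing with the send path:

        // Minimal sketch (hypothetical ClientConnection type, not the production server):
        // one outstanding BeginReceive per client instead of one blocked thread per client.
        using System;
        using System.Net.Sockets;

        class ClientConnection
        {
            private readonly Socket _socket;
            private readonly byte[] _buffer = new byte[8192];

            public ClientConnection(Socket socket)
            {
                _socket = socket;
                _socket.NoDelay = true;   // small, latency-sensitive messages
                BeginRead();
            }

            private void BeginRead()
            {
                _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
            }

            private void OnReceive(IAsyncResult ar)
            {
                int bytesRead = _socket.EndReceive(ar);   // production code would wrap this in try/catch
                if (bytesRead == 0) { _socket.Close(); return; }   // client disconnected

                // TODO: append to a per-client parse buffer and enqueue complete messages
                // for the core server logic, exactly as the blocking version did.

                BeginRead();   // post the next read immediately
            }
        }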

    Read the article
