Search Results

Search found 24568 results on 983 pages for 'high load'.

Page 22/983

  • Ubuntu 13.10 unity won't load from this morning

    - by user287957
    I turned on my pc this morning and unity will not load at all. I have tried loading it manually using ctrl+alt+f1 and all i got from it was the following:- compiz (core) - Info: Loading plugin: core compiz (core) - Info: Starting plugin: core compiz (core) - Info: Loading plugin: ccp compiz (core) - Info: Starting plugin: ccp compizconfig - Info: Backend : gsettings compizconfig - Info: Integration : true compizconfig - Info: Profile : unity compiz (core) - Info: Loading plugin: composite compiz (core) - Info: Starting plugin: composite compiz (core) - Info: Loading plugin: opengl compiz (core) - Info: Starting plugin: opengl libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/r600_dri.so failed (/usr/lib/x86_64- linux-gnu/dri/r600_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/r600_dri.so failed (${ORIGIN}/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/r600_dri.so failed (/usr/lib/dri/r600_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: r600_dri.so libGL error: driver pointer missing libGL error: failed to load driver: r600 libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch) libGL error: dlopen ${ORIGIN}/dri/swrast_dri.so failed (${ORIGIN}/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: swrast_dri.so libGL error: failed to load driver: swrast compiz (core) - Info: Loading plugin: compiztoolbox compiz (core) - Info: Starting plugin: compiztoolbox compiz (core) - Info: Loading plugin: decor compiz (core) - Info: Starting plugin: decor compiz (core) - Info: Loading plugin: copytex compiz (core) - Info: Starting plugin: copytex compiz (core) - Info: Loading plugin: snap compiz (core) - Info: Starting plugin: snap compiz (core) - Info: Loading plugin: resize compiz (core) - Info: Starting plugin: resize compiz (core) - Info: Loading plugin: gnomecompat compiz (core) - Info: Starting plugin: gnomecompat compiz (core) - Info: Loading plugin: move compiz (core) - Info: Starting plugin: move compiz (core) - Info: Loading plugin: place compiz (core) - Info: Starting plugin: place compiz (core) - Info: Loading plugin: mousepoll compiz (core) - Info: Starting plugin: mousepoll compiz (core) - Info: Loading plugin: regex compiz (core) - Info: Starting plugin: regex compiz (core) - Info: Loading plugin: imgpng compiz (core) - Info: Starting plugin: imgpng compiz (core) - Info: Loading plugin: vpswitch compiz (core) - Info: Starting plugin: vpswitch compiz (core) - Info: Loading plugin: grid compiz (core) - Info: Starting plugin: grid compiz (core) - Info: Loading plugin: animation compiz (core) - Info: Starting plugin: animation compiz (core) - Info: Loading plugin: expo compiz (core) - Info: Starting plugin: expo compiz (core) - Info: Loading plugin: session compiz (core) - Info: Starting plugin: session compiz (core) - Info: Loading plugin: wall compiz (core) - Info: Starting plugin: wall compiz (core) - Info: Loading plugin: fade compiz (core) - Info: Starting plugin: fade compiz (core) - Info: Loading plugin: unitymtgrabhandles compiz (core) - Info: Starting plugin: unitymtgrabhandles compiz (core) - Info: Loading plugin: ezoom compiz (core) 
- Info: Starting plugin: ezoom compiz (core) - Info: Loading plugin: workarounds compiz (core) - Info: Starting plugin: workarounds compiz (core) - Info: Loading plugin: scale compiz (core) - Info: Starting plugin: scale compiz (core) - Info: Loading plugin: unityshell compiz (core) - Info: Starting plugin: unityshell WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Autopilot.Introspection' yet as we don't have a connection, waiting for it... WARN 2014-06-03 10:55:31 unity.glib.dbus.server GLibDBusServer.cpp:586 Can't register object 'com.canonical.Unity.Debug.Logging' yet as we don't have a connection, waiting for it... compiz (unityshell) - Error: GL_ARB_vertex_buffer_object not supported compiz (core) - Error: Plugin initScreen failed: unityshell compiz (core) - Error: Failed to start plugin: unityshell compiz (core) - Info: Unloading plugin: unityshell X Error of failed request: BadWindow (invalid Window parameter) Major opcode of failed request: 18 (X_ChangeProperty) Resource id in failed request: 0x4000006 Serial number of failed request: 9909 Current serial number in output stream: 9913 It was all working fine yesterday but this morning there was nothing. Please help Many Thanks
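    A hedged recovery sketch follows. The "undefined symbol: _glapi_tls_Dispatch" lines usually mean the libGL being loaded no longer matches the Mesa DRI drivers (often a leftover from a proprietary driver install), so reinstalling the Mesa stack is the usual first step. Package names assume stock Ubuntu 13.10 with the open r600 driver; adjust for your release.

      # See which libGL the loader actually picks up
      ldconfig -p | grep libGL
      # Reinstall the Mesa GL library, the DRI drivers and libglapi so they match again
      sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri libglapi-mesa
      # If the proprietary fglrx driver was ever installed, purging its leftovers is often suggested too
      sudo apt-get purge "fglrx*"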

    Read the article

  • Can't load vector font in Nuclex Framework

    - by ProgrammerAtWork
    I've been trying to get this to work for the last 2 hours and I'm not getting what I'm doing wrong... I've added Nuclex.TrueTypeImporter to my references in my content and I've added Nuclex.Fonts & Nuclex.Graphics in my main project. I've put Arial-24-Vector.spritefont & Lindsey.spritefont in the root of my content directory. _spriteFont = Content.Load<SpriteFont>("Lindsey"); // works _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes I get this error on the _testFont line: File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont. So I've searched around and by the looks of it it has something to do with the content importer & the content processor. For the content importer I have no new choices, so I leave it as it is, Sprite Font Description - XNA Framework for content processor and I select Vector Font - Nuclex Framework And then I try to run it. _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes again I get the following error Error loading "Arial-24-Vector". It does work if I load a sprite, so it's not a pathing problem. I've checked the samples, they do work, but I think they also use a different version of the XNA framework because in my version the "Content" class starts with a capital letter. I'm at a loss, so I ask here. Edit: Something super weird is going on. I've just added the following two lines to a method inside FreeTypeFontProcessor::FreeTypeFontProcessor( Microsoft::Xna::Framework::Content::Pipeline::Graphics::FontDescription ^fontDescription, FontHinter hinter, just to check if code would even get there: System::Console::WriteLine("I AM HEEREEE"); System::Console::ReadLine(); So, I compile it, put it in my project, I run it and... it works! What the hell?? This is weird because I've downloaded the binaries, they didn't work, I've compiled the binaries myself. didn't work either, but now I make a small change to the code and it works? _. So, now I remove the two lines, compile it again and it works again. Someone care to elaborate what is going on? Probably some weird caching problem!
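    For reference, a minimal loading sketch (assumptions: the .spritefont assets really are built with the importer/processor combination described above, and Nuclex.Fonts is referenced by the game project). The "File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont" error means the compiled .xnb was still produced by the stock SpriteFont processor; the "weird caching" behaviour described at the end is consistent with the content pipeline only rebuilding the asset once the processor assembly changed, so a full Rebuild (or deleting the content output in bin/obj) after switching the processor should have the same effect as the added WriteLine.

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;
      using Nuclex.Fonts;

      public class FontDemoGame : Game
      {
          GraphicsDeviceManager graphics;
          SpriteFont spriteFont;   // built with "Sprite Font Description - XNA Framework"
          VectorFont vectorFont;   // built with the "Vector Font - Nuclex Framework" processor

          public FontDemoGame()
          {
              graphics = new GraphicsDeviceManager(this);
              Content.RootDirectory = "Content";
          }

          protected override void LoadContent()
          {
              spriteFont = Content.Load<SpriteFont>("Lindsey");
              vectorFont = Content.Load<VectorFont>("Arial-24-Vector");
          }
      }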

    Read the article

  • Simple OpenGL program major slow down at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is I face a big slow down of fps when I make my window large (i.e. above 1920x1080). I have monitors GPU usage when in full-screen and it shows GPU load of nearly 100% and Memory Controller load of ~85%. When at 600x600, these numbers are at about 45%, my CPU is also at full load. I use deferred rendering at the moment but even when forward rendering, the slow down was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M btw). Below are my shaders, First Pass #VS #version 330 core uniform mat4 ModelViewMatrix; uniform mat3 NormalMatrix; uniform mat4 MVPMatrix; uniform float scale; layout(location = 0) in vec3 in_Position; layout(location = 1) in vec3 in_Normal; layout(location = 2) in vec2 in_TexCoord; smooth out vec3 pass_Normal; smooth out vec3 pass_Position; smooth out vec2 TexCoord; void main(void){ pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz; pass_Normal = NormalMatrix * in_Normal; TexCoord = in_TexCoord; gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0); } #FS #version 330 core uniform sampler2D inSampler; smooth in vec3 pass_Normal; smooth in vec3 pass_Position; smooth in vec2 TexCoord; layout(location = 0) out vec3 outPosition; layout(location = 1) out vec3 outDiffuse; layout(location = 2) out vec3 outNormal; void main(void){ outPosition = pass_Position; outDiffuse = texture(inSampler, TexCoord).xyz; outNormal = pass_Normal; } Second Pass #VS #version 330 core uniform float scale; layout(location = 0) in vec3 in_Position; void main(void){ gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0); } #FS #version 330 core struct Light{ vec3 direction; }; uniform ivec2 ScreenSize; uniform Light light; uniform sampler2D PositionMap; uniform sampler2D ColorMap; uniform sampler2D NormalMap; out vec4 out_Color; vec2 CalcTexCoord(void){ return gl_FragCoord.xy / ScreenSize; } vec4 CalcLight(vec3 position, vec3 normal){ vec4 DiffuseColor = vec4(0.0); vec4 SpecularColor = vec4(0.0); vec3 light_Direction = -normalize(light.direction); float diffuse = max(0.0, dot(normal, light_Direction)); if(diffuse 0.0){ DiffuseColor = diffuse * vec4(1.0); vec3 camera_Direction = normalize(-position); vec3 half_vector = normalize(camera_Direction + light_Direction); float specular = max(0.0, dot(normal, half_vector)); float fspecular = pow(specular, 128.0); SpecularColor = fspecular * vec4(1.0); } return DiffuseColor + SpecularColor + vec4(0.1); } void main(void){ vec2 TexCoord = CalcTexCoord(); vec3 Position = texture(PositionMap, TexCoord).xyz; vec3 Color = texture(ColorMap, TexCoord).xyz; vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz); out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal); } Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here, if you need more info, please tell me.
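    One note on the lighting shader as pasted: the comparison in CalcLight reads "if(diffuse 0.0)", which is not valid GLSL; the ">" was almost certainly stripped when the code was posted as HTML. A reconstruction under that assumption follows. (As background, G-buffer writes and per-pixel lighting cost scale with the number of pixels, so GPU and memory-controller load rising sharply between 600x600 and 1920x1080 is expected on a low-end mobile part, independent of freeglut.)

      // Second-pass lighting function, reconstructed with the assumed ">"
      vec4 CalcLight(vec3 position, vec3 normal){
          vec4 DiffuseColor = vec4(0.0);
          vec4 SpecularColor = vec4(0.0);
          vec3 light_Direction = -normalize(light.direction);
          float diffuse = max(0.0, dot(normal, light_Direction));
          if(diffuse > 0.0){
              DiffuseColor = diffuse * vec4(1.0);
              vec3 camera_Direction = normalize(-position);
              vec3 half_vector = normalize(camera_Direction + light_Direction);
              float specular = max(0.0, dot(normal, half_vector));
              float fspecular = pow(specular, 128.0);
              SpecularColor = fspecular * vec4(1.0);
          }
          return DiffuseColor + SpecularColor + vec4(0.1);
      }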

    Read the article

  • What's wrong with this vcl config for varnish-cache as load balancer?

    - by dabito
    I have the current configurations active on my default.vcl varnish file on the machine that balances the load for other two machines (the other two machines also have varnish active). My intention is to have this server do only the load balancing and the other machines do the processing and also their own caching. My problem is that even with the config testing (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load balancing server: backend vader { .host = "app1.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } backend malgus { .host = "app2.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } director dooku round-robin { { .backend = vader; } { .backend = malgus; } } sub vcl_recv { if (req.http.host ~ "^balancer.server.com$") { set req.backend = dooku; } } Am I doing something wrong or missing something? EDIT: This is varnishlog's output: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839995 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839998 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840001 1.0 0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876 0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840004 1.0 14 SessionOpen c 10.150.7.151 38272 :80 14 ReqStart c 10.150.7.151 38272 458200540 14 RxRequest c GET 14 RxURL c / 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c / 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT 14 TxHeader c X-Varnish: 458200540 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663 14 SessionClose c error 14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418 14 SessionOpen c 10.150.7.151 38273 :80 14 ReqStart c 10.150.7.151 38273 458200541 14 RxRequest c GET 14 RxURL c /favicon.ico 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c Accept: */* 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /favicon.ico 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT 14 TxHeader c X-Varnish: 458200541 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796 14 SessionClose c error 14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
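    A hedged reading of the log above: both backends are reported "Still sick" and every request ends in "FetchError no backend connection", so the 503/guru-meditation pages come from failing health probes rather than from the director itself. A common cause when the backends serve name-based virtual hosts is that the probe's bare "GET /" carries no Host header. Below is a sketch of a more explicit probe (Varnish 3.x syntax, matching the directives used above; the Host values are assumptions about what app1/app2 actually answer on):

      backend vader {
          .host = "app1.server.com";
          .probe = {
              # .request replaces .url so a Host header can be sent with the probe
              .request =
                  "GET / HTTP/1.1"
                  "Host: app1.server.com"
                  "Connection: close";
              .interval = 10s;
              .timeout = 4s;
              .window = 5;
              .threshold = 3;
          }
      }
      # (repeat for malgus with app2.server.com)
      # On Varnish 3, "varnishadm debug.health" shows per-probe results and helps
      # confirm whether the probe now gets an HTTP 200 back.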

    Read the article

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who is also blogging on entertainment technology topics on iwantsky.com Gone are the days when you need to squint to be able to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf in Animal Planet can be an exhilarating experience as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and the tinkering hands of the scientists that have designed the modern world’s HDTV. What is HDTV and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap. From the grainy black and whites, TV viewing had moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comforts of their own home. With HDTV the viewer is allowed to have a better viewing range. With Standard (SD) TV, the viewer has to be at a distance that is from 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images as it is possible to sit at a distance that is 1.5 or 3 times the size of the screen without noticing any image pixilation. Although HDTV appears to be a fairly new innovation, this system has actually existed in various forms years ago. Development of the HDTV was started in Europe as early as 1940s. However, the NTSC and the PAL/SECAM, the two analog TV standards became dominant and became popular worldwide. The analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts have been made to develop HDTV. Japan has come out with MUSE system. However, due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition. The AT&T, ATRC, MIT and Zenith HDTV combined forces. In 1993, a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the broadcast system of HDTV was developed. In 1995, the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article titled,Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Edinburgh this Thurs (25th) - Rob Carrol talks about how to build a high performance, scalable reporting platform

    - by tonyrogerson
    Scottish Area SQL Server User Group Meeting, Edinburgh - Thursday 25th March. An evening of SQL Server 2008 Reporting Services Scalability and Performance with Rob Carrol: see how to build a high-performance, scalable reporting platform, and learn the tuning techniques required to ensure that report performance remains optimal as your platform grows. Pizza and drinks will be provided! Register at http://www.sqlserverfaq.com/events/221/SQL-Server-2008-Reporting-Services-Scalability-and-Performance.aspx...(read more)

    Read the article

  • Oracle Coherence 3.5 : Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and transparent soft re-start capability to enable servers to self-heal. For the ones looking at an easy reading and first good approach to Oracle Coherence, I would recommend reading the following book : Overview of Oracle Coherence 3.5 Build scalable web sites and Enterprise applications using a market-leading data grid product Design and implement your domain objects to work most effectively with Coherence and apply Domain Driven Designs (DDD) to Coherence applications Leverage Coherence events and continuous queries to provide real-time updates to client applications Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications This book is targeted to Architects and developers, and as in our team we're more about Solutions Architects than developers I found interest in this book as it help to understand better Oracle Coherence and its value. The only point I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC in providing High Availability, but combining both Oracle RAC and Oracle Coherence will help Architects and Customers to reach higher level of service and high-availability. This book is available on https://www.packtpub.com/oracle-coherence-3-5/book Need to find out about Table of contents : https://www.packtpub.com/toc/oracle-coherence-35-table-contents Discover a sample chapter : https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf Read also articles from the Authors on http://www.packtpub.com/ : Working with Aggregators in Oracle Coherence 3.5 Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5 Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes Installing Coherence 3.5 and Accessing the Data Grid: Part 1 Installing Coherence 3.5 and Accessing the Data Grid: Part 2 For more information on Oracle Coherence : What Oracle Coherence Can Do for You... : http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html Oracle Coherence on OTN : http://www.oracle.com/technology/products/coherence/index.html Oracle Coherence Knowledge Base : http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home
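    For readers who have not used the product, a minimal sketch of the core API the book builds on (standard NamedCache usage from Coherence 3.5; the cache name is arbitrary and the cluster/operational configuration is left at its defaults):

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;

      public class CoherenceHello {
          public static void main(String[] args) {
              // Join (or start) the cluster and obtain a named, partitioned cache
              NamedCache cache = CacheFactory.getCache("hello-cache");
              cache.put("key", "Hello, Coherence 3.5");
              System.out.println(cache.get("key"));
              CacheFactory.shutdown();
          }
      }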

    Read the article

  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySql, Apache and the play framework 2.0 on 64m initial & max heap. If I increase the max heap of the JVM that play is running in even by 16m I see the 128m of swap space steadily fill up until the the JVM dies. I notice that it is only when I am plugging away at the wordpress sites that the JVM will die. I assume this is because the JVM is not asking for memory at the time so gets collected. I notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure apache to consume less memory? Also what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM? Server running: Ubuntu 12.04 RAM: 408m Swap: 128m Apache mods: alias.conf alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.conf autoindex.load cgi.load deflate.conf deflate.load dir.conf dir.load env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf reqtimeout.load rewrite.load setenvif.conf setenvif.load status.conf status.load
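    On the "make Apache consume less memory" part, a hedged sketch for a ~400 MB box running mod_php (php5 is in the module list above, so the prefork MPM is assumed; the numbers are conservative starting points, not tuned values). Each PHP-enabled child commonly uses 30-50 MB, so capping the child count is what keeps Apache from pushing the JVM into swap:

      <IfModule mpm_prefork_module>
          StartServers          2
          MinSpareServers       2
          MaxSpareServers       4
          MaxClients            8
          MaxRequestsPerChild   500
      </IfModule>
      # Shorter keep-alives free children sooner on a small instance
      KeepAlive On
      KeepAliveTimeout 2

    Disabling modules that are not actually needed (for example the proxy_* and autoindex modules, via a2dismod) frees a little more memory on a box this size.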

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests, and parses them out to “worker nodes'”. This is useful in cases such as scientific simulations, drug research, MatLab work and where other large compute loads are required. It’s not the immediate-result type computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here. The issue with HPC (from any vendor) that some organization have is the amount of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower, and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (Now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HA Proxy set up and running in a way I think I want. However, it is not load balancing the web requests it receives. All requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below - if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HA Proxy running on the same host as the first server in the apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks, and are reporting back correctly. The haproxy?stats page correctly reports servers that are up and down. I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs on several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly? global maxconn 10000 nbproc 8 pidfile /var/run/haproxy.pid log 127.0.0.1 local0 debug daemon defaults log global mode http retries 3 option redispatch maxconn 5000 contimeout 5000 clitimeout 50000 srvtimeout 50000 listen WEBHAEXT :80,:8443 mode http cookie sessionbalance insert indirect nocache balance roundrobin option httpclose option forwardfor except 127.0.0.1 option httpchk HEAD health_check.txt stats enable stats auth rah:rah server WEB1 10.90.2.131:81 cookie WEB_1 check server WEB2 10.90.2.130:80 cookie WEB_2 check
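    Two common explanations for this exact symptom, offered as hedged guesses rather than a diagnosis: first, the "cookie sessionbalance insert indirect" line pins every browser session to whichever server answered its first request, so a handful of manual page loads is a poor balancing test; second, with "nbproc 8" each HAProxy process keeps its own round-robin position, which can look very lopsided at low traffic. A sketch of a persistence-free test configuration, plus a cookie-ignoring test loop, is below (same addresses as in the question; a leading slash is added to the health-check URI):

      # Shell test loop - curl sends no cookies, so round-robin becomes visible:
      #   for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://balancer.server.com/; done
      # then compare the per-server session counters on the stats page.
      listen WEBHAEXT :80,:8443
          mode http
          balance roundrobin
          option httpclose
          option forwardfor except 127.0.0.1
          option httpchk HEAD /health_check.txt HTTP/1.0
          stats enable
          stats auth rah:rah
          server WEB1 10.90.2.131:81 check
          server WEB2 10.90.2.130:80 check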

    Read the article

  • Google I/O 2011: High-performance GWT: best practices for writing smaller, faster apps

    Google I/O 2011: High-performance GWT: best practices for writing smaller, faster apps. Speaker: David Chandler. The GWT compiler isn't just a Java-to-JavaScript transliterator. In this session, we'll show you compiler optimizations to shrink your app and make it compile and run faster. Learn common performance pitfalls, how to use lightweight cell widgets, how to use code splitting with Activities and Places, and compiler options to reduce your app's size and compile time. From: GoogleDevelopers | Views: 4791 | 21 ratings | Time: 01:01:32 | More in Science & Technology

    Read the article

  • Google I/O 2012 - Building High Performance Mobile Web Applications

    Google I/O 2012 - Building High Performance Mobile Web Applications. Speaker: Ryan Fioravanti. Learn what it takes to build an HTML5 mobile app that will wow your users. This session will focus on speed, offline support, UI layouts, and the tools necessary to set up a productive development environment. Come to this session if you're looking to make a killer mobile web app that stands out amongst the competition. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers | Views: 33 | 0 ratings | Time: 49:43 | More in Science & Technology

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: It handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances. Drivers Elastic compute and storage resources Cost avoidance Solution Here’s a sketch of a solution using our Windows Azure HPC SDK: Ingredients Web Role – this hosts a HPC scheduler web portal to allow web based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well. Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes. Database – stores state information about the job queue and resource configuration for the solution. Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed. Training Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure HPC Scheduler (3 labs)  The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.
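    To make the "Message Passing Interface (MPI)" part concrete, here is a minimal, generic MPI job of the embarrassingly parallel kind mentioned above (a toy Monte Carlo estimate of pi in C). It is not Azure-specific; it is simply the shape of program the HPC Scheduler farms out across worker role instances, with rank 0 acting as the collector. Compile with mpicc.

      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Each rank samples independently - no communication until the end */
          long samples = 1000000, hits = 0;
          srand(rank + 1);
          for (long i = 0; i < samples; i++) {
              double x = rand() / (double)RAND_MAX;
              double y = rand() / (double)RAND_MAX;
              if (x * x + y * y <= 1.0) hits++;
          }

          /* Rank 0 gathers the partial results and prints the estimate */
          long total_hits = 0;
          MPI_Reduce(&hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("pi ~= %f\n", 4.0 * total_hits / (samples * (double)size));

          MPI_Finalize();
          return 0;
      }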

    Read the article

  • Auto Save and Auto Load Game onto the Device's Storage Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that will test the save and load state. Is it a good idea to make an app that has an auto save and load game feature only every time the newbies open the first app then continues it on the other day? I tried making a sprite that is moving, starting at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinate where the sprite land on this part (i.e. at the top). The thing I want to know how the sequence of saving and loading goes like this: I open the app The starting sprite at the center. It displays a coordinate of the sprite plus number of times does the sprite move. I exit the app that automatically saves the game without notice. Finally, when I re-opened it, it automatically loads the game retaining the number of times the sprite move, coordinates, and the sprite's area landed. These steps above are similar, but not the sprite movement test app, to the sequence of saving and loading the game's level and record in Jewel Stackers for the Android app. And, by default, if there is no SD card in any tab or phone that runs on Android, does it automatically save/load onto the internal drive or the APK file itself? Is it also useful to use auto save and auto load feature for protecting and fetching informations (i.e. fastest time, last time where the sprite is located via coordinates, etc.)?
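    A minimal sketch of the auto save / auto load sequence described above, using plain Android SharedPreferences (an assumption; if the game is built on a framework such as libGDX, its Preferences class works the same way). Data saved like this goes to the app's private internal storage - not to the SD card and not into the APK - and survives the app being closed, which covers the "internal drive or the APK file itself" question. Saving in onPause() is what makes the "saves without notice" behaviour work, since it fires whenever the player leaves the app.

      import android.app.Activity;
      import android.content.SharedPreferences;
      import android.os.Bundle;

      public class GameActivity extends Activity {
          private float spriteX, spriteY;
          private int moveCount;

          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              // Auto-load: restore the last saved state, defaulting to the center
              // (240, 400 are placeholder center coordinates for illustration)
              SharedPreferences prefs = getSharedPreferences("game_state", MODE_PRIVATE);
              spriteX = prefs.getFloat("spriteX", 240f);
              spriteY = prefs.getFloat("spriteY", 400f);
              moveCount = prefs.getInt("moveCount", 0);
          }

          @Override
          protected void onPause() {
              super.onPause();
              // Auto-save: persist the state whenever the player leaves the app
              getSharedPreferences("game_state", MODE_PRIVATE)
                  .edit()
                  .putFloat("spriteX", spriteX)
                  .putFloat("spriteY", spriteY)
                  .putInt("moveCount", moveCount)
                  .apply();
          }
      }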

    Read the article

  • Display Call To Action bar on page load [migrated]

    - by dasickle
    I am using the following code to load the bar on click but I can't figure our how to load it on page load automatically. <script> var autohide; $('body').prepend('<div id="bn-bar"><b>DON\'T MISS OUT!</b> Only 9 seats remain for the Google Tag Manager training on May 22! <a href="#">Book Your Seat Today!</a><div id="hider"> </div></div>'); $(document).ready(function(){ $("#hider").click(function(){ $("#bn-bar").animate({ top: "-50" }, "fast","linear", function(){}); }) $("#bn-bar").mouseover(function(){clearTimeout(autohide);}); setTimeout(function(){$("#bn-bar").animate({top: "0"}, "slow","linear", function(){});},2500); autohide = setTimeout(function(){$("#bn-bar").animate({top: "-30"}, "fast","linear", function(){});},10000); }) </script> Basically I am trying to load a the message when user enters my website and I will be inserting it via Google Tag Manager. Below is a page where I found the code: Creative Tag Manager – Ads, Promotions, and Visitor Messaging -Lunametrics
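    A hedged variant that shows the bar as soon as the DOM is ready, instead of after the 2.5-second timeout (it assumes the same #bn-bar markup has already been prepended as in the snippet above and that the bar starts off-screen via a negative top value):

      <script>
      $(document).ready(function () {
        // Slide the bar in immediately on page load; keep the #hider click
        // handler from the snippet above so visitors can still dismiss it.
        $("#bn-bar").animate({ top: "0" }, "slow", "linear");
      });
      </script>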

    Read the article

  • looking for dark, high contrast theme

    - by d3pd
    I am looking for a dark, high-contrast theme. Most dark, high-contrast themes I have tried do not give sufficient contrast to many edges, such as the edges of buttons, tabs, windows and other borders. Could you recommend themes that provide such features? Alternatively, could you recommend a guide to changing a fairly dark theme, such as Numix-Cyanogen, so that it features light button, tab, window and other borders? Roughly, what I mean is to go from this: [screenshot of the current theme] to something like this: [screenshot with light borders]

    Read the article

  • SharePoint Unit Testing and Load Testing Finally?

    - by Kit Ong
    It has always been a real pain to incorporate extensive SharePoint unit testing and load testing in a project. Could Visual Studio 2012 finally make this easier? It certainly looks like it; here's a brief overview of the SharePoint support in Visual Studio 2012.
    Load testing – We now support load testing for SharePoint out of the box. This is more involved than you might imagine due to how dynamic SharePoint is. You can't just record a script and play it back – it won't work, because SharePoint generates and expects dynamic data (like GUIDs). We've built extensions to our load testing solution to parse the dynamic SharePoint data and include it appropriately in subsequent requests. So now you can record a script and play it back, and we will dynamically adjust it to match what SharePoint expects.
    Unit testing – One of the big problems with unit testing SharePoint is that most code requires SharePoint to be running, and trying to run tests against a live SharePoint instance is a pain. So we've built a SharePoint "emulator" using our new VS 2012 Fakes & Stubs capability. This will make unit testing of SharePoint components WAY easier.
    Read more in the link below:
    http://blogs.msdn.com/b/bharry/archive/2012/09/12/visual-studio-update-this-fall.aspx
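    A hedged sketch of what a unit test against the VS 2012 SharePoint emulator looks like. The package and type names (Microsoft.SharePoint.Emulators, SharePointEmulationScope, EmulationMode) are quoted from memory of the VS 2012 documentation and should be verified against your install; the point is that the test exercises the normal server object model without needing a live farm:

      using System;
      using Microsoft.SharePoint;
      using Microsoft.SharePoint.Emulators;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class TaskListTests
      {
          [TestMethod]
          public void AddingAnItem_IncreasesItemCount()
          {
              // EmulationMode.Enabled routes SPSite/SPWeb/SPList calls to the
              // in-memory emulator instead of a real SharePoint deployment.
              using (new SharePointEmulationScope(EmulationMode.Enabled))
              using (SPSite site = new SPSite("http://localhost"))
              {
                  Guid listId = site.RootWeb.Lists.Add("Tasks", "Test list", SPListTemplateType.Tasks);
                  SPList list = site.RootWeb.Lists[listId];
                  int before = list.ItemCount;

                  SPListItem item = list.Items.Add();
                  item["Title"] = "New task";
                  item.Update();

                  Assert.AreEqual(before + 1, list.ItemCount);
              }
          }
      }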

    Read the article

  • What Keywords Rank High in Google?

    Learn the basics of how keywords help you rank high in the search engines. Implementing the proper keywords into your websites, blogs and other web 2.0 strategies can generate a ton of free traffic.

    Read the article

  • GDD-BR 2010 [1G] Android: Building High-Performance Applications

    GDD-BR 2010 [1G] Android: Building High-Performance Applications. Speaker: Tim Bray | Track: Android | Time: G [16:30 - 17:15] | Room: 1 | Level: 151. Build Android applications that are smooth, fast, responsive, and a pleasure to use. Also, learn about the tools and techniques we use to track down and fix performance problems. From: GoogleDevelopers | Views: 20 | 0 ratings | Time: 33:34 | More in Science & Technology

    Read the article
