Search Results

Search found 11296 results on 452 pages for 'rendering engine'.


  • How do I escape a string for a shell command in nodejs (V8 Javascript engine)?

    - by Maciek
    In nodejs, the only way to execute external commands is via sys.exec(cmd). I'd like to call an external command and give it data via stdin. In nodejs there does not yet appear to be a way to open a command and then push data to it (only to exec and receive its standard+error outputs), so it appears the only way I've got to do this right now is via a single string command such as: var dangerStr = "bad stuff here"; sys.exec("echo '" + dangerStr + "' | somecommand"); Most answers to questions like this have focused either on regexes that don't work for me in nodejs (which uses Google's V8 Javascript engine) or on native features from other languages like Python. I'd like to escape dangerStr so that it's safe to compose an exec string like the one above. If it helps, dangerStr will contain JSON data.
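
    For illustration only (not part of the original question): the POSIX shell escaping rule the asker needs is small enough to sketch. It is shown here in Python because the rule itself is language-agnostic; the helper name and sample value are made up, and Python's own shlex.quote implements the same idea.

        def shell_quote(value):
            # Wrap the value in single quotes and turn every embedded single
            # quote into '\'' (close the quotes, emit an escaped quote, reopen).
            return "'" + value.replace("'", "'\\''") + "'"

        danger_str = '{"payload": "bad \' stuff here"}'  # hypothetical JSON input
        cmd = "echo " + shell_quote(danger_str) + " | somecommand"
        print(cmd)  # inside single quotes the shell interprets nothing else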

    Read the article

  • How to create your own advert engine for an Android App?

    - by Richard Green
    I have an Android app and I would like to start putting non-intrusive adverts into the app. However, I have the benefit of knowing exactly what products I would like to put in these adverts (which will basically be Amazon "similar products" type things and a few other suppliers). Is there any ad engine out there that will allow me to do this? The ones I've seen just put in whatever they think is suitable. I have scoured the web and can't find an example of this... Any ideas? Should I just bite the bullet and write my own classes to do this?

    Read the article

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like: import std.*; load( "some_script.hy" ); eval( "foo = 123;" ); println( foo ); So, in my lexer I've implemented the function: void hyb_parse_string( const char *str ){ extern int yyparse(void); YY_BUFFER_STATE prev, next; /* * Save current buffer. */ prev = YY_CURRENT_BUFFER; /* * yy_scan_string will call yy_switch_to_buffer. */ next = yy_scan_string( str ); /* * Do actual parsing (yyparse calls yylex). */ yyparse(); /* * Restore previous buffer. */ yy_switch_to_buffer(prev); } But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV: Program received signal SIGSEGV, Segmentation fault. 0xb7f2b801 in yylex () at src/lexer.cpp:1658 1658 if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW ) As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... Any hints, or at least an example of how to implement these kinds of functions? PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (like PHP's include/require directives).

    Read the article

  • Have I discovered a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /CLR that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls using ElementHost. Due to the architecture of our software we cannot use HwndHost directly. We observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually just work fine never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value), but the visual representation never reflects that. I set the background to red, but the background color does not change. We tested this on a lot of different machines, all running Windows XP SP2, and it is pretty reproducible. The funny thing is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and reshow the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values - but just once; if I want to see another update I have to resize the ElementHost again. Do you have any explanation of what could be happening here, or how I could approach this problem to figure it out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens at high CPU load at startup.

    Read the article

  • Looking for an email template engine for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm managing a codebase that does SQL queries against our customer database and billing database and places that data into emails - and sends it. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So, I'm looking for a replacement to move more of this into the hands of those requesting the changes. In my ideal world, I need: A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output from a Database Query. The ability to drag and drop various fields from the database query into the email template. Display of sample email results with the database query. Web application, preferably not requiring IIS. Involve as little code as possible for the end-user, but allow basic functionality (e.g. arrays/for loops) Either comes with its own email delivery engine, or writes output in a way that I can easily write a Python script to deliver the email. Support for generic Database Connectors. (I need MSSQL and MySQL) F/OSS So ... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having live-preview for the editor would suck...)
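
    As a hedged illustration of the core merge step described above (not a recommendation of a specific tool): the template-filling part can be sketched in a few lines of Python with string.Template. The field names and sample row are invented; a real solution would still need the WYSIWYG editor, loops, and delivery on top of this.

        from string import Template

        # Template text as an end-user might author it; $fields map to query columns.
        template = Template(
            "Dear $name,\n\n"
            "Your $month invoice totals $amount.\n"
            "Questions? Call us at $support_phone.\n"
        )

        # One row from the customer/billing query (illustrative values only).
        row = {"name": "Jane Doe", "month": "March", "amount": "$1,204.50",
               "support_phone": "555-0100"}

        print(template.safe_substitute(row))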

    Read the article

  • How can I create a rules engine without using eval() or exec()?

    - by Angela
    I have a simple rules/conditions table in my database which is used to generate alerts for one of our systems. I want to create a rules engine or a domain specific language. A simple rule stored in this table would be (omitting the relationships here): if temp > 40 send email. Please note there would be many more such rules. A script runs once daily to evaluate these rules and perform the necessary actions. At the beginning there was only one rule, so the script only supported that rule. However, we now need to make it more scalable to support different conditions/rules. I have looked into rules engines, but I hope to achieve this in some simple pythonic way. At the moment I have only come up with eval/exec, and I know that is not the most recommended approach. So, what would be the best way to accomplish this? (The rules are stored as data in the database, so each object like "temperature", condition like ">, <, >=, etc.", value like "40, 50, etc." and action like "email, sms, etc." are stored in the database. I retrieve these to form the condition, e.g. if temp > 50 send email; my idea was to then use exec or eval on them to make it live code, but I'm not sure if this is the right approach.)
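
    One common eval-free pattern, sketched here as a hedged illustration rather than a definitive answer: treat the stored operator and action names as keys into dictionaries of functions, so the rule rows stay pure data. The field, operator and action names below are invented, not the asker's actual schema.

        import operator

        OPERATORS = {">": operator.gt, ">=": operator.ge,
                     "<": operator.lt, "<=": operator.le, "==": operator.eq}

        def send_email(field, reading):      # placeholder action
            print(f"email alert: {field} = {reading}")

        def send_sms(field, reading):        # placeholder action
            print(f"sms alert: {field} = {reading}")

        ACTIONS = {"email": send_email, "sms": send_sms}

        # Rows as they might come back from the rules table.
        rules = [
            {"field": "temp", "op": ">", "value": 40, "action": "email"},
            {"field": "pressure", "op": ">=", "value": 100, "action": "sms"},
        ]

        readings = {"temp": 45, "pressure": 80}  # today's measurements

        for rule in rules:
            reading = readings[rule["field"]]
            if OPERATORS[rule["op"]](reading, rule["value"]):
                ACTIONS[rule["action"]](rule["field"], reading)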

    Read the article

  • Looking for an email/report templating engine with database backend - for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm managing a codebase that does SQL queries against our customer database and billing database and places that data into emails - and sends it. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So, I'm looking for a replacement to move more of this into the hands of those requesting the changes. In my ideal world, I need: A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output from a Database Query. The ability to drag and drop various fields from the database query into the email template. Display of sample email results with the database query. Web application, preferably not requiring IIS. Involve as little code as possible for the end-user, but allow basic functionality (e.g. arrays/for loops) Either comes with its own email delivery engine, or writes output in a way that I can easily write a Python script to deliver the email. Support for generic Database Connectors. (I need MSSQL and MySQL) F/OSS So ... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having live-preview for the editor would suck...)

    Read the article

  • mysql server upgrade problem from 5.0 to 5.1

    - by Avinash
    Hi, I have upgraded my MySQL server from 5.0 to 5.1, but I am having a problem related to tables that use the InnoDB storage engine. My default engine is InnoDB, so it is enabled on my server, but tables with the InnoDB engine are not displaying in phpMyAdmin. Tables with MyISAM are displaying properly, and I also can't run a query on a table with the InnoDB engine. Thanks, Avinash
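
    A hedged diagnostic sketch (not from the original post): listing each table's reported engine from information_schema makes it obvious whether the InnoDB tables stopped being recognised after the upgrade. The connection details are placeholders, and the mysql-connector-python driver is an assumption.

        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        cur.execute(
            "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s", ("mydb",))
        for name, engine in cur.fetchall():
            # ENGINE tends to show up as NULL when the table's storage engine
            # cannot be loaded (and for views).
            print(name, engine)
        conn.close()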

    Read the article

  • Unknown Apache2 + PHP5 FastCGI 500 error .. caused by search engine bots?

    - by rdjurovich
    My Ubuntu server is configured with Apache 2.2.8 and PHP 5.2.4-2ubuntu5.18 in FastCGI mode. Everything works well, except I am seeing 500 errors that only seem to come from bots accessing the server.. for example (access.log): x.125.71.104 - - [16/Nov/2011:10:27:39 +1100] "GET / HTTP/1.1" 500 41377 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" x.40.103.239 - - [16/Nov/2011:11:05:56 +1100] "GET / HTTP/1.0" 500 14717 "-" "Mozilla/5.0 (compatible; mon.itor.us - free monitoring service; http://mon.itor.us)" x.249.67.114 - - [14/Nov/2011:20:57:17 +1100] "GET / HTTP/1.1" 500 101 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" x.55.39.85 - - [14/Nov/2011:19:31:06 +1100] "GET / HTTP/1.1" 500 7032 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)._" It is my understanding that a 500 error will be thrown when the PHP process fails to respond to Apache, which could be caused by a fatal PHP error or if PHP runs out of processes.. so my assumption is that either the bots are hitting the server too hard, killing the PHP processes, or something in the request header from bots is causing a fatal error in my PHP script? If anyone can offer advice on this it would be greatly appreciated! Ryan

    Read the article

  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed. On win 7

    - by NimChimpsky
    I have had to reinstall MySQL; however, the service is failing to start, with the above cause listed in Event Viewer. One solution is apparently to delete a couple of files prefixed with 'ib_logfile' which represent any old databases. However, I do not have these files, and my service is still failing to start. When I say I don't have these files: I did a search using Windows search with zero results, and they are definitely not present in my MySQL install directory. I also don't have the "Documents and Settings/Application Data" folder referenced in the link. In fact I have only one MySQL install directory, and I know where that is - what do I need to delete/change? The instance is configured OK, I ran that as administrator and it is listed in services, but the service itself fails to start. Any tips, other than going over to PostgreSQL?

    Read the article

  • How can I set up a load balancer to direct all Search Engine Bot traffic to one server?

    - by Ryan
    We have a simple load balancer set up on Rackspace in front of 3 web server nodes. After reviewing our traffic and expenses, we found that the largest bandwidth hog is GoogleBot. Since on Rackspace we pay for bandwidth by the byte, we'd like to direct all traffic from GoogleBot to another host (MediaTemple) with unlimited bandwidth. We think this would cut our hosting bill by several thousand dollars a month. Is this possible? Advisable?

    Read the article

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crashes my application in such a harsh way (no crash logs, immediate shutdown with a black screen of death and a spinner shown for a while). It happens when I render content for a CALayer using Quartz. I suspected a memory issue (it happens only when testing on the device), but the memory logs, as well as the Instruments allocation logs, look quite OK. Let me paste in the fatal function: - (void)renderTiles{ if (rendering) { //NSLog(@"====== RENDERING TILES SKIP ======="); return; } rendering = YES; CGRect b = tileLayer.bounds; CGSize s = b.size; CGFloat imageScale = [[UIScreen mainScreen] scale]; s.height *= imageScale; s.width *= imageScale; dispatch_async(queue, ^{ NSLog(@""); NSLog(@"====== RENDERING TILES START ======="); NSLog(@"1. Before creating context"); report_memory(); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); NSLog(@"2. After creating color space"); report_memory(); NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s)); CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); NSLog(@"4. After creating context"); report_memory(); CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height); CGContextConcatCTM(ctx, flipTransform); CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height); CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled); NSLog(@"5. Before creating cgimage from context"); report_memory(); CGImageRef cgImage = CGBitmapContextCreateImage(ctx); NSLog(@"6. After creating cgimage from context"); report_memory(); dispatch_sync(dispatch_get_main_queue(), ^{ tileLayer.contents = (id)cgImage; }); NSLog(@"7. After asgning tile layer contents = cgimage"); report_memory(); CGColorSpaceRelease(colorSpace); CGContextRelease(ctx); CGImageRelease(cgImage); NSLog(@"8. After releasing image and context context"); report_memory(); NSLog(@"====== RENDERING TILES END ======="); NSLog(@""); rendering = NO; }); } Here are the logs: ====== RENDERING TILES START ======= 1. Before creating context Memory in use (in bytes): 28340224 / 519442432 (5.5%) 2. After creating color space Memory in use (in bytes): 28340224 / 519442432 (5.5%) 3. About to create context with size: {6324, 5208} 4. After creating context Memory in use (in bytes): 28344320 / 651268096 (4.4%) 5. Before creating cgimage from context Memory in use (in bytes): 153649152 / 651333632 (23.6%) 6. After creating cgimage from context Memory in use (in bytes): 153649152 / 783159296 (19.6%) 7. After asgning tile layer contents = cgimage Memory in use (in bytes): 153653248 / 783253504 (19.6%) 8. After releasing image and context context Memory in use (in bytes): 21688320 / 651288576 (3.3%) ====== RENDERING TILES END ======= The application crashes in random places, sometimes when reaching the end of the function and sometimes at a random step. Which direction should I look in for a solution? Is it possible that GCD is causing the problem? Or maybe the context size or some underlying Core Animation references?

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, and without any compelling reason. The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it. There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out. XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • Why does my terrain turn white when I get close to it?

    - by Starkers
    When I zoom in on my terrain it goes white: The further in I zoom, the greater the whiteness becomes. Is this normal? Is this to speed up rendering or something? Can I turn it off? I'm also getting these error messages in the console over and over again: rc.right != m_GfxWindow->GetWidth() || rc.bottom != m_GfxWindow->GetHeight() and GUI Window tries to begin rendering while something else has not finished rendering! Either you have a recursive OnGUI rendering, or previous OnGUI did not clean up properly. Does this bear any correlation to the issue? Update I create virtual desktops to flit between using the program Deskpot. Turning this program off and restarting has stopped the above errors appearing in the console. However, I still get white terrain when I zoom in. Not a single error message. I've restarted my computer to no avail. I have an Asus NVidia GeForce GTX 760 2GB DDR5 Direct CU II OC Edition Graphics Card. Any known issues? Update I don't think it's fog...

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously become invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"
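
    Not from the original post, but a sketch of one common convention that covers both halves of the question: convert a view-space point to fractional map coordinates on a 2:1 diamond grid, then clamp to the map bounds so the starting tile stays valid near the world edges. The tile size, map size and origin offset are illustrative assumptions.

        import math

        TILE_W, TILE_H = 64, 32          # on-screen size of one diamond tile
        MAP_W, MAP_H = 100, 100          # map size in tiles

        def view_to_map(view_x, view_y, origin_x, origin_y):
            # origin_x/origin_y: where tile (0, 0) is drawn in view space
            dx = (view_x - origin_x) / (TILE_W / 2)
            dy = (view_y - origin_y) / (TILE_H / 2)
            return (dy + dx) / 2, (dy - dx) / 2

        def clamp_to_map(map_x, map_y):
            x = min(max(math.floor(map_x), 0), MAP_W - 1)
            y = min(max(math.floor(map_y), 0), MAP_H - 1)
            return x, y

        # Starting tile for rendering: convert the view corner, then clamp.
        print(clamp_to_map(*view_to_map(0, 0, origin_x=800, origin_y=0)))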

    Read the article

  • Spring Webflow: cannot get flow execution url at action phase of portlet

    - by tabdulin
    The following exception is thrown: Caused by: java.lang.IllegalStateException: A flow execution action URL can only be obtained in a RenderRequest using a RenderResponse at org.springframework.webflow.context.portlet.PortletExternalContext.getFlowExecutionUrl(PortletExternalContext.java:206) at org.springframework.webflow.engine.impl.RequestControlContextImpl.getFlowExecutionUrl(RequestControlContextImpl.java:178) at org.springframework.webflow.mvc.view.AbstractMvcView.render(AbstractMvcView.java:172) at org.springframework.webflow.engine.ViewState.render(ViewState.java:282) at org.springframework.webflow.engine.ViewState.refresh(ViewState.java:241) at org.springframework.webflow.engine.ViewState.resume(ViewState.java:219) at org.springframework.webflow.engine.Flow.resume(Flow.java:545) at org.springframework.webflow.engine.impl.FlowExecutionImpl.resume(FlowExecutionImpl.java:259) ... 62 more It seems to me like resuming the flow execution during the action phase tries to do render-phase work. Any ideas?

    Read the article

  • How to use private inheritance (as in C++) in C# and why it is not present in C#

    - by Vijay
    I know that private inheritance is supported in C++ and only public inheritance is supported in C#. I also came across an article which says that private inheritance usually defines a HAS-A relationship, a kind of aggregation relationship between the classes. EDIT: C++ code for private inheritance: The "Car has-a Engine" relationship can also be expressed using private inheritance:
    class Car : private Engine {    // Car has-a Engine
    public:
        Car() : Engine(8) { }       // Initializes this Car with 8 cylinders
        using Engine::start;        // Start this Car by starting its Engine
    };
    Now, is there a way to create a HAS-A relationship between C# classes? That is one of the things I would like to know - how? Another curious question is why C# doesn't support private (and also protected) inheritance. Is not supporting multiple implementation inheritance a valid reason, or is there another reason? Is private (and protected) inheritance planned for future versions of C#? Would supporting private (and protected) inheritance in C# make it a better and more widely used language?

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relatio

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this: +---------+-----------+---------------+---------------------+---------------------+-----------------+ | vehicle | engine | engine_state | state_start_time | state_end_time | engine_variable | +---------+-----------+---------------+---------------------+---------------------+-----------------+ | 080025 | E01 | active | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 | 720 | | 080028 | E02 | inactive | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 | 304 | +---------+-----------+---------------+---------------------+---------------------+-----------------+ For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table: +---------------------+ | production_minute | +---------------------+ | 2009-12-01 00:00 | | 2009-12-01 00:01 | | 2009-12-01 00:02 | | 2009-12-01 00:03 | +---------------------+ The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.event_end_time. This join, however, brings up multiple environmental issues: The engineState table has 5 million rows and the productionMinute table has 130,000 rows When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
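
    As an illustration of what the proposed join produces (not a tuning recommendation): each engineState row is expanded into one row per production minute satisfying state_start_time <= production_minute <= state_end_time, which is exactly the many-to-many blow-up described above. A procedural sketch in Python using the two sample rows:

        from datetime import datetime, timedelta

        states = [
            ("080025", "E01", "active",
             datetime(2008, 1, 24, 16, 19, 15), datetime(2008, 1, 24, 16, 24, 45), 720),
            ("080028", "E02", "inactive",
             datetime(2008, 1, 24, 16, 19, 25), datetime(2008, 1, 24, 16, 22, 17), 304),
        ]

        def minutes_covered(start, end):
            # First whole minute at or after the start, mirroring the join condition.
            minute = start.replace(second=0, microsecond=0)
            if minute < start:
                minute += timedelta(minutes=1)
            while minute <= end:
                yield minute
                minute += timedelta(minutes=1)

        for vehicle, engine, state, start, end, variable in states:
            for minute in minutes_covered(start, end):
                print(minute, vehicle, engine, state, variable)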

    Read the article

  • Proper snowball analyzer configuration when using Grails Searchable Plugin

    - by Wirsbro
    To improve stemming we want to switch from the default analyzer to snowball; however, we are having a lot of difficulty with the proper settings and would appreciate any help. Environment: - Sun's Java 1.6.16 - Grails 1.2.2 - Searchable Plug-In 0.5.5 Config.groovy: We have tried both settings: compassSettings = ['compass.engine.analyzer.stemmed.type': 'snowball', 'compass.engine.analyzer.stemmed.name': 'English'] compassSettings = ['compass.engine.analyzer.snowball.type': 'snowball', 'compass.engine.analyzer.snowball.name': 'English', 'compass.engine.analyzer.search.type': 'snowball', 'compass.engine.analyzer.search.name': 'English'] Search.groovy - the invocation: def searchResult = searchableService.search(params.q, withHighlighter: { highlighter, index, sr -> if (!sr.highlights) { sr.highlights = [] } try { sr.highlights[index] = highlighter.fragments("content")[0..2].join(" ") } catch (IndexOutOfBoundsException ex) { sr.highlights[index] = highlighter.fragment("content") } }) def suggestion = searchableService.suggestQuery(params.q) if (suggestion != params.q) { searchResult.suggestedQuery = suggestion }

    Read the article

  • Troubleshooting High Load on Plesk LAMP Dedicated Server

    - by Callmeed
    I have 2 nearly identical dedicated servers with the same provider. They also run a nearly identical software stack: RedHat 5 64-bit, Plesk, PHP, Apache, & MySQL. We use them for hosting custom sites we build. The problem is, while our 1st server has a load average (in top) of around 0.3, the 2nd server consistently has a load average of around 4.0 or higher. Basic functions in Plesk are delayed and there is a bit of latency when executing shell commands. Anyone have ideas why it would be so high? And why it would differ from our other server so much? Here is my current top output (sorted by %MEM) ... Any help is much appreciated ... top - 21:48:04 up 100 days, 4:28, 1 user, load average: 3.74, 4.20, 4.23 Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie Cpu(s): 0.8%us, 0.4%sy, 0.0%ni, 91.3%id, 7.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 12290884k total, 11886452k used, 404432k free, 2920212k buffers Swap: 2096472k total, 244k used, 2096228k free, 6560692k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22536 apache 15 0 860m 547m 6484 S 0.0 4.6 0:10.96 httpd 26467 apache 15 0 859m 546m 6408 S 0.0 4.5 0:07.67 httpd 3620 apache 15 0 859m 545m 5552 S 0.0 4.5 0:06.15 httpd 1895 apache 15 0 858m 544m 6356 S 0.0 4.5 0:08.25 httpd 16933 apache 15 0 858m 544m 5488 S 0.0 4.5 0:01.57 httpd 6431 apache 15 0 856m 542m 6076 S 10.6 4.5 0:05.32 httpd 14417 apache 15 0 856m 542m 5568 S 0.0 4.5 0:03.88 httpd 15403 apache 15 0 855m 541m 5616 S 0.0 4.5 0:03.73 httpd 19165 apache 15 0 853m 539m 6252 S 0.0 4.5 0:12.40 httpd 15898 apache 15 0 852m 539m 5376 S 0.0 4.5 0:02.68 httpd 14401 apache 15 0 851m 538m 5460 S 0.0 4.5 0:02.97 httpd 15393 apache 15 0 851m 538m 5404 S 0.0 4.5 0:03.12 httpd 15427 apache 15 0 851m 538m 5496 S 0.0 4.5 0:02.44 httpd 14412 apache 15 0 851m 538m 5324 S 0.0 4.5 0:02.15 httpd 18330 apache 15 0 851m 537m 5136 S 0.0 4.5 0:01.30 httpd 18303 apache 15 0 848m 535m 5140 S 0.0 4.5 0:00.47 httpd 21190 apache 15 0 845m 533m 3988 S 0.0 4.4 0:00.33 httpd 15923 root 18 0 822m 521m 9928 S 0.0 4.3 10:04.81 httpd 22021 apache 15 0 828m 520m 4964 S 0.0 4.3 0:00.16 httpd 22146 apache 15 0 823m 515m 3016 S 0.0 4.3 0:00.02 httpd 22345 apache 15 0 822m 514m 2408 S 0.0 4.3 0:00.00 httpd 14721 apache 15 0 733m 510m 488 S 0.0 4.3 0:00.00 httpd 5094 root 15 0 1452m 122m 15m S 1.0 1.0 852:24.24 java 4636 mysql 15 0 532m 57m 6440 S 1.0 0.5 488:05.84 mysqld 4799 popuser 15 0 166m 53m 2368 S 0.0 0.4 0:36.64 spamd 16761 popuser 15 0 159m 46m 2312 S 0.0 0.4 0:00.38 spamd 4797 root 15 0 158m 45m 2448 S 0.0 0.4 0:01.27 spamd 5074 root 34 19 255m 20m 2144 S 0.0 0.2 1:37.53 yum-updatesd 9917 named 15 0 366m 9804 1980 S 0.0 0.1 0:10.26 named 4332 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.06 sw-engine-cgi 4341 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.07 sw-engine-cgi 4350 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.09 sw-engine-cgi 4352 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.11 sw-engine-cgi 4376 ntp 15 0 23388 5020 3896 S 0.0 0.0 0:00.58 ntpd 4331 sw-cp-se 15 0 61336 4572 1480 S 0.0 0.0 5:53.22 sw-cp-serverd 4213 haldaemo 15 0 31252 4460 1684 S 0.0 0.0 0:01.52 hald 4778 postgres 18 0 117m 4164 3484 S 0.0 0.0 0:00.11 postmaster 18555 root 16 0 98.3m 3716 2852 S 0.0 0.0 0:00.01 sshd 4488 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4489 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4492 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4493 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4490 sso 18 0 119m 3040 220 S 0.0 0.0 0:00.00 sw-engine-cgi

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers Cost 1 $500 16 $8,000 256 $128,000 As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles Render Time Cost 1 256 hours $30.72 16 16 hours $30.72 256 1 hour $30.72 Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: · On-Premise o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images. o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. · Windows Azure o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment. The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration> About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration> Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each. Completed Animation I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. · Windows Azure can provide massive compute power, on demand, in a matter of minutes. · The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. · Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. · No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. · Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. · Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock our in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored, if the scaling algorithms are not optimal it could get very expensive!
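The CloudRay source is not included in the post, but the queue pattern described above is straightforward to sketch. The following is a minimal illustration, assuming the Windows Azure SDK 1.x StorageClient API; the queue name, the frame message format and the RenderFrame helper are assumptions, not CloudRay's actual code.

// Sketch of queue-based load balancing of render jobs across worker roles.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class RenderJobs
{
    // Run once by the client: enqueue one message per frame of the animation.
    public static void EnqueueFrames(CloudStorageAccount account, int frameCount)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("framejobs");
        queue.CreateIfNotExist();
        for (int frame = 1; frame <= frameCount; frame++)
        {
            queue.AddMessage(new CloudQueueMessage(frame.ToString()));
        }
    }

    // Called from each worker role's Run() loop: whichever instance is free
    // next picks up the next frame, which is what balances the load.
    public static void ProcessNextFrame(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("framejobs");
        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(30));
        if (message == null)
        {
            return; // queue drained - the animation is complete
        }

        int frame = int.Parse(message.AsString);
        RenderFrame(frame); // hypothetical call into the ray tracer

        // Delete the message only after the frame has rendered successfully, so
        // that a crashed instance lets another instance retry the same frame.
        queue.DeleteMessage(message);
    }

    private static void RenderFrame(int frame) { /* invoke the renderer here */ }
}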

    Read the article

  • GWT with spring security not working on app engine live server.

    - by bedanand
I configured GWT with Spring and Spring Security, and it works fine on the local development server for Google App Engine. When I deploy it to appspot, however, the log shows a critical error and the browser shows a 500 server error.

log error

Uncaught exception from servlet
javax.servlet.UnavailableException: Initialization failed.
at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:200)
at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:168)
at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:123)
at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:243)
at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5838)
at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5836)
at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24)
at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:398)
at com.google.net.rpc.impl.Server$2.run(Server.java:852)
at com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56)
at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:576)
at com.google.net.rpc.impl.Server.startRpc(Server.java:807)
at com.google.net.rpc.impl.Server.processRequest(Server.java:369)
at com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:442)
at com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319)
at com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290)
at com.google.net.async.Connection.handleReadEvent(Connection.java:474)
at com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:831)
at com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:207)
at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:103)
at com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251)
at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:404)
at java.lang.Thread.run(Unknown Source)

web.xml

<web-app>
  <servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>*.rpc</url-pattern>
  </servlet-mapping>
  <filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
  <!-- Default page to serve -->
  <welcome-file-list>
    <welcome-file>PushUrl.html</welcome-file>
  </welcome-file-list>
</web-app>

appengine-web.xml

<application>pushurl</application>
<version>1</version>
<!-- Configure java.util.logging -->
<system-properties>
  <property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>
<sessions-enabled>true</sessions-enabled>

applicationContext.xml

<security:http auto-config="true">
  <security:intercept-url pattern="/**/users.rpc" access="ROLE_USER"/>
  <security:intercept-url pattern="/**/categories.rpc" access="ROLE_ADMIN"/>
  <security:intercept-url pattern="/css/**" filters="none"/>
  <security:intercept-url pattern="/login.jsp*" filters="none"/>
  <security:form-login login-page='/login.jsp' />
</security:http>
<security:authentication-manager>
  <security:authentication-provider>
    <security:user-service>
      <security:user name="jimi" password="jimi" authorities="ROLE_USER, ROLE_ADMIN" />
      <security:user name="bob" password="bob" authorities="ROLE_USER" />
    </security:user-service>
  </security:authentication-provider>
</security:authentication-manager>

dispatcher-servlet.xml

<bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
  <property name="mappings">
    <value>
      /**/users.rpc=userService
      /**/categories.rpc=categoryService
    </value>
  </property>
</bean>
<bean id="userController" class="com.beda.pushurl.server.GwtRpcController">
  <property name="remoteService" ref="userService"></property>
</bean>
<bean id="userService" class="com.beda.pushurl.server.UserServiceImpl">
  <property name="userDAO" ref="myUserDAO"></property>
</bean>
<bean id="categoryService" class="com.beda.pushurl.server.CategoryServiceImpl">
  <property name="categoryDAO" ref="myCategoryDAO"></property>
</bean>
<bean id="myUserDAO" class="com.beda.pushurl.server.dao.UserDAOImpl"></bean>
<bean id="myCategoryDAO" class="com.beda.pushurl.server.dao.CategoryDAOImpl"></bean>

    Read the article

  • Attach to Process in Visual Studio

    - by Daniel Moth
One option for achieving step 1 in the Live Debugging process is attaching to an already running instance of the process that hosts your code, and this is a good place for me to talk about debug engines. You can attach to a process by selecting the "Debug" menu and then the "Attach To Process…" menu in Visual Studio 11 (Ctrl+Alt+P with my keyboard bindings), and you should see something like this screenshot:

I am not going to explain this UI; besides being fairly intuitive, there is good documentation on MSDN for the Attach dialog. I do want to focus on the row of controls that starts with the "Attach to:" label and ends with the "Select..." button. Between them is the read-only textbox that indicates the debug engine that will be used for the selected process if you click the "Attach" button. If you haven't encountered that term before, read about debug engines on MSDN.

Notice that the "Type" column shows the Code Type(s) that can be detected for the process. Typically each debug engine knows how to debug a specific code type (the two terms tend to be used interchangeably). If you click on a different process in the list with a different code type, the debug engine used will be different. However, note that this is the automatic behavior. If you believe you know best, or, more typically, you want to choose the debug engine for a process using more than one code type, you can do so by clicking the "Select..." button, which should yield a "Select Code Type" dialog like this one:

In this dialog you can switch to the debug engine you want to use by checking the box in front of your desired one, then hit "OK", then hit "Attach" to use it. Notice that the dialog suggests that you can select more than one. Not all combinations work (you'll get an error if you select two incompatible debug engines), but some do. Also notice in the list of debug engines one of the new players in Visual Studio 11, the GPU debug engine - I will be covering that on the C++ AMP team blog (and no, it cannot be combined with any others in this release).

Comments about this post by Daniel Moth welcome at the original blog.
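The dialog is the usual way to pick an engine and attach, but the same choice can be made programmatically through the Visual Studio automation model. The following is a minimal sketch only, assuming the EnvDTE/EnvDTE80 automation assemblies are referenced; the ProgID, the process name and the "Managed" engine name are illustrative, and Attach2 will fail if the requested engine cannot debug the target process.

// Sketch: attach the debugger to a running process with an explicitly chosen
// debug engine, instead of relying on automatic code type detection.
using System;
using System.Runtime.InteropServices;
using EnvDTE80;

class AttachExample
{
    static void Main()
    {
        // Get a running Visual Studio 11 instance (ProgID assumed).
        DTE2 dte = (DTE2)Marshal.GetActiveObject("VisualStudio.DTE.11.0");

        foreach (EnvDTE.Process process in dte.Debugger.LocalProcesses)
        {
            if (process.Name.EndsWith("MyApp.exe", StringComparison.OrdinalIgnoreCase))
            {
                // Process2.Attach2 accepts the name of the debug engine to use,
                // playing the role of the "Select Code Type" dialog.
                ((Process2)process).Attach2("Managed");
                break;
            }
        }
    }
}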

    Read the article
