Search Results

Search found 684 results on 28 pages for 'pipeline'.

Page 16/28 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • iPhone OpenGL ES using FBX - How do I import animations from FBX into iPhone?

    - by Dominic Tancredi
    I've been researching this extensively. We have a game that's 90% complete, using custom game logic on iPhone OS 4.0. We've been asked to import a 3D model and have it animate when various events happen in the game. I've put together an OpenGL view (based on EAGL and several examples) and used Blender to import the model, as well as Jeff LaMarche's script to export the .h file. After much trial, it worked, and I was able to show a rotating model (unskinned). However, the 3D artist hadn't UV-unwrapped the model, so he provided me with a new model, this one as a Maya file, along with the animation in FBX format, an .obj file, and an unwrapped .tga texture. My question is: how can I use FBX with OpenGL ES on the iPhone to run through animations? And what's the pipeline to get this Maya file into Blender so I can create a .h file? I've tried obj2opengl, but the model is missing normals (did it have them in the first place?) and the skin isn't applying at all (possibly a code issue, something I think I can fix). I'm trying to use Jeff LaMarche's animation tutorial but can't figure out how to get the model files into a proper .h file for use. Any advice?

    Read the article

  • Adeos's role w.r.t Linux

    - by Anisha Kaul
    The event pipeline: The fundamental Adeos structure one must keep in mind is the chain of client domains asking for event control. A domain is a kernel-based software component which can ask the Adeos layer to be notified of: · every incoming external interrupt, or auto-generated virtual interrupt; · every system call issued by Linux applications; · other system events triggered by the kernel code (e.g. Linux task switching, signal notification, Linux task exits, etc.). From: Life with Adeos: http://www.xenomai.org/documentation/xenomai-2.4/pdf/Life-with-Adeos-rev-B.pdf Question: Adeos is supposed to sit between the hardware and the Linux kernel. I can understand Adeos telling Linux about hardware interrupts, but why should Adeos know about the system calls issued by Linux applications?

    Read the article

  • IIS 7.5 default permission - is restriction needed?

    - by Caroline Beltran
    I am using IIS 7.5, and I do not need to explicitly specify permissions for my ISAPI application to execute. Additionally, the application can create subdirectories and create and delete files without me specifying permissions. Since I am using the default permissions, I checked to see if web.config was safe from prying eyes over the web, and it can’t be read, which is good. My app also creates some .log and .ini files, which are likewise not viewable over the web. I did notice that .txt files are viewable. I really don’t know how the default permissions allow my app to do so much. Is this safe, or do I need to lock things down? To be honest, I don’t know which accounts to restrict. App details: my ISAPI has an ‘allowed’ entry in ISAPI and CGI Restrictions; the folder and subfolders containing my application have ‘default’ permissions set; the application pool is using ‘classic’ pipeline mode and no managed code; pass-through authentication is in use. Thank you for your time.
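
    (An aside on the .txt observation: the likely reason .txt is viewable while .log and .ini are not is that IIS only serves static files whose extensions have a MIME mapping, and .txt has one by default while .log and .ini do not. If you would rather lock .txt down explicitly, a minimal request-filtering sketch for web.config might look like the following — an illustration, not a full hardening guide:)

        <!-- Sketch: explicitly block .txt files from being served
             (IIS 7.x request filtering, in the application's web.config). -->
        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <fileExtensions>
                  <add fileExtension=".txt" allowed="false" />
                </fileExtensions>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>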

    Read the article

  • Cannot access Application configured on local IIS 7 using IP/machine name

    - by SilverHorse
    I have a 64-bit Windows 7 machine with IIS 7. I have a default website on IIS; its binding is {IP: All Unassigned, Port: 80, Host Name: blank}. I have added a new ASP.NET application to that website, mapped its physical path, and set the virtual path as "MyWebApp". The application pool for "MyWebApp" is "DefaultAppPool" {.NET Framework: 4.0; Managed Pipeline Mode: Classic}. The problem I am facing is that I can access the website using http://localhost, http://IP.IP.IP.IP and http://MyMachineName, but I cannot access the application other than by using http://localhost/MyWebApp. What should I do if I want to access the web app using http://MyMachineName/MyWebApp or http://IP.IP.IP.IP/MyWebApp? Please note: I have already created an inbound rule in the firewall settings to allow all HTTP traffic on port 80.

    Read the article

  • Create Google Maps screenshots at regular intervals

    - by Dave Jarvis
    Background: People are concerned that building a pipeline to the West Coast of Canada will increase the number of oil tankers, thus increasing the probability of a major oil spill, thereby creating an environmental catastrophe. The AIS Live Ships Map website captures real-time marine traffic updates using a Google Maps interface. While it is possible to obtain data from an AIS data feed, the feeds are often either pay-for-use or otherwise encumbered with license restrictions. Problem: The AIS Live Ships website presents a map in the browser. The map above has had its location interactively changed to focus on the area in question: the northern strait of Vancouver Island. Question: How would you create a service that captures the map every 30 minutes and that could run, with neither user intervention nor a significant memory footprint, for a few years? Idea #1: Create a virtual machine, install and run a lightweight browser, and use Shutter to take captures at regular intervals. Idea #2: Use Python's Ghost.py (WebKit) to automate the captures. Thank you!
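
    (Whatever capture tool ends up being used, the every-30-minutes cadence itself is simple to schedule. A cron sketch — capture-map.sh is a hypothetical wrapper around the chosen screenshot tool:)

        # Run a (hypothetical) capture script every 30 minutes, logging output:
        */30 * * * * /usr/local/bin/capture-map.sh >> /var/log/map-capture.log 2>&1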

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation-curve solver (reinventing Blender vertex groups and IPOs), if possible. Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture-coordinate buffers are static. Current Considerations: (1) Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING). (2) Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC.) Known Trade-Offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame — both client and pipeline intensive. In 2 above, there would be few VBOs, and therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
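
    (For option 2, the per-frame addressing is plain pointer arithmetic into the static VBO. A minimal sketch — C#, names hypothetical, assuming tightly packed interleaved position+normal floats and OpenTK-style GL bindings:)

        // Sketch of option 2's frame addressing (names hypothetical).
        // One static VBO per animation: [frame 0 verts+normals][frame 1 ...]
        int floatsPerVertex = 3 + 3;                                  // position + normal
        int bytesPerFrame   = numVertices * floatsPerVertex * sizeof(float);
        int byteOffset      = character.CurrentFrame * bytesPerFrame; // per character

        // At draw time, aim the attribute pointers into the shared VBO
        // at this character's frame (OpenTK-style calls, assumed):
        GL.BindBuffer(BufferTarget.ArrayBuffer, animationVbo);
        GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false,
                               floatsPerVertex * sizeof(float), byteOffset);
        GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, false,
                               floatsPerVertex * sizeof(float),
                               byteOffset + 3 * sizeof(float));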

    Read the article

  • App_offline.htm and SharePoint and wholly contained images…

    - by Shawn Cicoria
    The question came up today of whether we could use an “app_offline.htm” file along with HTML in that file that would reference images. First, I wasn’t 100% sure the app_offline.htm would work, but it sure did.  Since it’s just the ASP.NET hosting process that detects the file, it circumvents loading any HttpApplications (SharePoint) beyond just serving up the HTML content. The second question was about having something more than text, specifically <img> tags.  Since the HttpHandlers are taking all requests for all resources through the ASP.NET pipeline, as soon as the app_offline.htm file is there, nothing else will get served from within that web application.    Sure, we could host the images outside the web app, but what fun would that be? So, I found this link on using an image in app_offline.htm: http://pbodev.wordpress.com/2009/12/20/app_offline-htm-with-an-image-yes-we-can/ It turns out the src attribute (in fact many attributes) can take a stream of data represented by a MIME type and base64 encoding inline – such as: <img style="height:515px;width:700px;border-width:0px;" src="data:image/jpeg;base64,/9j/4AAQSkZJRgA   One of the problems we had was that the image was too large, so we sliced up the image, but ended up with spaces between each of the slices.  Lo and behold, it works with CSS as well:   <style type="text/css"> .Slice_1_jpg { position: absolute; left:0px; top:0px; width:1011px; height:148px; background: url("data:image/jpeg;base64,/9j/4AAQSkZ   And the body is as follows: <body> <div class="Slice_1_jpg"> </div>   For this, I wrote a little ASP.NET site that, using a file upload control, will generate the necessary contents of what needs to go in the “data” value for the stream.  A link to the code is here: http://cicoria.com/downloads/CreateBase64OfImage.zip
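
    The heart of that helper site is a couple of lines of .NET — a minimal sketch (file path hypothetical):

        // Sketch: turn an image into a data URI for app_offline.htm.
        byte[] bytes = System.IO.File.ReadAllBytes(@"C:\temp\offline-banner.jpg");
        string dataUri = "data:image/jpeg;base64," + Convert.ToBase64String(bytes);
        // Paste dataUri into the <img src="..."> or CSS url("...") value.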

    Read the article

  • Grepping grep output fails

    - by viraptor
    I'm trying to grep the output of ngrep. Unfortunately, when I add another grep to the pipeline, I get no output at all. It can be some other command too - cat / grep / tee - anything breaks the chain. Example: # this works: $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' -- U +1.469535 xxx:5060 -> xxx:5060 SIP/2.0 180 Ringing. -- U +0.001384 xxx:5060 -> xxx:2048 SIP/2.0 180 Ringing. but # these don't: $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | egrep '^U' $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | cat $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | tee test If I use cat somefile instead of ngrep at the start, everything works as expected. Any ideas what could be going wrong here?
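
    (A hedged note: the classic cause of exactly this symptom is stdio buffering — grep line-buffers only when its stdout is a terminal, so adding any downstream command switches it to block buffering and the output sits in a buffer that never fills. If that is what is happening here, GNU grep's --line-buffered flag, or coreutils' stdbuf, is the usual workaround:)

        # Sketch of the usual buffering workaround (GNU grep / coreutils assumed):
        ngrep -l -q -T -Wbyline -d any udp and port 5060 \
          | egrep --line-buffered -B1 '^SIP/2.0 180' \
          | egrep '^U'

        # or force line buffering on any stage explicitly:
        #   ... | stdbuf -oL egrep -B1 '^SIP/2.0 180' | tee test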

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it. 1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this? 2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level? 3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case would each chunk have a collision bounding box to detect when the camera or player is nearby? Or is there a better way of doing this? (raycast maybe?) 3b: Do the chunks calculate the camera distance themselves? 4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? Example below:
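
    (On 3a, one common approach — a sketch of one formulation among several, using the Unity API — is to skip colliders and raycasts entirely: measure the camera's distance to each chunk's axis-aligned bounds, and split while the chunk's distance-scaled geometric error is too large:)

        using UnityEngine;

        // Sketch: LOD selection from camera distance to a chunk's AABB.
        static class ChunkedLod
        {
            static float DistanceToChunk(Camera cam, Bounds chunkBounds)
            {
                Vector3 p = cam.transform.position;
                // ClosestPoint returns p itself when the camera is inside
                // the bounds, giving distance 0 (maximum detail).
                return Vector3.Distance(p, chunkBounds.ClosestPoint(p));
            }

            public static bool ShouldSplit(Camera cam, Bounds b,
                                           float geometricError, float tau)
            {
                // Perceived error shrinks with distance, so near chunks
                // subdivide further; tau is the tolerated screen error.
                float d = Mathf.Max(DistanceToChunk(cam, b), 0.001f);
                return geometricError / d > tau;
            }
        }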

    Read the article

  • OT: Fixing choppy video playback on OS X

    - by terrencebarr
    This is a bit off-topic, but I wanted to share because it seems a lot of people are running into issues with choppy video playback and stutter on Mac OS X. I am using a Mac Mini with Snow Leopard (10.6.8) as a home media center, and it has worked great in the past, playing back music and videos from multiple sources (web, QuickTime, VLC, EyeTV). A few weeks ago the video playback from all my sources started to become choppy, to stutter, and often the picture would hang for seconds at a time. Totally unusable. Drove me nuts for two weeks. After much research and trial-and-error, it turns out the problem was an outdated Flash Player which seems to have messed up the video pipeline for the entire system. The short of it is, I updated the Flash Player to version 11 directly from the Adobe web site, rebooted the Mac Mini, and all is well again! Judging from the various posts across the web, video playback appears to be a fairly widespread problem for Mac users, and I hope this helps some of you out there! And I can’t wait to get rid of Flash altogether – I can’t count the times it has crashed my browser, hung my system, and screwed things up. Thanks Adobe ;-( Cheers, – Terrence Filed under: Uncategorized Tagged: Adobe Flash, Mac OS X

    Read the article

  • What Technology can Render Medium Scale 3d Environments in a Web-Browser

    - by JakeM
    I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites including contours, water pipeline locations, buildings, etc. I am trying to decide which technology/libraries to use that will create a web app that works in the Android web browser, iOS Safari, IE9, Safari, Firefox and Chrome — and also which technology will provide speed of development. I understand that this is 'asking for my cake and eating it too'/'asking for the moon', but I don't know all the technologies out there - so there may be advanced libraries that can render 3D environments across many web browsers, including the main smart-phone ones, and I don't know of them. The 3D rendering would not be highly detailed buildings or water with effects, but rather simple 3D representations of these objects. The environment would be navigable by dragging around, and you could view the landscape in layers (view only contour lines, view only underground pipelines, view only sewerage pipes, etc.). Are there any 3D libraries for web browsers out there? Is there a way to run OpenGL (or OpenGL ES) through a web browser? What technology would you use if you were making this kind of app/web app that should work on desktop Windows, Android, iOS and Windows Phone? Is there any technology I have failed to mention that would be good for this kind of project? I am tending towards a browser-driven web app because I get that cross-platform ability (where it even works on Linux and Mac OS by using compatible web browsers). Also, I know of CSS3 transforms that can create cubes that rotate in 3D space (NOTE: only works in WebKit browsers - so no IE :( ). But I don't know if CSS3 is robust enough to render whole 3D environments - do you think it could? Maybe I could use HTML5 canvases for this? Can Google Maps create custom 3D maps?

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger — so much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: the actual data that needs to be used lives in a database, not in a spreadsheet; the actual data is much, much bigger — too big to fit into the normal R memory space and too big to want to move across the network; the overall process needs to run fast — much faster than a single processor; the actual data needs to be kept secured — another reason not to want to move it from the database and across the network; and the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In this post, we will show how we moved from the sample-data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise — we had sample data but we did not have the full customer data yet.  So our database was empty.  But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to SQL*Plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So we then proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function.  It worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”.)
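
    (The original post showed the code as screenshots, which did not survive this excerpt. A reconstructed sketch of the call — names taken from the description above, connection details assumed:)

        # Reconstructed sketch (assumed details; the original code was an image).
        library(ORE)
        ore.connect(user = "ruser", sid = "orcl", host = "dbserver",
                    password = "...", all = TRUE)   # connection details assumed

        # IRR_DATA is an ore.frame proxying the database table of the same name.
        results <- ore.groupApply(
          IRR_DATA,
          IRR_DATA$ACCOUNT,            # split by the ACCOUNT column
          function(df) {
            library(myIRR)             # our package, installed on the DB server
            SimpleMWRR(df)             # the rate-of-return function from Part I
          })
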
The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  It is the Oracle database that actually splits the IRR_DATA table by ACCOUNT.  The Oracle database then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Finally, the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server — this is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipelined table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
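
    (Again, the original SQL appeared as a screenshot. Reconstructing it from the argument-by-argument description — names assumed — the call would look something like:)

        -- Reconstructed sketch (names assumed; the original was a screenshot).
        select *
          from table(irr_dataGroupEval(
                 cursor(select * from IRR_DATA),               -- input cursor
                 cursor(select 1 as "ore.connect" from dual),  -- extra parameters for R
                 'select ACCOUNT, 1 IRR from IRR_DATA',        -- dummy select: output shape
                 'ACCOUNT',                                    -- column to split/group by
                 'SimpleMWRR'                                  -- R function (see rqScriptCreate)
               ));
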
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database parallel query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept (numbers 1 through 5 below correspond to the highlighted numbers on the images; you might need to right-click on the images and view them full-screen to see everything): The SQL completed in 110 seconds (1.8 minutes). We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation). We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation). We ran with 72 degrees of parallelism spread across 4 database servers. Most of our 110 seconds was spent in the “External Procedure call” event. On average, we performed 8,200 executions of our R function per second (911k accounts / 110 s). On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts). On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods). On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110 s). R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes. 
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS Dataflow Destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE destination will be available. That being said, the SSIS team have made available a MERGE destination component via CodePlex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary.   In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing); [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
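
    For reference, the T-SQL that such a destination would ultimately wrap looks like this (a generic sketch; table and column names are made up for illustration):

        MERGE INTO dbo.DimCustomer AS tgt
        USING staging.Customer AS src
            ON tgt.CustomerKey = src.CustomerKey
        WHEN MATCHED THEN
            UPDATE SET tgt.Name = src.Name,
                       tgt.City = src.City
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerKey, Name, City)
            VALUES (src.CustomerKey, src.Name, src.City);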

    Read the article

  • Load order in XNA?

    - by marc wellman
    I am wondering whether there is a mechanism to manually control the call order of void Game.LoadContent(), as is the case with void Game.Draw(GameTime gt) by setting int DrawableGameComponent.DrawOrder — other than the order that results from adding components to the Game.Components container? And maybe there exists something similar for Game.Update(GameTime gt)? UPDATE To exemplify my issue, consider that you have several game components which depend on each other as regards their instantiation. All inherit from DrawableGameComponent. Now suppose that in one of these components you are loading a Model from the game's content pipeline and adding it to some static container in order to provide access to it for other game components. protected override void LoadContent() { // ... Model m = _contentManager.Load<Model>(@"content/myModel"); // GameComponents is a static class with an accessible list where game components reside. GameComponents.AddComponent(m); // ... } Now it's easy to imagine that this component's load method has to precede those of other game components that want to access the model m in their own load method.
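
    (For what it's worth, XNA initializes components — and hence calls their LoadContent — in the order they were added to Game.Components, so adding producers before consumers is the zero-code fix. If you would rather make the order explicit, one sketch — the interface and names are invented, not an XNA facility — is to drive loading yourself from the Game:)

        // Sketch (invented names): explicit, dependency-ordered content loading.
        // Requires: using System.Linq;
        public interface IOrderedLoad
        {
            int LoadOrder { get; }      // lower values load first
            void LoadOwnContent();      // components keep their own LoadContent() empty
        }

        protected override void LoadContent()
        {
            // Load producers (e.g. the component that publishes the Model)
            // strictly before their consumers.
            foreach (var c in Components.OfType<IOrderedLoad>()
                                        .OrderBy(o => o.LoadOrder))
            {
                c.LoadOwnContent();
            }
            base.LoadContent();
        }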

    Read the article

  • Bash: create anonymous fifo

    - by Adrian Panasiuk
    We all know mkfifo and pipelines. The first creates a named pipe, so one has to select a name, most likely with mktemp, and later remember to unlink it. The second creates an anonymous pipe, with no hassle over names and removal, but the ends of the pipe get tied to the commands in the pipeline; it isn't really convenient to somehow get a grip on the file descriptors and use them in the rest of the script. In a compiled program, I would just do ret=pipe(filedes); in Bash there is exec 5<>file, so one would expect something like "exec 5<> -" or "pipe <5 >6" — is there something like that in Bash?
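
    (A hedged pointer: Bash 4's coproc builtin comes close — it creates an anonymous pipe whose ends land in ordinary file descriptors, at the cost of tying them to a helper process:)

        # Sketch: coproc (bash 4+) yields an anonymous pipe with script-visible
        # fds: ${COPROC[0]} is the read end, ${COPROC[1]} the write end.
        coproc cat                        # 'cat' just shuttles bytes through

        echo "hello" >&"${COPROC[1]}"     # write into the pipe
        read -r line <&"${COPROC[0]}"     # read back out
        printf '%s\n' "$line"             # -> hello

        eval "exec ${COPROC[1]}>&-"       # close the write end when finished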

    Read the article

  • CodePlex Daily Summary for Friday, October 19, 2012

    CodePlex Daily Summary for Friday, October 19, 2012. Popular Releases: Orchard Project: Orchard 1.6 RC: RELEASE NOTES This is the Release Candidate version of Orchard 1.6. You should use this version to prepare your current developments for the upcoming final release, and report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...Rawr: Rawr 5.0.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): We now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...TFS 2012 Server/service Setup for Demo: TfsDemo_1.0.0.2: Release 1.0.0.2 New Stuff Feature 1 - Now add team favorite queries using the Tfs demo setup application. Simply specify the name of the work item query in the demoConfig.xml file and let the application work its magic. Feature 2 - Exclude the sections you do not want to be run as part of the demo. Mark the sections you don't want to run during the demo with the attribute Run="false" in the demoConfig.xml. Bug Fix 1 - If the DemoConfig.xml contains users or email addresses in work item a...XamlImageConverter: Xaml Image Converter 3.2: VisualStudio Integration Installer is now a VSIX Extension.Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Reverted back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05. Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the upcoming MonoGame 3.0. It contains an installer which will install a binary release of MonoGame on Windows boxes for the following platforms: Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time, as the installers for those platforms are not available yet. The installer will also install a bunch of project templates for Visual Studio 2010, 2012 and MonoDevelop. For those of you wish...WPUtils: WPUtils 1.3: Blend SDK for Silverlight provides a HyperlinkAction which is missing in Blend SDK for Windows Phone. This release adds such an action which makes use of WebBrowserTask to show a web page. You can also bind the hyperlink to your view model.
NOTE: Windows Phone SDK 7.1 or higher is required. Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors. Changed window flashing API (fix your config files). Added NetworkMonitorWidget (thanks to weiwen). Any issues/recommendations/requests for future versions? This is the 64-bit version of the release. Be sure to use that if you are on 64-bit Windows. Works with "Required DLLs v3".CODE Framework: 4.0.21017.0: See the change log in the Documentation section for details.Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial VersionMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Add support for .net 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3; Scheduler Import & Export feature (for Professional and Enterprise Editions); UTC datetime and timezone support; .net 4.5 and Visual Studio 2012 migration; client magelia global refactoring; release of a NuGet package to help developers speed up development http://nuget.org/packages/Magelia.Webstore.Client; optimization of the data update mechanism (a.k.a. "burst"); performance improvement of the d...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video. Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.2: for detailed release notes check the release notes. Revitalized IndexedDB provider. Now you c...VFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.Droid Explorer: Droid Explorer 0.8.8.8 Beta: fixed the icon for packages on the desktop; fixed the install dialog closing right when it starts; removed the link to "set up the sdk for me" as this is no longer supported; fixed a bug where the device selection dialog would show even if there was only one device connected; fixed the toolbar having a "gap" between it and other toolbars; removed main menu items that do not have any menus. Fiskalizacija za developere: FiskalizacijaDev 1.0: (translated from Croatian) The first version of this project, still marked as BETA - this means our tests passed successfully :) But since we do not make cash-register software ourselves, none of this has been tried under "real" conditions - every suggestion, remark or bug report is welcome. For all of that, please use Discussions or the Issue Tracker. At the moment the runtime binary is available as Any CPU for .NET version 2.0. Let us know if builds for 32-bit/64-bit are also needed, as well as for .N...Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is a development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms.
Support for plugins. Support for extensions. Communication layer and protocol are platform independent (ZeroMQ, Protocol Buffers). Bug fixes. New /invite command. Edit the sent message. Disable update check. NOTE: This is a development release and not recommended for production use.NDatabase - C# Lightweight Object Database: NDatabase 2.0.1 Release: This release contains the stable version of NDatabase, a C# lightweight object database. Content: binaries (dll + pdb), sources (sources, unit tests, samples). Changes: namespaces and dll name are both changed to NDatabase2; the query API is changed - now it is using generics in every possible place; changed the way fields from a class are stored - they are now ordered by name, which keeps the db working even if someone changes the order of fields in the class definition (BREAKING CHANGE). A...AcDown Downloader Framework: AcDown v4.2: (The release notes were originally in Chinese; the text was mangled in transcription. Recoverable details: a C# downloader for Acfun, Bilibili, YouTube, SF and other sites, with the AcPlay player; requires .NET Framework 2.0 or higher on 32/64-bit Windows XP/Vista/7/8, or Mono on Linux.) New Projects: 7COM0207 DIY Wedding Cake: DIY Wedding Cake SiteAnt: AntCms.ArduUtilityLibrary for Arduino: ArduUtilityLibrary (AUL) is a library to assist the development of Arduino-based programs.belloscoursework: Coursework to create a web 2.0 website that supports content creation.Code Jumper: Code-Jumper allows you to navigate your code by displaying a map of your declarations on the side panel of your editor. Dean's Web Scripting & Content Creation Project: Web Scripting & Content Creation Project for MSc Computer Science at Herts University.Distributed System for Medical Providers: This is a draft circulated for medical supplies providersDockPanel Suite VS2012 Look: Dock Panel Suite Control C# Visual Studio 2012 Design LookEntity Framework Json Serializer: Solves the circular reference problem when using Entity Framework with ASP.NET MVC JsonResult.ExpressGrid: A javascript gridFimClient: Our library - Predica.FimCommunication - for talking to FIM (Forefront Identity Manager) web servicesGetDev.NET - Mvc Talk: Sample code for a local user group talk about ASP.NET MVCHelp Desk: A system for testing MVC 3 (translated from Portuguese)HospitalManagementSystem: Summary: This system is to handle channeling and lab reportsjQuery Filedrop: In this demo I will demonstrate using HTML5-compatible browsers to drag and drop files and save the content into a SQL 2012 database. JS DNN Task Manager: This is a DNN tutorial to create a task manager projectKinect - Finger and gesture recognition: Find fingertips and pointing direction, record and recognize finger gestures. All this with the depth stream from the Kinect sensor.MarkusUtility: Utilities used in other projectsMASSIVE DATA TRANSFER OPTIMIZATIONS: This project is used to find an efficient way to transfer massive data via TCP/IP. Minesweeper by S. Joshi: A user-created version of Minesweeper.NETFOX CATAN: very goodPrestazioni e affidabilità: Performance and reliability project for the computer science course, Ca' Foscari University of Venice (translated from Italian)Proyecto Mammut: This is a project whose goal is to georeference the commercial offering of the city of Manta, Ecuador through reference points based on T. Público [public transit] (translated from Spanish).
Razor Exercise 1: Just an exercise....Sannel Helpers: Various extension methods I have created.Service Pipeline: A simple library for implementing a service pipeline with your existing service contracts.Set Last Access: Simple console client which scans a folder tree and sets the LastAccess time from the times of the newest item in the folderSharePoint Web Part Replacement: A set of classes and a sample feature receiver used to replace web parts on a SharePoint site collection (SPSite). Sitecore Image Placeholder: The Sitecore Image Placeholder module helps content editors by displaying placeholders with the correct image size when a field of type Image is empty.SlidePuzzle: A slide puzzle.StackAttack: StackAttack is a .NET client for use with the Stack Exchange API v2.1.testASPsite: Nothing of interesttestdd18102012git01: dtestdd18102012tfs02: stestddgit10182012git03: tTransform Manager Task for creating assets in Windows Azure Media Services: Task for Transform Manager that creates assets in Windows Azure Media Services and requests a stream locator. Used to push assets into the cloud for streaming.Trie for C#: Implementing the Trie structure in C#/.Netwsccm11aah: My project for the Web Scripting and Content Creation module, submitted to Steve Bennet and Mariana Lilleywuggi: Wuggi is a simple website that focuses on its users' input to enrich its content. Discussions and feedback are always welcomed on Wuggi.YahtzeePC: A Yahtzee emulator for the PC.

    Read the article

  • How to handle wildly varying rendering hardware / getting baseline

    - by edA-qa mort-ora-y
    I've recently started with mobile programming (cross-platform, also with desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free in a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing which features will cause which performance issues across all the devices. So my first issue is that even if I allow configuring options, I'm uncertain which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether I should have two distinct options (high/low). I'm also unsure of where to set the default. If I set it for the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set it even at some moderate point, the low-end devices will basically become a slide-show. I was thinking perhaps that I just run some benchmarks when the user first installs and guess what works, but I've not seen a game do this before.
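
    (A first-run calibration need not be fancy. A minimal sketch in C# — the thresholds and tier names are illustrative guesses, not vetted values: render a representative scene for a few seconds, record frame times, and derive a default tier from the median:)

        using System.Collections.Generic;

        enum QualityTier { Low, Medium, High }

        // Sketch: record frame times during a short first-run benchmark
        // scene, then pick a default tier from the median (robust to spikes).
        sealed class AutoQuality
        {
            readonly List<double> frameMs = new List<double>();

            public void RecordFrame(double elapsedMs)
            {
                frameMs.Add(elapsedMs);
            }

            public QualityTier Choose()
            {
                frameMs.Sort();
                double median = frameMs[frameMs.Count / 2];
                if (median < 17.0) return QualityTier.High;    // ~60 fps
                if (median < 33.0) return QualityTier.Medium;  // ~30 fps
                return QualityTier.Low;
            }
        }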

    Read the article

  • We have moved to larger offices

    - by Chris Houston
    First of all, we should probably apologise for the complete lack of blogging over the last 6 months! As web developers we are constantly telling our clients that they should keep their blogs up to date, and it seems we have been ignoring our own advice. That being said, we have been very busy moving offices and helping our new host QV Offices set up their new business. As well as all the moving, we have not been sitting on our hands: we have built the new site for DairyMaster over in Ireland, as well as a separate private website for their global distributor network. As Umbraco Gold Partners we have found more and more that we are working on projects where we are the silent development partners, so although we cannot talk publicly about a lot of the sites we develop, we have some real beauties now in our portfolio :) Now that the dust has settled in our new office (and has been hoovered up!) we are ready for the new year and are looking forward to working on some exciting projects that are currently in the pipeline. We are also intending to run some hacking sessions for Umbraco, as we now have lots of space for developers to come and work with us - so if you have any ideas for a theme for an Umbraco Hackathon, do let us know. And with that it just remains to say Happy Christmas to you all, and see you in the new year!

    Read the article

  • Java Applet or Unity3D for Cross-Platform 3D Surveying App

    - by Jake M
    Do you think a Java applet or a Unity3D application is the best option for making a cross-browser 3D web app? I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites including contours, water pipeline locations, buildings, etc. The application must work on desktop Windows, Android, iOS and Windows Phone. This is why I am tending towards a web app as opposed to a cross-platform smart-phone library (like MoSync or Marmalade). The 3D environments will be navigable (by dragging around) and contain simple (not detailed) 3D objects like buildings, mountains, pipelines, etc. One thing I know is that WebGL is out, because it doesn't work on IE and has limited support on smart phones (am I correct to completely disregard WebGL?). Will future smart-phone browsers continue to support Java applets? Also, is it really true that I can write ONE application/game in Unity3D and simply compile it to run on Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android? Would you suggest the Unity3D application path or the Unity3D Web Player path? Concerning Unity3D, there's one thing I am unsure about: do all Unity3D features work on iOS and Android?

    Read the article

  • What relationship do software Scrum or Lean have to industrial engineering concepts like theory of constraints?

    - by DeveloperDon
    In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of workflow in which software work product does not accumulate in development awaiting release at some future date. Both permit new or refined requirements, and feedback from QA and customers, to be acted on with as little delay as possible, based on priority. A few years ago I heard a lecture where the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next. Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? Drum = {daily scrum meeting, monthly release}? Buffer = {burn-down list, source control system}? Rope = {daily meeting, continuous integration server, monthly releases}? Industrial engineers define work flow in terms of different kinds of factories: I-factories (straight pipeline: one input, one output), A-factories (many inputs and one output), V-factories (one input, many output products), and T-plants (many inputs, many outputs). If it applies, what kind of factory is most like Scrum or Lean, and why?

    Read the article

  • What is the correct and most efficient approach to streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS): Initialization: create two VBOs and one index buffer (since I will use the same indices), same size; create two VAOs and configure them, both bound to the same index buffer. Each frame: choose a VBO/VAO couple (different from the previous frame, so I'm alternating); bind that VBO; upload new data using glBufferSubData(GL_ARRAY_BUFFER, ...); bind the VAO; render my stuff using glDrawElements(GL_***, ...); unbind the VAO. However, someone told me to avoid uploading data (step 3) and then immediately rendering the new data (step 5). I should avoid this because the glDrawElements call will stall until the buffer is effectively uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame, and uploading in the current frame what will be drawn in the next frame. Thus, everything is rendered with a delay of one frame. Is this true, or am I using the right approach to work with streaming vertex data? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
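
    (For concreteness, the suggested variant — upload into one buffer while drawing from the other — might look like this sketch in GLES 2.0-style C; names assumed, VAOs via the OES extension as on iOS:)

        /* Sketch (names assumed): upload NEXT frame's data into one VBO while
         * drawing the PREVIOUS frame's data from the other, so glDrawElements
         * never sources a buffer that was just written. */
        static GLuint vbos[2], vaos[2];
        static int cur = 0;

        void frame(const void *nextData, GLsizeiptr size, GLsizei indexCount) {
            cur ^= 1;                                    /* flip each frame        */
            glBindBuffer(GL_ARRAY_BUFFER, vbos[cur]);
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, nextData);

            glBindVertexArrayOES(vaos[1 - cur]);         /* draw last frame's data */
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
            glBindVertexArrayOES(0);
        }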

    Read the article

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two concatenation functions: CONCAT and CONCAT_WS. The CONCAT function simply concatenates all the argument values: SELECT CONCAT('Television','Mobile','Furniture'); The above code returns the following: TelevisionMobileFurniture If you want to concatenate them with a comma, you either need to specify the comma at the end of each value, or pass the comma as an argument along with the values: SELECT CONCAT('Television,','Mobile,','Furniture'); SELECT CONCAT('Television',',','Mobile',',','Furniture'); Both of the above return the following: Television,Mobile,Furniture However, you can omit the extra work by using the CONCAT_WS function. It stands for Concatenate With Separator. It is very similar to the CONCAT function, but accepts the separator as the first argument: SELECT CONCAT_WS(',','Television','Mobile','Furniture'); The result is: Television,Mobile,Furniture If you want a pipe as the separator, you can use: SELECT CONCAT_WS('|','Television','Mobile','Furniture'); The result is: Television|Mobile|Furniture So CONCAT_WS is very flexible in concatenating values along with a separator. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
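
    One more difference worth knowing (standard MySQL behavior): CONCAT returns NULL if any argument is NULL, while CONCAT_WS simply skips NULL values:

        SELECT CONCAT('Television', NULL, 'Furniture');
        -- NULL

        SELECT CONCAT_WS(',', 'Television', NULL, 'Furniture');
        -- Television,Furniture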

    Read the article

  • Clustering and custom applications

    - by Ahmed ilyas
    I was not entirely sure what tags to put, but I hope this is OK. This is just a general question in regards to clustering and applications: let's say we have a clustered environment set up. We cluster SQL Server (I don't know exactly how it's done, but let's just say it's been done for the sake of argument). Now if a website or application is trying to access that database for read/write (say an ASP.NET app or a C# WinForms app) and during that time SQL goes down - it takes a couple of minutes for the clustering failover to take effect and switch to another node. What happens during this time? I think it will time out/be unable to connect. BUT is there a way for it to place the request in some pipeline so that when the cluster node is back up/switched over it will continue as normal? As you can see, I know nothing much about clustering! What about your own custom .NET apps? Would there be a special way to develop them? I know that you can, say, create a simple Hello World app and cluster that, but it wouldn't be something you could see in terms of the UI or anything, so it would effectively need to be developed as a Windows service, or even as a standard console app which runs without waiting for user input, but from which you wouldn't see any output (unless you redirect output somewhere else). What I'm getting at here is... for those who have experience with, or have developed, a clustered application in .NET, how did you do it, and what are the things to be aware of? For example, we have the cloud service - fundamentally it's built on clustering - if there is an outage, another node takes its place and service is resumed as normal, but we don't really see much of that downtime.
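
    (For the "place the request in some pipeline" part, the standard client-side pattern — a sketch, not a cluster-specific API — is retry-with-backoff around the connection attempt, so requests issued during the failover window are retried rather than lost; the retry count and delays are illustrative:)

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static SqlConnection OpenWithRetry(string connectionString)
        {
            TimeSpan delay = TimeSpan.FromSeconds(2);
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    var conn = new SqlConnection(connectionString);
                    conn.Open();
                    return conn;                 // cluster is (back) online
                }
                catch (SqlException)
                {
                    if (attempt >= 5) throw;     // give up after a few tries
                    Thread.Sleep(delay);         // wait out the failover window
                    delay += delay;              // simple exponential backoff
                }
            }
        }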

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image-processing workflows that can automatically resize, change formats, or apply watermarks. That, however, is not the subject of this post. What I want to show here is one of the underlying APIs that enable that feature, which comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and a size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image and writing that image to disk the first time the shape is rendered: <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/> Notice how I only specified a maximum width. The height could of course be specified, but in this case it will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, it will notice that the resized file already exists on disk and will serve it directly, so caching is handled automatically and the image can be served almost as fast as the original, because it too is a static image. Only the URL generation and the check for the file's existence take time. Here is what the generated thumbnails look like on disk: In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full-size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale version. This is an extremely useful tool to use in your themes to easily render images of exactly the right size, and thus limit your bandwidth consumption. Mobile users will thank you for it.
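
    (If you need both dimensions constrained, the shape takes them together. A hedged example — parameter names as I understand Orchard 1.7's media processing, with Mode assumed to accept values such as "crop" or "pad":)

        <img src='@Display.ResizeMediaUrl(Path: img, Width: 100, Height: 100, Mode: "crop")'/>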

    Read the article

  • There is No Scrum without Agile

    - by John K. Hines
    It's been interesting for me to dive a little deeper into Scrum after realizing how fragile its adoption can be.  I've been particularly impressed with James Shore's essay "Kaizen and Kaikaku" and the Net Objectives post "There are Better Alternatives to Scrum" by Alan Shalloway.  The bottom line: you can't execute Scrum well without being Agile. Personally, I'm the rare developer who has an interest in project management.  I think the methodology for delivering software is interesting, and there are many roles whose job exists to make software development easier.  As a project lead I've seen Scrum deliver for disciplined, highly motivated teams with solid engineering practices.  It definitely made my job an order of magnitude easier.  As a developer I've experienced huge rewards from having a well-defined pipeline of tasks that were consistently delivered with high quality in short iterations.  In both of these cases Scrum was an addition to a fundamentally solid process and a huge benefit to the team. The question I'm now facing is how Scrum fits into organizations without solid engineering practices.  The trend that concerns me is one of Scrum being mandated as the single development process across teams where it may not apply.  And we have to realize that Scrum itself isn't even a development process.  This is what worries me the most - the assumption that Scrum on its own increases developer efficiency, when it is essentially an exercise in project management. Jim's essay quotes Tobias Mayer writing, "Scrum is a framework for surfacing organizational dysfunction."  I'm unsure whether a Vice President of Software Development wants to hear that, reality notwithstanding.  Our Scrum adoption has surfaced a great deal of dysfunction, but I feel the original assumption was that we would experience increased efficiency.  It's starting to feel like a blended approach - Agile/XP techniques for developers, Scrum for project managers - may be a better fit.  Or at least, a better way of framing the conversation. The blended approach. Technorati tags: Agile Scrum

    Read the article
