Search Results

Search found 11509 results on 461 pages for 'description logic'.


  • What is the start point in game development? Where to start?

    - by Dragon
    I understand I'm not unique with such a question; there are a lot of questions like this one. But I hope you'll take a minute and maybe give me a piece of advice. I have an idea to develop games, but I don't know where the starting point in game development is. The learning curve isn't as straight as when learning a programming language, but I want to give it a try. I have some experience with OOP and programming in general. I know (not too deeply) the C# and Java programming languages. I searched for info on where to start, and read a lot of blogs, forums and so on. At some point I decided to "stop wandering around, just start developing a game", and I started. At the moment I have a console version of a very simple game (RPS - rock-paper-scissors) developed with C#. It has different modes: "player vs cpu" and "player vs player". Some time later I looked at the code and decided that it should be refactored or even redeveloped from scratch. And I thought at the time, "A GUI is what I need. I can add logic later." And now I'm here. I've already decided to make RPS with a GUI, then make it multiplayer and so on. I'm not thinking about 3D now; 2D is enough. It doesn't matter which language to use, C# or Java: I found frameworks for both - XNA (C#) and Slick (Java) - and both are good for 2D game development. But I know nothing about sprites, how to bind objects on the screen and so on. You can say "you don't need that for such a simple game as RPS", but RPS is only the beginning; I have some ideas like a "Tower Defense" game... you know, everybody has ideas and wishes... and this knowledge is useful and in some ways obligatory. So what is the starting point for achieving my plans, ideas and wishes? Where do I start? Is it possible to make the game development learning curve a little bit straighter? Or are there approaches that amateur game developers and beginners have used for years? Thank you in advance for your answers and advice. P.S. Sorry that this post turned out to be an essay, but I tried to express my wish to start acting. I hope I managed to do it.

    Read the article

  • Where and how to reference composite MVP components?

    - by Lea Hayes
    I am learning about the MVP (Model-View-Presenter) Passive View flavour of MVC. I intend to expose events from view interfaces rather than using the observer pattern, to remove explicit coupling with the presenter. Context: Windows Forms / client-side JavaScript. I am led to believe that the MVP (or indeed MVC in general) pattern can be applied at various levels of a user interface, ranging from the main "Window" to an embedded "Text Field". For instance, the model for the text field is probably just a string, whereas the model for the "Window" contains application-specific view state (like a person's name, which resides within the contained text field). Given a more complex scenario - a documentation viewer which contains:

    - TOC navigation pane
    - Document view
    - Search pane

    Since each of these 4 user interface items is complex and can be reused elsewhere, it makes sense to design them using MVP. Given that each of these user interface items comprises 3 components: which component should be nested? Where? Who instantiates them?

    Idea #1 - Embed View inside View from Parent View

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            public DocumentationViewer()
            {
                ...
                // Unclear as to how model and presenter are injected...
                TocPane = new TocPaneView();
            }

            protected ITocPaneView TocPane { get; private set; }
        }

    Idea #2 - Embed Presenter inside View from Parent View

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            public DocumentationViewer()
            {
                ...
                // This doesn't seem like view logic...
                var tocPaneModel = new TocPaneModel();
                var tocPaneView = new TocPaneView();
                TocPane = new TocPanePresenter(tocPaneModel, tocPaneView);
            }

            protected TocPanePresenter TocPane { get; private set; }
        }

    Idea #3 - Embed View inside View from Parent Presenter

        public class DocumentationViewer : Form, IDocumentationViewerView
        {
            ...
            // Part of IDocumentationViewerView:
            public ITocPaneView TocPane { get; set; }
        }

        public class DocumentationViewerPresenter
        {
            public DocumentationViewerPresenter(DocumentationViewerModel model, IDocumentationViewerView view)
            {
                ...
                var tocPaneView = new TocPaneView();
                var tocPaneModel = new TocPaneModel(model.Toc);
                var tocPanePresenter = new TocPanePresenter(tocPaneModel, tocPaneView);
                view.TocPane = tocPaneView;
            }
        }

    Some better idea...
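    Since the question mentions exposing events from view interfaces, the hedged sketch below shows what one child triad might look like when wired that way; the parent presenter from Idea #3 would then only need to construct these three objects and assign the child view to its own view. Only the type names come from the question - the members (TopicSelected, ShowTopics, SelectTopic, Topics) are invented for illustration.

        using System;
        using System.Collections.Generic;

        // Passive child view: raises plain events and exposes display methods; it knows
        // nothing about the model or the presenter.
        public interface ITocPaneView
        {
            event Action<string> TopicSelected;
            void ShowTopics(IEnumerable<string> topics);
        }

        public class TocPaneModel
        {
            private readonly List<string> topics = new List<string> { "Introduction", "API Reference" };
            public IEnumerable<string> Topics { get { return topics; } }
            public void SelectTopic(string topic) { /* load the document for 'topic' */ }
        }

        public class TocPanePresenter
        {
            private readonly TocPaneModel model;
            private readonly ITocPaneView view;

            public TocPanePresenter(TocPaneModel model, ITocPaneView view)
            {
                this.model = model;
                this.view = view;

                // Subscribe to view events rather than having the view call back into the presenter.
                view.TopicSelected += topic => model.SelectTopic(topic);
                view.ShowTopics(model.Topics);
            }
        }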

    Read the article

  • Oracle releases new Java Embedded products

    - by Henrik Stahl
    With less than one week to go to JavaOne 2012, we've spiced things up a little by releasing not one but two net new embedded Java products. This is an important step towards realizing the vision of Java as the standard platform for the Internet of Things that I outlined in a recent blog post. The two new products are: Java ME Embedded 3.2. Based on same code as the widely deployed Oracle Java Wireless Client for feature phones, this new product provides a Java ME implementation optimized for very small microcontroller-based devices and adds - among other things - a new Device Access API that enables interaction with peripherals common in edge devices such as various types of sensors. In addition to the new Java ME Embedded platform, we have also released an update of the Java ME SDK which adds support for the development of small embedded devices. Java Embedded Suite 7.0. This is an integrated middleware stack for embedded devices, incorporating Java SE Embedded and versions of JavaDB, GlassFish and a Web Services stack optimized for remote operation and small footprint. A typical Internet of Things (or M2M) infrastructure contains three types of compute nodes: The edge device which is typically a sensor or control point of some kind. These devices can be connected directly to a backend through a mobile network if they are installed in - for example - a remote vending machine; or, they can be part of a local short-range network and be connected to the backend through a more powerful gateway device. A gateway is the second type of compute node and acts as an aggregator and control point for a local network. A good example of this could be a generalized home Internet access point, or home gateway. Gateways are mostly using normal wall power and are used for multiple applications, deployed by multiple service providers. Finally, the last type of compute node is the normal enterprise or cloud backend. Java ME Embedded and Java Embedded Suite are perfect base software stacks for the edge devices and the gateway respectively, providing the Java promise of a platform independent runtime and a complete set of libraries as well as allowing a programmer to focus on the business logic rather than plumbing. We are very thrilled with these new releases that open up exciting opportunities for Java developers to extend services and enterprise applications in ways that will make organizations more efficient and touch our daily lives. To find out more, come to the JavaOne conference (for technical content) and to the Java Embedded @ JavaOne subconference (for business content). There will be plenty of cool demos showing complete end-to-end applications, provided by Oracle and our partners, as well as keynotes and numerous sessions where you can learn more about the technology and business opportunities.

    Read the article

  • Help for choosing a cost effective game server for Flash client

    - by Sapots Thomas
    I am developing a flash-based game primarily for desktops, to be hosted on facebook platform (like cityville, sims social etc). The gameplay doesn't involve real-time communication between players unlike an mmorpg. Here each player plays in his own world without any knowledge of other online players. I've written almost 95% of the game logic in actionscript on the client side. I used Smartfox Server pro on the server side (mostly used for getting data from the DB) and the entire server code is an extension written in java. I'm using json as the protocol for communication. Although I love smartfox server, as an indie, its tough for me to afford the unlimited users license. Morever its limited just to one machine. So I'm looking for an alternative to smartfox server now. The reason for choosing smartfox server earlier was to use the server properties supported by it. Server properties on smartfox server take advantage of the socket connection and are essentially server side objects in java which store some data for the player which he can change frequently during the game. And when he logs out of the game, the extension can write out the final state in the DB (I'm using MySQL). This significantly reduces the number of DB UPDATE/INSERT calls made during the game. I love the way this works since the data is secure as its on the server side and smartfox server is known to be scalable. (although I'm not sure whether this approach is used widely by gaming industry or not, since this is not an mmorpg, I'm putting all player in the lobby). So my question is whether any of the free and community supported servers like reddwarf, firebase, BlazeDS etc can provide a similar architecture so that I can use server properties without many code changes? EDIT : I am not insisting on the exact same feature (thats asking too much!), but atleast a viable messaging system on the server so that I can send actionscript objects from the client using json/binary so that its fast. OR maybe some completely different way to implement what I need here. Thanks in advance.

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but you can also check that the logical integrity of your business rules is enforced. For example, let's say that the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line has been discontinued and will never be sold again. OK, we know that in theory this is true, but much of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices and on the ERP implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will be solved in the future using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, and if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information about data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I'll be speaking about this approach (and more generally about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all of this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))
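    To make the idea concrete, a business-rule check of the kind described above might look roughly like the NUnit test below. This is only a sketch: the connection string and the FactInvoice/ProductLine/InvoiceYear schema are invented for the example and are not part of QueryUnit.

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class BusinessRuleTests
        {
            // Hypothetical warehouse connection string, for illustration only.
            private const string ConnectionString =
                "Data Source=.;Initial Catalog=MyDwh;Integrated Security=True";

            [Test]
            public void DiscontinuedProductLine_HasNoInvoicesIn2010()
            {
                const string sql =
                    "SELECT COUNT(*) FROM FactInvoice " +
                    "WHERE ProductLine = 'DiscontinuedLine' AND InvoiceYear = 2010";

                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    int invoiceCount = (int)command.ExecuteScalar();

                    // The business rule says there must be no such invoices at all.
                    Assert.AreEqual(0, invoiceCount);
                }
            }
        }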

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you were explaining the query to the ORM and letting it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs, or database abstraction layers in general, was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail. Now fast forward to the current day, where micro ORMs seem to be winning over more and more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make the code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.) So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language, but with, let's say, C# and SQL in one page you have two languages intermingled in your raw source code. Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned that you can stop it from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction as well as losing any level of compile-time error capturing on string-based queries. These are all issues which we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individually highlighted issues and more on the bigger picture of whether it is now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism. Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
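    For reference, a parameterised Dapper query of the kind mentioned above looks roughly like the sketch below. It assumes Dapper's Query<T> extension method over an open IDbConnection; the Customer type and Customers table are invented for the example.

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;
        using Dapper;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerQueries
        {
            public static IList<Customer> GetByCountry(string connectionString, string country)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // The SQL string lives in the source code, but the value is passed as a
                    // named parameter rather than concatenated, which avoids SQL injection.
                    return connection.Query<Customer>(
                        "SELECT Id, Name FROM Customers WHERE Country = @Country",
                        new { Country = country }).ToList();
                }
            }
        }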

    Read the article

  • How to shoot a triangle out of an asteroid which floats all of the way up to the screen?

    - by Holland
    I currently have an asteroid texture loaded as my "test player" for the game I'm writing. What I'm trying to figure out how to do is get a triangle to shoot from the center of the asteroid, and keep going until it hits the top of the screen. What happens in my case (as you'll see from the code I've posted), is that the triangle will show, however it will either be a long line, or it will just be a single triangle which stays in the same location as the asteroid moving around (that disappears when I stop pressing the space bar), or it simply won't appear at all. I've tried many different methods, but I could use a formula here. All I'm trying to do is write a space invaders clone for my final in C#. I know how to code fairly well, my formulas just need work is all. So far, this is what I have:

    Main Logic Code

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 1, 1);

            mAsteroid.Draw(mSpriteBatch);

            if (mIsFired)
            {
                mPositions.Add(mAsteroid.LastPosition);
                mRay.Fire(mPositions);
                mIsFired = false;
                mRay.Bullets.Clear();
                mPositions.Clear();
            }

            base.Draw(gameTime);
        }

    Draw Code

        public void Draw()
        {
            VertexPositionColor[] vertices = new VertexPositionColor[3];
            int stopDrawing = mGraphicsDevice.Viewport.Width / mGraphicsDevice.Viewport.Height;

            for (int i = 0; i < mRayPos.Length(); ++i)
            {
                vertices[0].Position = new Vector3(mRayPos.X, mRayPos.Y + 5f, 10);
                vertices[0].Color = Color.Blue;
                vertices[1].Position = new Vector3(mRayPos.X - 5f, mRayPos.Y - 5f, 10);
                vertices[1].Color = Color.White;
                vertices[2].Position = new Vector3(mRayPos.X + 5f, mRayPos.Y - 5f, 10);
                vertices[2].Color = Color.Red;

                mShader.CurrentTechnique.Passes[0].Apply();
                mGraphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, vertices, 0, 1);

                mRayPos += new Vector2(0, 1f);
                mGraphicsDevice.ReferenceStencil = 1;
            }
        }
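    A common way to structure this kind of projectile - offered here only as a hedged sketch of the general technique, not as a drop-in fix for the code above - is to keep a list of active bullets, advance their positions in Update rather than in Draw, and remove each one once it leaves the top of the screen. The Bullet and BulletSystem types below are invented for illustration.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class Bullet
        {
            public Vector2 Position;
            public Vector2 Velocity;
        }

        public class BulletSystem
        {
            private readonly List<Bullet> bullets = new List<Bullet>();

            public void Fire(Vector2 origin)
            {
                // Spawn at the centre of the asteroid, moving straight up the screen
                // (negative Y in screen coordinates).
                bullets.Add(new Bullet { Position = origin, Velocity = new Vector2(0f, -300f) });
            }

            public void Update(GameTime gameTime)
            {
                float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

                for (int i = bullets.Count - 1; i >= 0; --i)
                {
                    bullets[i].Position += bullets[i].Velocity * dt;

                    // Drop the bullet once it has passed the top of the screen.
                    if (bullets[i].Position.Y < 0f)
                        bullets.RemoveAt(i);
                }
            }

            // Draw code then only reads positions; it never moves anything.
            public IEnumerable<Bullet> Active { get { return bullets; } }
        }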

    Read the article

  • It's called College.

    - by jeffreyabecker
    Today I saw yet another 'GUID vs int as your primary key' article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything, children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key.

    Fallacy 1: "It's easier". An integer data type (tinyint, smallint, int, bigint) is a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp.

    Fallacy 2: "Size matters". For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int has better performance than joining against a GUID. In computer science the performance of your algorithm is always a function of the number of data points. This still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that performance becomes an issue when you start having billions of rows in the table. At that point you should probably start looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think.

    GUID advantages:

    - Insert-ability / mergeability: you can reliably insert GUIDs into tables without key collisions.
    - Database independence: saving entities to the database often requires knowing ids. With identity-based ids the id must be selected back after every insert. GUIDs can be generated application-side, allowing much faster inserts.

    GUID disadvantages:

    - Generatability: you can calculate the next id for an integer PK pretty easily in your head, but you will need a program to generate GUIDs. Solution: "Select top 100 newid() from sysobjects".
    - Fragmentation: most GUID generation algorithms generate pseudo-random GUIDs. This can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid() or use GuidComb in NHibernate.
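    For the application-side option, the "GuidComb" idea can be sketched roughly as below. This is a hedged approximation of the COMB approach popularised by Jimmy Nilsson and used by NHibernate's guid.comb generator: the last six bytes of a random GUID are overwritten with a timestamp so that values sort roughly in insert order under SQL Server's uniqueidentifier ordering. Treat it as illustrative rather than a drop-in implementation.

        using System;

        public static class CombGuid
        {
            private static readonly DateTime BaseDate = new DateTime(1900, 1, 1);

            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();
                DateTime now = DateTime.UtcNow;

                // Days since the SQL Server datetime epoch, and time of day in ~1/300s ticks,
                // mirroring the byte layout of the SQL datetime type.
                TimeSpan days = new TimeSpan(now.Ticks - BaseDate.Ticks);
                TimeSpan time = now.TimeOfDay;

                byte[] daysBytes = BitConverter.GetBytes(days.Days);
                byte[] timeBytes = BitConverter.GetBytes((long)(time.TotalMilliseconds / 3.333333));

                // SQL Server orders uniqueidentifiers by their last byte group first, so the
                // timestamp goes into the final six bytes, most significant byte first.
                Array.Reverse(daysBytes);
                Array.Reverse(timeBytes);
                Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, guidBytes.Length - 6, 2);
                Array.Copy(timeBytes, timeBytes.Length - 4, guidBytes, guidBytes.Length - 4, 4);

                return new Guid(guidBytes);
            }
        }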

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use, and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming this information into objects that I can use programmatically for other purposes. This consists of two steps:

    1. Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want - no connection logic or anything like that)
    2. Map the parsed data to my own [C#] objects

    There are basically two ways I can approach this:

    One call from multiple methods

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<MyObject> myObjects = GetMyObjects();
            // Do other stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            List<MyObject> mappedObjects = new List<MyObject>();
            // do stuff to map the CmsDataItems to MyObjects
            return mappedObjects;
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            List<CmsDataItem> cmsDataItems = new List<CmsDataItem>();
            // do stuff to get the CmsDataItems I want
            return cmsDataItems;
        }

    Multiple calls from one method

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems);
            // do stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap)
        {
            // ...
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            // ...
        }

    I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and it is explicit in the calling method which steps are executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure if it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved from one line - most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy-chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference, or context dependent?

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
    --TL;DR

    - Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects?
    - Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single "hello_world.asp" file or something (like in PHP)?
    - Am I completely mistaken and should I be looking somewhere else, maybe this? I'm interested in the MVC framework, not Web Forms.

    --Background

    I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development has become relevant for me once again. I'm no professional, but I try to gain as much knowledge and control over the technology I'm working with as possible. I'm using Visual Studio 2012 for C# - my "desktop" language of choice - and since I got the Professional Edition from Dreamspark, the web development tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want but... not quite. Learning PHP was easy - and right from the start I could just create a "hello_world.php" file and do something like this for immediate results:

        <!-- file: hello_world.php -->
        <?php
            echo "Hello World!";
        ?>

    But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that would start like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the "Empty" project template for a new ASP.NET MVC 4 application in VS2012 is not empty at all: several files and folders are created for you - much like a new C# desktop application project, but with C# I can in fact start from scratch, creating the project structure myself. It is not the case with PHP:

    - I can choose from a plethora of different MVC frameworks
    - I can just create my own framework
    - I can just skip frameworks altogether, and toss random PHP along with my HTML into a single file and make it work

    I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials, mainly because of this reason, but if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)
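    For what it's worth, the smallest unit ASP.NET MVC can realistically be stripped down to is a single controller plus a route registration - the framework still owns the request pipeline, so there is no true single-file equivalent of hello_world.php. A hedged sketch of that minimum (ASP.NET MVC 4 style; the names follow the framework's conventions) might look like this:

        using System.Web.Mvc;
        using System.Web.Routing;

        // With the default route below, "/" or "/Home/Index" reaches this action purely by
        // naming convention: "HomeController" + "Index".
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return Content("Hello World!");
            }
        }

        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }

    RegisterRoutes still has to be called from Application_Start in Global.asax, which is the part of the plumbing the project templates generate for you.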

    Read the article

  • Creating predefinied camera views - How do I move the camera to make sense while using Controller?

    - by Deukalion
    I'm trying to understand 3D, but the one thing I can't seem to understand is the camera. Right now I'm rendering four 3D cubes with textures, and I set the projection matrix:

        public BasicCamera3D(float fieldOfView, float aspectRatio, float clipStart, float clipEnd,
                             Vector3 cameraPosition, Vector3 cameraLookAt)
        {
            projection_fieldOfView = MathHelper.ToRadians(fieldOfView);
            projection_aspectRatio = aspectRatio;
            projection_clipstart = clipStart;
            projection_clipend = clipEnd;
            matrix_projection = Matrix.CreatePerspectiveFieldOfView(projection_fieldOfView, aspectRatio, clipStart, clipEnd);

            view_cameraposition = cameraPosition;
            view_cameralookat = cameraLookAt;
            matrix_view = Matrix.CreateLookAt(cameraPosition, cameraLookAt, Vector3.Up);
        }

        BasicCamera3D gameCamera = new BasicCamera3D(45f, GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000f,
                                                     new Vector3(0, 0, 8), new Vector3(0, 0, 0));

    This creates a sort of "top-down" camera at a distance of 8 (I still don't get the unit type here - it's not pixels, I guess?). But if I try to position the camera at the side to make a "side view" or "reverse side view" camera, the camera rotates too much, until it has turned around completely a couple of times. I render the boxes at:

    - new Vector3(-1, 0, 0)
    - new Vector3(0, 0, 0)
    - new Vector3(1, 0, 0)
    - new Vector3(1, 0, 1)

    With the top-down camera this shows up fine, but I don't get how I can make the camera show the side, or 45 degrees from the top (like 3rd-person action games), because the logic doesn't make sense to me. Also, since every object you render needs a new BasicEffect with a new projection/view/world, can you still always use the "same" camera so you don't have to create a new view matrix and such for each object? It seems weird. If someone could help me get the camera to navigate around my objects "naturally", so that I can set a few predetermined views to choose from, it would be really helpful. Is there some sort of algorithm to calculate the view for this, and perhaps not simply one value?

    Examples:

    Top-down view: I have an object at 0, 0, 0; when I turn the right stick on the Xbox 360 controller it should rotate around that object, kind of - not flip and turn upside down, disappear and then magically appear as a tiny dot somewhere I have no clue where, like it feels like it does now.

    Side view: I have an object at 0, 0, 0; when I rotate to the sides or up and down, the camera should be able to show a little more of the periphery on each side (depending on which you look at), and the same while moving up or down.
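    A common way to get the "rotate around the object without flipping" behaviour is to store the camera as a yaw/pitch pair plus a distance from the target, clamp the pitch, and rebuild the view matrix every frame. The sketch below assumes XNA's Matrix/Vector types; the class and field names are invented for the example. Predefined views then become nothing more than stored (yaw, pitch, distance) triples you can switch between.

        using Microsoft.Xna.Framework;

        public class OrbitCamera
        {
            public Vector3 Target = Vector3.Zero;
            public float Distance = 8f;
            public float Yaw;    // radians, driven by right stick X
            public float Pitch;  // radians, driven by right stick Y

            public Matrix View { get; private set; }

            public void Update(Vector2 rightStick, float dt)
            {
                Yaw += rightStick.X * dt;
                Pitch += rightStick.Y * dt;

                // Clamp the pitch so the camera never flips over the top of the object.
                Pitch = MathHelper.Clamp(Pitch, -MathHelper.PiOver2 + 0.1f, MathHelper.PiOver2 - 0.1f);

                // Start at a fixed offset behind the target and rotate that offset by yaw/pitch.
                Vector3 offset = Vector3.Transform(
                    new Vector3(0, 0, Distance),
                    Matrix.CreateFromYawPitchRoll(Yaw, Pitch, 0));

                View = Matrix.CreateLookAt(Target + offset, Target, Vector3.Up);
            }
        }

    The same view (and projection) matrix can then be handed to every BasicEffect each frame; the camera itself does not need to be recreated per object, only the world matrix changes.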

    Read the article

  • Designing for an algorithm that reports progress

    - by Stefano Borini
    I have an iterative algorithm and I want to print the progress. However, I may also want it not to print any information, or to print it in a different way, or do other logic. In an object oriented language, I would perform the following solutions: Solution 1: virtual method have the algorithm class MyAlgoClass which implements the algo. The class also implements a virtual reportIteration(iterInfo) method which is empty and can be reimplemented. Subclass the MyAlgoClass and override reportIteration so that it does what it needs to do. This solution allows you to carry additional information (for example, the file unit) in the reimplemented class. I don't like this method because it clumps together two functionalities that may be unrelated, but in GUI apps it may be ok. Solution 2: observer pattern the algorithm class has a register(Observer) method, keeps a list of the registered observers and takes care of calling notify() on each of them. Observer::notify() needs a way to get the information from the Subject, so it either has two parameters, one with the Subject and the other with the data the Subject may pass, or just the Subject and the Observer is now in charge of querying it to fetch the relevant information. Solution 3: callbacks I tend to see the callback method as a lightweight observer. Instead of passing an object, you pass a callback, which may be a plain function, but also an instance method in those languages that allow it (for example, in python you can because passing an instance method will remain bound to the instance). C++ however does not allow it, because if you pass a pointer to an instance method, this will not be defined. Please correct me on this regard, my C++ is quite old. The problem with callbacks is that generally you have to pass them together with the data you want the callback to be invoked with. Callbacks don't store state, so you have to pass both the callback and the state to the Subject in order to find it at callback execution, together with any additional data the Subject may provide about the event is reporting. Question My question is relative to the fact that I need to implement the opening problem in a language that is not object oriented, namely Fortran 95, and I am fighting with my usual reasoning which is based on python assumptions and style. I think that in Fortran the concept is similar to C, with the additional trouble that in C you can store a function pointer, while in Fortran 95 you can only pass it around. Do you have any comments, suggestions, tips, and quirks on this regard (in C, C++, Fortran and python, but also in any other language, so to have a comparison of language features that can be exploited on this regard) on how to design for an algorithm that must report progress to some external entity, using state from both the algorithm and the external entity ?
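    Since the question explicitly invites comparisons from other languages, here is a hedged C# sketch of the "lightweight observer" option: the algorithm only knows about a small reporting interface, and the caller decides whether that reporter prints, stays silent, or carries extra state such as an open file. All names are illustrative. The closest Fortran 95 analogue, as the question notes, is passing a procedure plus an explicit state argument into the solver.

        using System;

        public interface IProgressReporter
        {
            void ReportIteration(int iteration, double residual);
        }

        public sealed class ConsoleReporter : IProgressReporter
        {
            public void ReportIteration(int iteration, double residual)
            {
                Console.WriteLine("iteration {0}: residual = {1}", iteration, residual);
            }
        }

        public sealed class SilentReporter : IProgressReporter
        {
            public void ReportIteration(int iteration, double residual) { /* report nothing */ }
        }

        public class IterativeAlgorithm
        {
            private readonly IProgressReporter reporter;

            public IterativeAlgorithm(IProgressReporter reporter)
            {
                this.reporter = reporter;
            }

            public void Run(int iterations)
            {
                double residual = 1.0;
                for (int i = 0; i < iterations; i++)
                {
                    residual *= 0.5;                       // stand-in for the real iteration work
                    reporter.ReportIteration(i, residual); // the reporter carries its own state
                }
            }
        }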

    Read the article

  • After 10 Years, MySQL Still the Right Choice for ScienceLogic's "Best Network Monitoring System on the Planet"

    - by Rebecca Hansen
    ScienceLogic has a pretty fantastic network monitoring appliance.  So good in fact that InfoWorld gave it their "2013 Best Network Monitoring System on the Planet" award.  Inside their "ultraflexible, ultrascalable, carrier-grade" enterprise appliance, ScienceLogic relies on MySQL and has since their start in 2003.  Check out some of the things they've been able to do with MySQL and their reasons for continuing to use MySQL in these highlights from our new MySQL ScienceLogic case study. Science Logic's larger customers use their appliance to monitor and manage  20,000+ devices, each of which generates a steady stream of data and a workload that is 85% write. On a large system, the MySQL database: Averages 8,000 queries every second or about 1 billion queries a day Can reach 175,000 tables and up to 20 million rows in a single table Is 2 terabytes on average and up to 6 terabytes "We told our customers they could add more and more devices. With MySQL, we haven't had any problems. When our customers have problems, we get calls. Not getting calls is a huge benefit." Matt Luebke, ScienceLogic Chief Software Architect.? ScienceLogic was approached by a number of Big Data / NoSQL vendors, but decided against using a NoSQL-only solution. Said Matt, "There are times when you really need SQL. NoSQL can't show me the top 10 users of CPU, or show me the bottom ten consumer of hard disk. That's why we weren't interested in changing and why we are very interested in MySQL 5.6. It's great that it can do relational and key-value using memcached." The ScienceLogic team is very cautious about putting only very stable technology into their product, and according to Matt, MySQL has been very stable: "We've been using MySQL for 10 years and we have never had any reliability problems. Ever." ScienceLogic now uses SSDs for their write-intensive appliance and that change alone has helped them achieve a 5x performance increase. Learn more>> ScienceLogic MySQL Case Study MySQL 5.6 InnoDB Compression options for better SSD performance Tuning MySQL 5.6 for Great Product Performance - on demand webinar Developer and DBA Guide to MySQL 5.6 white paper Guide to MySQL and NoSQL: The Best of Both Worlds white paper

    Read the article

  • ArchBeat Link-o-Rama for 2012-07-11

    - by Bob Rhubart
    Is the future of retail showrooming? | GigaOm "The digital shopper isn’t just digital and she expects to be served seamlessly across all channels, physical and digital," reports GigaOm. Twenty years into the Internet era and the changes just keep coming. Solution architects take note... Agile Bureaucracy: When Practices become Principles | Jim Highsmith.com "Principles and values are a critical part of keeping individuals in organizations aligned and engaged," says Agile guru Jim Highsmith, "but the more pseudo-principles are piled on top of principles, the less and less organizations are able to adapt." Oracle Fusion Applications 11g Basics | Michel Schildmeijer "We are trying to build up a Oracle Fusion Apps environment on a Exalogic system, though still on bare metal, because officially there still is no Oracle VM available yet on Exalogic," says Michel Schildmeijer, an Oracle Fusion Middleware Architect at Qualogy. "It is a bit of a challenge, but getting to know the basics and which components the install, build and configure phase use, might bring you a step further on the way." Process Centric Banking: Loan Origination Solution | Manish Palaparthy This interesting, detailed post by Manish Palaparthy explains the process behind the execution of a proof-of-concept for a Fusion Middleware-based loan-origination solution for a bank. The solution incorporates Oracle BPM Suite, Webcenter, and ADF technolgies in a SOA infrastructure. How eBay and Facebook are Cleaning Up Data Centers | Amy Gallo - HBR The Cloud has needs! As reported by Amy Gallo in an article in the Harvard Business Review, "The electricity demand of data centers and the telecommunications network is rivaling that of most nations. If the cloud were itself a country, it would rank fifth in the world on energy demand behind the U.S., China, Russia, and Japan." Do WebLogic configuration from ANT | Edwin Biemond "With WebLogic WLST you can script the creation of all your Application DataSources or SOA Integration artifacts( like JMS etc)," says Oracle ACE Edwin Biemond. "This is necessary if your domain contains many WebLogic artifacts or you have more then one WebLogic environment. If so, you want to script this so you can configure a new WebLogic domain in minutes and you can repeat this task with always the same result." Oracle Special-Edition E-Book: Cloud Architecture for Dummies Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale with Cloud Architecture for Dummies, a free Oracle e-book. (Registration required.) Thought for the Day "One of the best things to come out of the home computer revolution could be the general and widespread understanding of how severely limited logic really is." — Frank Herbert Source: SoftwareQuotes.com

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP? Representational State Transfer (REST) implements standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example of a URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as an interface is caching. Caching can be done on a web service much like caching is done on requested web pages. Caching allows for reduced web server processing and increased response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state.

    Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML in the following ways:

    - Defines a message
    - Defines how a message is to be processed
    - Defines the encoding of a message
    - Lays out procedure calls and responses

    Whereas REST aligns more with a resource view, SOAP aligns more with a method view, in that business logic is exposed as methods, typically through a SOAP web service, because they can retain state. In addition, SOAP requests are not cached, therefore every request will be processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited for enterprise-level services that implement standard exchange formats in the form of contracts, due to the fact that REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, automatically retry the transaction, ensuring that the request was completed. Unfortunately, with REST, failed service calls must be handled manually by the requesting application.

    References:
    Francia, S. (2010). SOAP vs. REST. Retrieved November 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest
    Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
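    To make the REST side concrete, a client can reach a resource with nothing more than a plain HTTP GET. The sketch below uses .NET's WebClient and the placeholder URI from the text; any intermediate HTTP cache could serve the response without the origin server reprocessing it.

        using System;
        using System.Net;

        public static class RestClientExample
        {
            public static void Main()
            {
                using (var client = new WebClient())
                {
                    // A stateless GET against a resource-style URI.
                    string response = client.DownloadString(
                        "http://mydomain.com/service/method?parameter=var1&parameter=var2");

                    Console.WriteLine(response);
                }
            }
        }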

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for well-architected separation of the business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we have needs for presenting lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing and exporting, and the other things we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI, in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are:

    - Telerik's RadControls for WPF, since I had heard some good things about Telerik
    - Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools
    - WPF Toolkit on CodePlex, since many of my colleagues have used it
    - Blacklight CodePlex project, which had WPF support for the Tile View control (with Release 4.3, WPF is not going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison)

    Click Here to Download the WPF Control Toolkits Comparison

    Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but its time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays its rarely used because its table-stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. Its kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First retailers added e-commerce to their store and catalog channels yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to breakdown the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts of providing a consistent experience. Each has its own cart, its own pricing, and often its own CRM. This was an outcrop of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR-codes in a catalog to create a shopping cart of items. Then do some further research on the retailer's Web site and be told about related items that might interest me. Be able to easily solicit opinions and reviews from social sites, and finally enter the store to pickup my items, knowing that any applicable coupons have been applied. In this scenario, I the consumer are dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • x axis detection issues platformer starter kit

    - by dbomb101
    I've come across a problem with the collision detection code in the platformer starter kit for XNA. It will set the impassable flag on the X axis despite the player being nowhere near a wall in either direction on the X axis. Could someone tell me why this happens? Here is the collision method:

        /// <summary>
        /// Detects and resolves all collisions between the player and his neighboring
        /// tiles. When a collision is detected, the player is pushed away along one
        /// axis to prevent overlapping. There is some special logic for the Y axis to
        /// handle platforms which behave differently depending on direction of movement.
        /// </summary>
        private void HandleCollisions()
        {
            // Get the player's bounding rectangle and find neighboring tiles.
            Rectangle bounds = BoundingRectangle;
            int leftTile = (int)Math.Floor((float)bounds.Left / Tile.Width);
            int rightTile = (int)Math.Ceiling(((float)bounds.Right / Tile.Width)) - 1;
            int topTile = (int)Math.Floor((float)bounds.Top / Tile.Height);
            int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / Tile.Height)) - 1;

            // Reset flag to search for ground collision.
            isOnGround = false;

            // For each potentially colliding tile,
            for (int y = topTile; y <= bottomTile; ++y)
            {
                for (int x = leftTile; x <= rightTile; ++x)
                {
                    // If this tile is collidable,
                    TileCollision collision = Level.GetCollision(x, y);
                    if (collision != TileCollision.Passable)
                    {
                        // Determine collision depth (with direction) and magnitude.
                        Rectangle tileBounds = Level.GetBounds(x, y);
                        Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds);
                        if (depth != Vector2.Zero)
                        {
                            float absDepthX = Math.Abs(depth.X);
                            float absDepthY = Math.Abs(depth.Y);

                            // Resolve the collision along the shallow axis.
                            if (absDepthY < absDepthX || collision == TileCollision.Platform)
                            {
                                // If we crossed the top of a tile, we are on the ground.
                                if (previousBottom <= tileBounds.Top)
                                    isOnGround = true;

                                // Ignore platforms, unless we are on the ground.
                                if (collision == TileCollision.Impassable || IsOnGround)
                                {
                                    // Resolve the collision along the Y axis.
                                    Position = new Vector2(Position.X, Position.Y + depth.Y);

                                    // Perform further collisions with the new bounds.
                                    bounds = BoundingRectangle;
                                }
                            }
                            // This is the section which deals with collision on the x-axis
                            else if (collision == TileCollision.Impassable) // Ignore platforms.
                            {
                                // Resolve the collision along the X axis.
                                Position = new Vector2(Position.X + depth.X, Position.Y);

                                // Perform further collisions with the new bounds.
                                bounds = BoundingRectangle;
                            }
                        }
                    }
                }
            }

            // Save the new bounds bottom.
            previousBottom = bounds.Bottom;
        }
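    The behaviour usually comes down to what GetIntersectionDepth returns: as soon as the player's rectangle overlaps a solid tile at all, both depth components are non-zero, and whichever axis has the shallower depth is the one that gets resolved. A rough sketch of how that helper typically works - hedged from memory of the starter kit's RectangleExtensions, not copied from it - is:

        using System;
        using Microsoft.Xna.Framework;

        public static class RectangleExtensionsSketch
        {
            // Returns the signed overlap between two rectangles, or Vector2.Zero when they
            // do not intersect. The sign tells you which direction to push the first rectangle.
            public static Vector2 GetIntersectionDepth(Rectangle rectA, Rectangle rectB)
            {
                float halfWidthA = rectA.Width / 2.0f;
                float halfHeightA = rectA.Height / 2.0f;
                float halfWidthB = rectB.Width / 2.0f;
                float halfHeightB = rectB.Height / 2.0f;

                // Distance between centres along each axis.
                float distanceX = (rectA.Left + halfWidthA) - (rectB.Left + halfWidthB);
                float distanceY = (rectA.Top + halfHeightA) - (rectB.Top + halfHeightB);
                float minDistanceX = halfWidthA + halfWidthB;
                float minDistanceY = halfHeightA + halfHeightB;

                // No overlap on either axis means no intersection at all.
                if (Math.Abs(distanceX) >= minDistanceX || Math.Abs(distanceY) >= minDistanceY)
                    return Vector2.Zero;

                // Depth is how far past the touching distance the rectangles have moved.
                float depthX = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
                float depthY = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;
                return new Vector2(depthX, depthY);
            }
        }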

    Read the article

  • Engine Rendering pipeline : Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the application layer in Objective-C and a separate, self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer. It is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do:

        void RendererGLES20::render(Model * model)
        {
            // Set a bunch of uniforms
            glUniformMatrix4fv(.......);

            // Enable specific attributes, can be many
            glEnableVertexAttribArray(......);

            // Set a bunch of vertex attribute pointers:
            glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords);

            // Now actually draw the geometry
            glDrawArrays(GL_TRIANGLES, 0, m->vertexCount);

            // After drawing, disable any vertex attributes:
            glDisableVertexAttribArray(.......);
        }

    As you can see, this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attributes etc. In other words, I would have to change the RendererGLES20 source code just to incorporate the new shader. Is there any way to make the shader object totally generic? What if I just want to change the shader object and not worry about recompiling the game source? Any way to make the renderer agnostic of uniforms and attributes etc.? Even though we need to pass data to uniforms, what is the best place to do that? The Model class? Is the model class aware of shader-specific uniforms and attributes? The following shows the Actor class:

        class Actor : public ISceneNode
        {
            ModelController * model;
            AIController * AI;
        };

    Model controller class:

        class ModelController
        {
            class IShader * shader;
            int textureId;
            vec4 tint;
            float alpha;
            struct Vertex * vertexArray;
        };

    The Shader class just contains the shader object, compiling and linking sub-routines etc. In the game logic class I am actually rendering the object:

        void GameLogic::update(float dt)
        {
            IRenderer * renderer = g_application->GetRenderer();
            Actor * a = GetActor(id);
            renderer->render(a->model);
        }

    Please note that even though Actor extends ISceneNode, I haven't started implementing the scene graph yet. I will do that as soon as I resolve this issue. Any ideas how to improve this? Related design patterns etc.? Thank you for reading the question.
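    One common answer to the "renderer agnostic of uniforms" question is to move shader parameters into a generic name/value bag (a "material") that the model carries and the renderer simply iterates over; adding a ripple shader then means adding entries to the bag rather than editing the renderer. The sketch below expresses the idea in C# rather than the C++/GLES code above, purely to show the shape of the design; the actual glUniform*/glVertexAttribPointer dispatch is left as comments because its form depends on the engine.

        using System.Collections.Generic;

        // A generic bag of shader parameters; the renderer never needs to know what they mean.
        public class Material
        {
            public string ShaderName;
            public readonly Dictionary<string, float[]> Uniforms = new Dictionary<string, float[]>();
        }

        public class Renderer
        {
            public void Render(Material material /*, geometry ... */)
            {
                // Bind the shader named by material.ShaderName here.
                foreach (KeyValuePair<string, float[]> uniform in material.Uniforms)
                {
                    // Look up the uniform location by name and upload the value,
                    // e.g. a 16-element array maps to a matrix upload, a 4-element
                    // array to a vec4 upload, and so on.
                }
                // Bind vertex attributes and issue the draw call here.
            }
        }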

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
            [ID] [int] IDENTITY(11,1) NOT NULL,
            [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    When the seed value is 11, the next value which is inserted has the identity column value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the DELETE statement

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause it will delete all the rows. However, when a new record is inserted the identity value is increased from 11 to 12. It does not reset but keeps on increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed it will remove all the rows. However, when a new record is inserted the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the RESEED statement

    If you notice, I am using the reseed value 1. The original seed value when I created the table was 11. However, I am reseeding it with value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more value and check, it will generate the new value 2. The logic for this new value is Reseed Value + Interval Value - in this case it will be 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean up act.

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: If I reseed the value with some random number and follow it with the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then follow up with TRUNCATE TABLE, what will the seed value be now?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads.  When crossing a barrier though, like a body of water or valley, we are required to build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy.Yet we are not talking here about driving a car or riding a bus.  As we in the computing world are evidencing the move to the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud and mobile is one of the most difficult challenges enterprises have faced.  Constant technological changes have weakened the business value of legacy systems, which have been developed over the years through huge investments.  There are several risks of course in this move.  Do you choose to simply rewrite code of legacy apps and transform them to HTML5 one by one?  This is quite expensive (according to research firm Gartner, the cost is $6 - $26 per line of code).  Of course, the pace of the rewriting process is very slow – around 170 lines per day for each developer – which slows down business productivity in a world in which no organization can afford to fall behind.  Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that worked for you for years.  How will the user experience be affected?  And of course, what about data security?  So we are faced with the challenge of building a sturdy bridge to stabilize our move in order to allow us to confidently and easily move our legacy applications into the post-PC era.   We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio.Developers: To download the tool, and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx.The CTP is the first and only tool-based solution allowing any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud and mobile applications, including the ability to upgrade their code and UI while doing so.   It is the only solution to fully replicate enterprise desktop applications behavior in the post-PC era.  With Instant CloudMove, the transposed application is available on any mobile or tablet device, browser and across any client operating system. Moreover, the extended application logic and data remains on the server behind the fire-wall and therefore the application’s front end is secured-by-design.   We would love for you to try out the tool for yourselves and let us know what you think.  How are you finding the move?

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes:

    - The patch number is 9791839.
    - This patchset includes 28 new bug fixes since the last patchset release on March 31.
    - This is a cumulative update that includes all the fixes and enhancements from previous updates. The patch supersedes the other two updates.
    - Install instructions are in the readme inside the patch.
    - There is also a new BIP client patch available, 9821068. No new template-building features to my knowledge, but there is an update to the Template Viewer to allow you to test and debug your shiny new Excel templates.

    Server

    - 8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
    - 8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
    - 9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
    - 9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI
    - 9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
    - 9556338 SIEBEL - BIP PARAMETERS SORT ORDER
    - 9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
    - 9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
    - 9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
    - 9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
    - 9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
    - 9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
    - 9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
    - 9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
    - 9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
    - 9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

    Core

    - 8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
    - 9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
    - 9487030 NAVIGATION TREE REPEATING TWICE IN PDF DCCUMENT CREATED BY BI PUBLISHER
    - 9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
    - 9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
    - 9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
    - 9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
    - 9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
    - 9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS
    - 9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
    - 9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933
    - 9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

    You might have noticed some fixes and enhancements to the Excel templates, so I can get back onto those now. There is a part two to the MapViewer BIP mashup coming... just need another 4 hours in the day to squeeze it in.

    Read the article

  • iTorque for a simple arcade game

    - by Herfus
    I have a basic understanding of programming, but I am no programmer. I've had a couple of semesters of Java programming, so we're talking pretty basic here. I have some scripting experience with game editors, where I've created a few (simple) encounters, boss AI, abilities, events and so on. I've mostly done level design with UDK, Source and several other toolsets for a few years now, but I'd like to divert some of that focus to iPhone development. I've participated in a few development projects (Source, UDK, daot) where I've had a variety of roles (yet never beyond simple scripting).

    I have just finished prototyping an iPhone game (using Game Maker) and begun more precise planning for what I'll have to do for the real version. The game is fairly simple; perhaps the best comparison in scope and complexity would be Doodle Jump for iPhone. The reason I created the prototype was not just to answer a few questions about the gameplay, but to get some insight into the kinds of problems I might face when developing the real thing. I've been looking for engines I can use for this, and iTorque looks, so far, like the best option, with a scripting language and a WYSIWYG editor. However, the price is fairly steep and I'd like to prepare myself as much as possible before jumping in, which is why I'm asking a few questions here.

    What kind of difficulties do you think I might run into, considering what you've read so far? Not just with Torque, but with development in general. I'm asking mostly to have someone reality-check me. I usually manage to do what I'm trying to do with scripting, but something tells me there's a very big difference between scripting an AI or an event and creating game logic. Will it be too much of a leap? Just how simple is the Torque scripting language to use? Obviously I don't expect to be fully prepared - I expect it to be a learning process - but I'd still like to be at least a bit confident about the time I'll have to dedicate to this first.

    Read the article

  • Regular Expression Transformation

    The Regular Expression Transformation exposes the power of regular expression matching within the pipeline. One or more columns can be selected, and for each column an individual expression can be applied. The way multiple columns are handled is set on the options page: the AND option means all columns must match, whilst the OR option means only one column has to match. Rows that pass their tests are passed down the successful match output; rows that fail are directed down the alternate output. This transformation is ideal for validating data through the use of regular expressions. A sketch of this matching logic is shown after the version history below.

    You can enter any expression you like, or select a pre-configured expression within the editor. You can expand the list of pre-configured expressions yourself; they are stored in an XML file, %ProgramFiles%\Microsoft SQL Server\nnn\DTS\PipelineComponents\RegExTransform.xml, where nnn represents the folder version, 90 for 2005, 100 for 2008 and 110 for 2012. If you want to use regular expressions to manipulate data, rather than just validate it, try the RegexClean Transformation.

    The component is provided as an MSI file; however, for 2005/2008 you will have to add the transformation to the Visual Studio toolbox by hand. This process is described in detail in the related FAQ entry How do I install a task or transform component? - just select Regular Expression Transformation in the Choose Toolbox Items window.

    Downloads

    The Regular Expression Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    Regular Expression Transformation for SQL Server 2005
    Regular Expression Transformation for SQL Server 2008
    Regular Expression Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    Version 2.0.0.87 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    Version 2.0.0.87 - Release for SQL Server 2008 Integration Services. (10 Oct 2008)

    SQL Server 2005
    Version 1.1.0.93 - Added option for you to choose AND or OR logic when multiple columns have been selected. Previously the behaviour was OR only. (31 Jul 2008)
    Version 1.0.0.76 - Installer update and improved exception handling. (28 Jan 2008)
    Version 1.0.0.41 - Update for user interface stability fixes. (2 Aug 2006)
    Version 1.0.0.24 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    Version 1.0.0.9 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)
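
    To make the AND/OR behaviour concrete, here is a minimal C# sketch of the per-column matching logic described above. This is not the component's actual source; the row type, column names and expressions are illustrative assumptions only.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        // Illustrative only: mimics the AND/OR column matching the transform applies.
        enum MultiColumnLogic { And, Or }

        static class RegexMatchSketch
        {
            // Returns true if the row should go to the successful match output.
            static bool RowMatches(IDictionary<string, string> row,
                                   IDictionary<string, Regex> columnPatterns,
                                   MultiColumnLogic logic)
            {
                var results = columnPatterns
                    .Select(p => p.Value.IsMatch(row[p.Key] ?? string.Empty));
                return logic == MultiColumnLogic.And ? results.All(m => m)
                                                     : results.Any(m => m);
            }

            static void Main()
            {
                // Hypothetical columns and expressions
                var patterns = new Dictionary<string, Regex>
                {
                    ["Email"]    = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
                    ["PostCode"] = new Regex(@"^\d{5}$")
                };

                var row = new Dictionary<string, string>
                {
                    ["Email"]    = "someone@example.com",
                    ["PostCode"] = "ABC"
                };

                Console.WriteLine(RowMatches(row, patterns, MultiColumnLogic.And)); // False - PostCode fails
                Console.WriteLine(RowMatches(row, patterns, MultiColumnLogic.Or));  // True  - Email matches
            }
        }

    In the real component, rows that return true would flow down the match output and the rest down the alternate output; the sketch only shows how the AND and OR options change the outcome for the same row.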

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation, "Best Practices for Developing Performant Application", my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction of up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2 - I'll get to that in a moment. It was also pretty easy in Oracle 11.1, but it needed some tiny application changes. In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query.

    Now in Oracle 11.2, DBAs can nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit flag requirement has been lifted. Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling.

    There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers" then make sure you use oci_pconnect() persistent connections. Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
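
    For reference, nominating a table on the server side is a one-line DDL change. A minimal illustration, assuming the standard 11.2 table annotation syntax and reusing the employees table from the queries above:

        ALTER TABLE employees RESULT_CACHE (MODE FORCE);   -- cache result sets for queries on this table
        ALTER TABLE employees RESULT_CACHE (MODE DEFAULT); -- revert to the default behaviour

    With MODE FORCE in place, the plain "select * from employees" shown above becomes a candidate for client result caching without any change to the PHP code.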

    Read the article

< Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >