Search Results

Search found 13664 results on 547 pages for 'storage engine'.


  • Bulk get of child entities on Google App Engine?

    - by dfrankow
    On Google App Engine in Python, I have a Visit entity that has a parent of Patient. A Patient may have multiple visits. I need to set the most_recent_visit (and some auxiliary visit data) somewhere for later querying, probably in another child entity that Brett Slatkin might call a "relationship index entity." I wish to do so in a bulk style as follows:

    1. get 100 Patient keys
    2. get all Visits that have any of the Patients from 1 as an ancestor
    3. go through the Visits and get the latest for each Patient

    Is there any way to perform step 2 in a bulk get? I know there is a way to bulk-get entities from a list of keys, and there is a way to match a single entity by its ancestor.
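
    A minimal sketch of the usual fallback, assuming the old google.appengine.ext.db API: the datastore has no multi-ancestor query, so step 2 becomes one ancestor query per Patient key, keeping only the newest Visit. The "when" property name is hypothetical.

        from google.appengine.ext import db

        def latest_visits(patient_keys):
            # Map each Patient key to its most recent Visit (one query per key).
            latest = {}
            for pkey in patient_keys:
                visit = (db.Query(Visit)       # Visit model from the question
                           .ancestor(pkey)     # restrict to this Patient's entity group
                           .order('-when')     # hypothetical DateTimeProperty, newest first
                           .get())             # fetch only the single newest Visit
                if visit is not None:
                    latest[pkey] = visit
            return latest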

    Read the article

  • Google App Engine: updating an object from a servlet not working?

    - by Frank
    I use the following code to update an object from a servlet in Google App Engine:

        String Time_Stamp=Get_Date_Format(6),query="select from "+Contact_Info_Entry.class.getName()+" where Contact_Id == '"+Contact_Id+"' order by Contact_Id desc";
        PersistenceManager pm=null;
        try
        {
          pm=PMF.get().getPersistenceManager();
          // note that this returns a list; there could be multiple results, since the datastore does not ensure uniqueness for non-primary-key fields
          List<Contact_Info_Entry> results=(List<Contact_Info_Entry>)pm.newQuery(query).execute();
          Contact_Info_Entry A_Contact_Entry=results.get(0);
          A_Contact_Entry.Extra_10=Time_Stamp;
          pm.makePersistent(A_Contact_Entry);
        }
        catch (Exception e)
        {
          Send_Email(Email_From,Email_To,"Check_License_Servlet Error [ "+Time_Stamp+" ]",new Text(e.toString()+"\n"+Get_Stack_Trace(e)),null);
        }
        finally
        {
          pm.close();
        }

    The value "[ 2010-05-13 Thu 15:58:31 ]" was in A_Contact_Entry.Extra_10, but it seems "pm.makePersistent(A_Contact_Entry);" was not executed: the object was not updated, and there was no error message. Why? How can I fix it?

    Read the article

  • Is there a graphics/game engine that supports PC & Mac?

    - by Chris Masterton
    Is there a graphics/game engine that runs on both Mac & PC? I've seen Unity and that's a possibility; I'm wondering if there are any other choices. Ideally I want to port the same C++ game code to both PC & Mac platforms, but let the underlying game/graphics engine take advantage of the appropriate hardware. Edit: I'm looking at the level of Torque, Gamebryo & Unreal. A commercial solution is perfectly acceptable. Thanks, Chris

    Read the article

  • Google App Engine - About how much quota does a single datastore put use?

    - by Spines
    The latency for a datastore put is about 150ms (http://code.google.com/status/appengine/detail/datastore/2010/03/11#ae-trust-detail-datastore-put-latency). About how much CPUTime is used by a single datastore put with a data size of 100 bytes, into an entity that has only 2 columns and no indexes? I plan to do some testing with this later today to figure it out, but if anyone already knows, that would help me out :). Also, does anyone know about how much extra CPUTime overhead doing this datastore put through the task queue would add? Note: this is kind of a follow-up to this question: http://stackoverflow.com/questions/2421075/google-app-engine-how-reliable-are-the-logs.
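
    One way to measure this yourself, as a hedged sketch assuming the old Python runtime's quota API (which reports CPU in megacycles for the current request). The Probe model and its two unindexed properties are hypothetical stand-ins for the entity described above.

        from google.appengine.api import quota
        from google.appengine.ext import db

        class Probe(db.Model):                       # hypothetical 2-property entity
            a = db.StringProperty(indexed=False)     # unindexed, to isolate the raw put cost
            b = db.StringProperty(indexed=False)

        def measure_put_megacycles():
            start = quota.get_request_cpu_usage()    # megacycles used so far in this request
            Probe(a='x' * 50, b='y' * 50).put()      # roughly 100 bytes of payload
            return quota.get_request_cpu_usage() - start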

    Read the article

  • How to set a complex custom crontab in google-app-engine (java)?

    - by Shreeni
    I am building an app for which I need to set up cron jobs. What I want to do is to set the specific minutes in an hour at which specific crons should run. For instance:

    1. Task1 at the 1st minute of the hour
    2. Task2 every second minute of the hour
    3. Task3 every 2 minutes, only in the second half of the hour

    Building this in the standard Unix cron format is reasonably straightforward, but I could not figure out how to do it in Google App Engine. The documentation does not list any non-trivial examples. Any suggestions on how to do it? Examples would be excellent.
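
    App Engine's cron schedule format handles simple intervals but not arbitrary minute-of-hour patterns, so one common workaround is a single cron entry that fires every minute, with the handler deciding what to run. A sketch follows, shown in Python for brevity (the same dispatch pattern works in a Java servlet); the run_task* helpers are hypothetical.

        import datetime

        def cron_dispatch():
            # Single handler behind e.g. "every 1 minutes" in cron.xml / cron.yaml.
            minute = datetime.datetime.utcnow().minute
            if minute == 1:
                run_task1()                          # 1st minute of the hour
            if minute % 2 == 0:
                run_task2()                          # every second minute
            if minute >= 30 and minute % 2 == 0:
                run_task3()                          # every 2 minutes, second half-hour only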

    Read the article

  • Is there a good open source search engine including indexing bot which can be used to make up speci

    - by Skuta
    Hello, our application (C#/.NET) needs to run a lot of search queries. Google's limit of 50,000 queries per day is not enough. We need something that would crawl Internet websites by specific rules we set (e.g. country domains), gather URLs, texts, keywords, and names of websites, and create our own internal catalogue, so we wouldn't be limited to any massive external search engine like Google or Yahoo. Is there any free open source solution we could install on our server? No point in re-inventing the wheel.

    Read the article

  • Python wrapper for Google Maps making Google App Engine crash?

    - by user1679332
    I am currently trying to build a web app using Google App Engine that will involve using the Google Maps API. Since I am coding in Python, I tried importing the Python wrapper for Google Maps (found here); however, performing the import causes my web app to crash. Are there any suggestions for how I can fix this problem? I'm guessing the crash might have something to do with how I incorporate the Google Maps Python wrapper into my application, but how do I go about doing that? Thanks!
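
    If the wrapper is pure Python, the usual approach, sketched below, is to bundle it with the app (for example in a lib/ folder next to app.yaml) and put that folder on sys.path before importing. Note that C-extension dependencies will not load in the App Engine sandbox and are a common cause of import-time crashes. The module and class names here are assumptions.

        import os
        import sys

        # Make ./lib (shipped with the app) importable before any third-party import.
        sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

        from googlemaps import GoogleMaps   # hypothetical module/class name

        gmaps = GoogleMaps()                # fails fast here if a C extension is missing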

    Read the article

  • How can I receive percent encoded slashes with Django on App Engine?

    - by J. Frankenstein
    I'm using Django with Google's App Engine. I want to send information to the server with percent-encoded slashes. A request like http://localhost/turtle/waxy%2Fsmooth should match against a URL pattern like r'^/turtle/(?P<type>([A-Za-z]|%2F)+)$'. The request gets to the server intact, but sometime before it is compared against the regex, the %2F is converted into a forward slash. What can I do to stop the %2Fs from being converted into forward slashes? Thanks!
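
    A hedged workaround sketch: WSGI/CGI servers generally decode %2F before URL resolution, so one option is to double-encode the slash on the client (%252F), let the server decode it once back to %2F, match that in the pattern, and unquote it yourself in the view. The pattern and view names below are assumptions.

        import urllib
        from django.http import HttpResponse

        # urls.py (assumption): r'^turtle/(?P<kind>[^/]+)$'
        # Client requests /turtle/waxy%252Fsmooth; Django's resolver sees 'waxy%2Fsmooth'.

        def turtle_view(request, kind):
            real_kind = urllib.unquote(kind)   # 'waxy%2Fsmooth' -> 'waxy/smooth'
            return HttpResponse(real_kind)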

    Read the article

  • App Engine's back-referencing is too slow. How can I make it faster?

    - by Ray Yun
    Google App Engine has a smart feature named back-references, and I usually iterate over them where a traditional SQL computed column would be needed. Just imagine needing to accumulate a specific force's total hp:

        class Force(db.Model):
            hp = db.IntegerProperty()

        class UnitGroup(db.Model):
            force = db.ReferenceProperty(reference_class=Force, collection_name="groups")
            hp = db.IntegerProperty()

        class Unit(db.Model):
            group = db.ReferenceProperty(reference_class=UnitGroup, collection_name="units")
            hp = db.IntegerProperty()

    When I coded it like the following, it was horribly slow (almost 3s) with 20 forces, each with a single group holding a single unit. (I guess back-referencing forces a reload of the sub-entities. Am I right?)

        def get_hp(self):
            hp = 0
            for group in self.groups:
                group_hp = 0
                for unit in group.units:
                    group_hp += unit.hp
                hp += group_hp
            return hp

    How can I optimize this code? Please consider that there are more properties to be computed for each force/unit-group, and I don't want to save these collective properties to each entity. :)
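
    One direction to try, as a hedged sketch: the slowness is the N+1 pattern (every back-reference iteration is its own datastore round trip), so collapse it into one query for the groups plus one IN query for the units. The old db API caps IN filters at 30 values, and the fetch limits here are assumptions.

        def get_hp(force):
            groups = force.groups.fetch(30)               # one query for all groups
            keys = [g.key() for g in groups]
            # One IN query (internally fanned out, max 30 keys) instead of a
            # lazy query per group.
            units = Unit.all().filter('group IN', keys).fetch(1000)
            return sum(unit.hp for unit in units)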

    Read the article

  • What is the most efficient way to pass data (list of pairs of [Integer + Double]) between two Google App Engine instances?

    - by ruslan
    What is the most efficient way to pass data (a list of pairs of [Integer, Double]) between two Google App Engine instances? Currently I use Java binary serialization. The frontend servlet receives data from the client in JSON format. I convert it to byte[] using ObjectOutput.writeObject and then send it to the backend servlet via HTTP POST. It's not in production yet. Should I just pass the client's JSON as-is to the backend? It seems more logical, but it's bigger in size. Or should I use Google Protocol Buffers, as stated in this benchmark article? Thank you!!!
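
    If size is the main worry, one compact middle ground is fixed-width binary packing: 12 bytes per (integer, double) pair, smaller than JSON and free of Java-serialization versioning baggage. A sketch of the idea follows, shown in Python for brevity; in the question's Java stack, ByteBuffer's putInt/putDouble gives the same layout.

        import struct

        PAIR = struct.Struct('>id')     # 4-byte int + 8-byte double, big-endian

        def pack_pairs(pairs):
            return b''.join(PAIR.pack(i, d) for i, d in pairs)

        def unpack_pairs(data):
            return [PAIR.unpack_from(data, offset)
                    for offset in range(0, len(data), PAIR.size)]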

    Read the article

  • How to get Google App Engine logs in C#?

    - by Max
    I am trying to retrieve App Engine logs, but the only result I get is "# next_offset=None". Below is my code:

        internal string GetLogs()
        {
            string result = _connection.Get("/api/request_logs", GetPostParameters(null));
            return result;
        }

        private Dictionary<string, string> GetPostParameters(Dictionary<string, string> customParameters)
        {
            Dictionary<string, string> parameters = new Dictionary<string, string>()
            {
                { "app_id", _settings.AppId },
                { "version", _settings.Version.ToString() }
            };
            if (customParameters != null)
            {
                foreach (string key in customParameters.Keys)
                {
                    if (parameters.ContainsKey(key))
                    {
                        parameters[key] = customParameters[key];
                    }
                    else
                    {
                        parameters.Add(key, customParameters[key]);
                    }
                }
            }
            return parameters;
        }

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for the protection of the business. Yes, I said business: not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'd be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance

    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks; clicking on these opens a window with additional information about the topic in question, which should help guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies

    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It copies your backup to the network location you define after the backup completes, so it doesn't cause any additional blocking or resource use within the backup process. Creating a copy acts as a mechanism of protection for your backups: you can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get done within the native Microsoft backup engine.

    Offsite Storage

    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup completes first, just like with the network backup copy; then an asynchronous process copies that backup to cloud storage. Again, this is built right into the wizard, and even into the command-line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but that's built right into the product: you'll be directed to the web site for our hosted storage, where you can set up an account.

    Compression

    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product, which means you can get a little compression faster, or you can sacrifice some CPU time and get even more compression. You decide. As a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB; our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance to help you determine which level of compression would be right for you and your system. For this test, if you wanted maximum compression with minimum CPU use you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources; that compression is still better than the native one by 10%.

    Restore Testing

    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup; the only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide whether you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there; if not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and whether to limit or add to the checks being run. With this you could offload some DBCC checks from your production system, running only the physical checks on your production box and the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, you can delete the restored database if the tests pass, leave it in place, or delete it regardless of whether the tests pass. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process ensures that you don't just have backups, but that you have tested backups.

    Single Point of Management

    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases' and servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard

    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that walks you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary

    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Getting a job in the games industry as a developer, just knowing a game engine

    - by numerical25
    I recently enrolled in a community college for game development, but I am skeptical about the curriculum. I have no experience in the gaming industry, so I wouldn't be able to tell whether it's a good investment or not. So I am asking you. I don't want to get too much into the details of all the classes I am taking, so I will try to be brief. By the time I graduate, I should have an understanding of how a game engine works. I will be working with the Unreal Engine to develop a multiplayer game from scratch. So in the process of my final project, I will learn how to work within the Unreal Engine, learn Python, and learn how to use its API to connect to a remote server and build game mechanics. Overall, I will also receive an associate's degree in game development. I learn C++ but not C; the director said he was trying to implement C in the program as well. What I notice is that I will not learn how to build a 3D game engine from scratch. They do not teach any artificial intelligence (AI). I will not learn how to work with the graphics card using a graphics API such as DirectX or OpenGL. I know building a game engine from scratch is a little complex, but at the same time the track requires me to take some advanced mathematics courses such as calculus and geometry 1 and 2, and I also have to take a physics class. I just think that's a little much for only learning how to use the Unreal Engine rather than actually building one or learning the anatomy of a game engine. Is this good enough to possibly land me a job in the industry? If I left anything out or was not detailed enough, please feel free to ask more questions. Edit: I do learn data structures and algorithms.

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems:

    • Fluid Dynamics Search Engine: Since it's written in Perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results.
    • Google CSE: Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo.
    • Sphider: Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application.
    • IndexTank: This one looked really promising. I got all set up with their PHP API client, but it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank: got the above issue fixed, so this looks most promising so far.

    The site itself runs on PHP/MySQL and FreeBSD, though this shouldn't matter for a web-crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if that would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the Cloud is on-demand scalability, which lets cloud application developers scale up or scale down the number of compute resources hosted on the Cloud. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud based solutions: it can unleash the real power of the Cloud for apps that need truly on-demand scalability, and it can also guard the organizational budget for cloud based application deployment. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the auto scaling block in a Windows Azure Worker Role to effectively apply the auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and it is available to individual Windows Azure Compute Services as part of the Scale settings.

    Configure Auto Scaling for a Windows Azure Cloud Service

    Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the "SCALE" tab. The Scale tab shows all the information regarding Auto Scaling. The below image shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE; the QUEUE option is not available for Web Sites. The image below demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 virtual machine instances based on CPU utilization within a range of 50 to 80%: if the aggregate utilization goes above 80%, it will scale up instances, and it will scale down instances when utilization drops below 50%. The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage Queue. We configured the worker role app to run with 1 to 3 virtual machine instances based on the queue messages added to the Windows Azure storage Queue; here we have specified a target of 2000 messages per machine. The image below shows the summary of Auto Scaling for the Cloud Service after configuring the service.

    Summary

    Auto Scaling is an extremely important behaviour of cloud applications for providing on-demand scalability without any manual intervention. Windows Azure provides great support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. With the new service, you don't have to host any monitoring service as you did with the WASABi block, which makes it an excellent alternative to manually hosting the WASABi block in a Worker Role app.

    Read the article

  • Problems with Level Architect, Citrus Engine, Flash

    - by Idan
    I am using the Citrus Engine to make a Flash game, and the Level Architect doesn't work well for me. Firstly, when I first launch it and open my project and my level, nothing is shown: no assets, and none of what I have previously done with my level. To fix it, I open another project. The other project works fine, meaning I can see the assets and the level. Then I go back to the actual project I am working on, and the problem is fixed, only it does not fix the second problem: I can't add my own assets. I follow the manual and add tags like this:

        [Property(value="0")]

    But it doesn't change a thing in the Level Architect window (even after I close and reopen it). Any ideas? Thanks! Here's the code of the class I want to be shown in the Level Architect:

        package
        {
            import com.citrusengine.objects.PhysicsObject;
            import com.citrusengine.objects.platformer.Sensor;
            import flash.utils.clearTimeout;
            import flash.utils.setTimeout;

            /**
             * @author Aymeric
             */
            public class Teleporter extends Sensor
            {
                [Property(value="0")]
                public var endX:Number = 0;

                [Property(value="0")]
                public var endY:Number = 0;

                public var object:PhysicsObject;

                [Property(value="0")]
                public var time:Number = 0;

                public var needToTeleport:Boolean = false;
                protected var _teleporting:Boolean = false;
                private var _teleportTimeoutID:uint;

                public function Teleporter(name:String, params:Object = null)
                {
                    super(name, params);
                }

                override public function destroy():void
                {
                    clearTimeout(_teleportTimeoutID);
                    super.destroy();
                }

                override public function update(timeDelta:Number):void
                {
                    super.update(timeDelta);

                    if (needToTeleport)
                    {
                        _teleporting = true;
                        _teleportTimeoutID = setTimeout(_teleport, time);
                        needToTeleport = false;
                    }

                    _updateAnimation();
                }

                protected function _teleport():void
                {
                    _teleporting = false;
                    object.x = endX;
                    object.y = endY;
                    clearTimeout(_teleportTimeoutID);
                }

                protected function _updateAnimation():void
                {
                    if (_teleporting)
                    {
                        _animation = "teleport";
                    }
                    else
                    {
                        _animation = "normal";
                    }
                }
            }
        }

    Read the article

  • Is dynamic HTML layout good from an SEO perspective?

    - by sll
    Just wondering whether a dynamically built HTML layout is fine from an SEO perspective. Let's assume an e-commerce engine and its most popular page, the product catalog, where 90% of the page is built with AJAX and the MVVM library Knockout.js, which builds the HTML on the fly on the client side. How would search bots parse such content? Would it be indexed properly and be as effective, from an SEO perspective, as server-side-built HTML pages?

    Read the article

  • Constraint-based expert systems design and development [on hold]

    - by Alex B.
    I would appreciate some recommendations & resources on the design and development of expert systems, in particular knowledge-based & constraint-based (not recommendation) systems. Ideally, your answers should consider the perspective (context) of using a SaaS business model and an open source rules engine. How would you advise addressing performance, scalability and other architectural criteria? Any other considerations on undertaking such a project will be appreciated. Thanks much in advance!

    Read the article

  • Game physics / 2D Collision detection AS3

    - by Jery
    I know there are some methods you can use like hitTestPoint and so on, but I want to see where my movieclip collides with another movieclip. Any other methods I can use? By any chance, does somebody know a good introduction to game physics? I'm asking because I coded a small engine and pretty much the whole code is spaghetti code; that's why I would like to know how to set up something like this properly.

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    Engine (in a separate DLL):

        public class Engine
        {
            DatabaseEntities _engineContext;

            public Engine()
            {
                // Connection string and procedure managed in DB layer
                _engineContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
            {
                // Suppose there's some validation too, non-trivial stuff
                someEntity.Value = newValue;
                _engineContext.SaveChanges();
            }
        }

    And the report:

        public partial class MyReport : Page
        {
            Engine _engine;
            DatabaseEntities _webpageContext;

            public MyReport()
            {
                _engine = new Engine();
                _webpageContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
            {
                SomeEntity someEntity;

                // Wrong way:
                // Get the entity from the webpage context
                someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
                // Send the entity from _webpageContext to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of contexts

                // Right(?) way:
                // Get the entity from the engine context
                someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
                // Send the entity from the engine's context to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
            }
        }

    Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, to only give the Engine entities from its own context. But this is a very error-prone design. I see the error of my ways now; I just don't know the right path. I'm considering:

    1. Creating the connection in the Engine and passing it off to the webpage: always instantiate an Engine, make its context accessible from a property, and share it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    2. Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?).
    3. Only talking through IDs. Creates redundancy, not always practical, sounds archaic. But at the same time, I already get stuff from the page as IDs that I need to fetch anyway.

    What would be the best compromise here for safety, ease of use and understanding, stability, and speed?

    Read the article

  • Engine Rendering pipeline: Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the Application layer in Objective-C and a separate, self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer. It is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do:

        void RendererGLES20::render(Model * model)
        {
            // Set a bunch of uniforms
            glUniformMatrix4fv(.......);

            // Enable specific attributes, can be many
            glEnableVertexAttribArray(......);

            // Set a bunch of vertex attribute pointers:
            glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords);

            // Now actually draw the geometry
            glDrawArrays(GL_TRIANGLES, 0, m->vertexCount);

            // After drawing, disable any vertex attributes:
            glDisableVertexAttribArray(.......);
        }

    As you can see, this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attributes, etc. In other words, I would have to change the RendererGLES20 source code just to incorporate the new shader. Is there any way to make the shader object totally generic? What if I just want to change the shader object and not worry about recompiling the game source? Is there any way to make the renderer agnostic of uniforms and attributes? Even though we need to pass data to uniforms, what is the best place to do that? The Model class? Is the model class aware of shader-specific uniforms and attributes?

    The following shows the Actor class:

        class Actor : public ISceneNode
        {
            ModelController * model;
            AIController * AI;
        };

    Model controller class:

        class ModelController
        {
            class IShader * shader;
            int textureId;
            vec4 tint;
            float alpha;
            struct Vertex * vertexArray;
        };

    The Shader class just contains the shader object, compiling and linking sub-routines, etc. In the GameLogic class I actually render the object:

        void GameLogic::update(float dt)
        {
            IRenderer * renderer = g_application->GetRenderer();
            Actor * a = GetActor(id);
            renderer->render(a->model);
        }

    Please note that even though Actor extends ISceneNode, I haven't started implementing the SceneGraph yet; I will do that as soon as I resolve this issue. Any ideas how to improve this? Related design patterns, etc.? Thank you for reading the question.

    Read the article

  • What's the difference between Canvas and WebGL?

    - by gadr90
    I'm thinking about using CAAT as part of an HTML5 game engine. One of its features is the ability to render to Canvas and WebGL without changing anything in the client code. That is a good thing, but I haven't found precisely what the differences between those two technologies are. I would especially like to know the differences between Canvas and WebGL in the following regards:

    • Framerate
    • Desktop browser support
    • Mobile browser support
    • Futureproofability (TM)

    Read the article

  • How can I stop my Jitter physics meshes being offset?

    - by ben1066
    I'm developing a C# game engine and have hit a snag trying to add physics. I'm using XNA for graphics and Jitter for physics. I am trying to split the XNA model into its meshes, then create a ConvexHull for each mesh. I then attempt to combine those into a CompoundObject. This, however, isn't working, and depending upon the model, the meshes are offset by different amounts. This is the code I'm currently using and it gives me: [code and screenshot missing from the archived page] Any ideas?

    Read the article

  • Is there any online programmer's community, focusing on core game development?

    - by kasperov
    I am looking for a strictly/mostly programming-oriented game community, focusing on core graphics, middleware, and research. Any suggestions? Edit: I am specifically looking for people/a community/group having expertise in core game engine design/programming and in DirectX/OpenGL (and specifically targeting the programming part only). The platforms can be anything from PC to Xbox 360/PS3/Wii and even the 3DS.

    Read the article

  • Google App Engine: how to get an object from a servlet?

    - by Frank
    I have the following class of objects in Google App Engine's datastore; I can see them from the "Datastore Viewer":

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.IdentityType;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        @PersistenceCapable(identityType=IdentityType.APPLICATION)
        public class Contact_Info_Entry implements Serializable
        {
            @PrimaryKey
            @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
            Long Id;

            public static final long serialVersionUID=26362862L;

            String Contact_Id="",First_Name="",Last_Name="",Company_Name="",Branch_Name="",Address_1="",Address_2="",City="",State="",Zip="",Country="";
            double D_1,D_2;
            boolean B_1,B_2;
            Vector<String> A_Vector=new Vector<String>();

            public Contact_Info_Entry() { }
            ......
        }

    How can my Java applications get the object from a servlet URL? For instance, if I have an instance of Contact_Info_Entry whose Contact_Id is "ABC-123", and my App Id is nm-java, when my Java program accesses the URL

        http://nm-java.appspot.com/Check_Contact_Info?Contact_Id=ABC-123

    how will the Check_Contact_Info servlet get the object from the datastore and return it to my app?

        public class Check_Contact_Info_Servlet extends HttpServlet
        {
            static boolean Debug=true;

            public void doGet(HttpServletRequest request,HttpServletResponse response) throws IOException
            {
            }
            ...

            protected void doPost(HttpServletRequest request,HttpServletResponse response) throws ServletException,IOException
            {
                doGet(request,response);
            }
        }

    Frank

    Read the article
