Search Results

Search found 31319 results on 1253 pages for 'source engine'.


  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things rather than tool things; for the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#.

    Update: I know how to google, so I can get the official list from Microsoft. What I really wanted was the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people boiled it down to LINQ and a few other things. What is that short list for VS2010?

    Summary so far of what people think is cool or compelling:

      Visual Studio engine
      - multi-monitor support
      - new extensibility model based on WPF, prettier and more usable
      - new TFS stuff, incl. automated test tools
      - parallel debugging

      .NET Framework
      - parallel extensions for .NET

      C# 4.0
      - generic variance
      - optional and named params
      - easier interop with non-managed environments, like COM or Javascript

      VB 10.0
      - collection and array literals / initializers
      - automatic properties
      - anonymous methods / statement lambdas

    I read up on these at Zander's blog, where he described these and other features. Nobody on this list said anything about:

      Visual Studio engine
      - F# support
      - Javascript code-completion
      - JQuery is now included
      - UML
      - better Sharepoint capabilities
      - C++ moves to msbuild project files

    Read the article

  • Right way of making a multi-site and multi-lingual website on CodeIgniter

    - by DR.GEWA
    Hi there. Let me thank you all beforehand; you really do help a lot. When I finish my web site and have time to watch the user base grow, I will come here again and again to answer other people's questions (if I can).

    So here is the problem. I made a web site on CodeIgniter: a social network engine, something like phpFox, Classmates.com or Facebook. Right now it is not really multilingual, so the UI strings sit in the view files; the next step will be to move them to the language files. I want the user to have the ability to change the language, so I assume the users table will have a column "lang_local", set to "en" by default and then changed to whatever language the user picks.

    What is eating my nerves and energy is the following. I will build several demographic social networks on this engine, and I would like to manage these web sites in a centralized manner with one backend. Whenever I want to launch a new network, I just add the domain settings, install the script in a new folder and add the site to a sites table in the database.

    I see it like this: every table in the database (users, comments, messages, categories, etc.) will have a column site_id, on each add/update/delete query I add a WHERE SITE_ID = XXX, and a table sites(site_id, site_name, domain_name) will hold all the domains, so that in the backend I can filter data by website.

    Is this a good way? What if I then need to go multi-server; what about load balancing? Who can tell me what the right, PROFESSIONAL way would be? My maximum user limit for a database is something like 10,000 at the start and 100,000 users in one to two years.
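
    For illustration, a minimal sketch of the shared-schema approach described above (one database, every tenant-owned table carrying a site_id); the table and column definitions are assumptions for the example, not code from the project:

        CREATE TABLE sites (
            site_id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
            site_name   VARCHAR(100) NOT NULL,
            domain_name VARCHAR(255) NOT NULL,
            PRIMARY KEY (site_id),
            UNIQUE KEY idx_domain (domain_name)
        );

        CREATE TABLE users (
            user_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
            site_id    INT UNSIGNED NOT NULL,   -- which network this row belongs to
            username   VARCHAR(50)  NOT NULL,
            lang_local CHAR(2)      NOT NULL DEFAULT 'en',
            PRIMARY KEY (user_id),
            KEY idx_site (site_id)              -- every per-site query filters on this
        );

        -- Every read/write is scoped to the current site:
        SELECT user_id, username FROM users WHERE site_id = 3;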

    Read the article

  • Patterns for non-layered applications

    - by Paul Stovell
    In Patterns of Enterprise Application Architecture, Martin Fowler writes: "This book is thus about how you decompose an enterprise application into layers and how those layers work together. Most nontrivial enterprise applications use a layered architecture of some form, but in some situations other approaches, such as pipes and filters, are valuable. I don't go into those situations, focussing instead on the context of a layered architecture because it's the most widely useful."

    What patterns exist for building non-layered applications, or non-layered parts of an application? Take a statistical modelling engine for a financial institution. There might be a layer for data access, but I expect that most of the code would be in a single layer. Would you still expect to see Gang of Four patterns in such a layer? How about a domain model? Would you use OO at all, or would it be purely functional?

    The quote mentions pipes and filters as alternate models to layers. I can easily imagine such an engine using pipes as a way to break down the data processing. What other patterns exist? Are there common patterns for areas like task scheduling, results aggregation, or work distribution? What are some alternatives to MapReduce?

    Read the article

  • Can I use a specific model from within a behavior in CakePHP?

    - by Paul Willy
    I'm trying to write a behavior that will give my models access to a simple workflow engine I've devised. The workflow engine itself works as a CakePHP model, with workflow data stored in the database just as any other model data is stored. Basically what I want to do is have the behavior use the workflow model whenever an action is called on the base model.

    For example, if the edit() action is executed for Posts, then the Post (with the behavior attached) will trigger the workflow behavior with its own model name, action, and id as arguments (e.g. [Post, edit, 1]). Then the behavior will invoke the functionality of the Workflow model, which has a record for what to do when edit is run on Posts (e.g. send e-mail to users who are subscribed to that post), and will carry that out.

    My question is, what is the proper way to invoke model/controller methods from within the behavior? The model to be used from within the behavior will always be Workflow, but the behavior should be usable from basically any model (aside from Workflow itself). I know I could run SQL queries directly from the behavior, but of course this is not the Cake way :-)

    Or, am I going about this in the wrong way? I want to store a certain amount of logic in the database so that it is easily configurable by different users, and not have endless configuration checks within the model/controller logic itself, so that workflow steps can be easily added/changed/removed in the future.

    Read the article

  • How should I design my MySQL table(s)?

    - by yaya3
    I built a really basic PHP/MySQL site for an architect that uses one 'projects' table. The website showcases various projects that he has worked on. Each project contained one piece of text and one series of images.

    Original projects table (create syntax):

        CREATE TABLE `projects` (
          `project_id` int(11) NOT NULL auto_increment,
          `project_name` text,
          `project_text` text,
          `image_filenames` text,
          `image_folder` text,
          `project_pdf` text,
          PRIMARY KEY (`project_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;

    The client now requires the following, and I'm not sure how to handle the expansions in my DB. My suspicion is that I will need an additional table.

      - Each project now has 'pages'.
      - Pages contain either one image, one "piece" of text, or one image and one piece of text.
      - Each page could use one of three layouts.

    As each project does not currently have more than 4 pieces of text (a very risky assumption) I have expanded the original table to accommodate everything.

    New projects table attempt (create syntax):

        CREATE TABLE `projects` (
          `project_id` int(11) NOT NULL AUTO_INCREMENT,
          `project_name` text,
          `project_pdf` text,
          `project_image_folder` text,
          `project_img_filenames` text,
          `pages_with_text` text,
          `pages_without_img` text,
          `pages_layout_type` text,
          `pages_title` text,
          `page_text_a` text,
          `page_text_b` text,
          `page_text_c` text,
          `page_text_d` text,
          PRIMARY KEY (`project_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;

    In trying to learn more about MySQL table structuring I have just read an intro to normalization and A Simple Guide to Five Normal Forms in Relational Database Theory. I'm going to keep reading! Thanks in advance.
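
    For illustration, a minimal sketch of the kind of separate, normalized 'pages' table this is circling around; all names and types here are assumptions, not part of the original site:

        CREATE TABLE `pages` (
          `page_id` int(11) NOT NULL AUTO_INCREMENT,
          `project_id` int(11) NOT NULL,           -- references projects.project_id
          `layout_type` tinyint NOT NULL,          -- 1, 2 or 3
          `page_title` text,
          `page_text` text,                        -- NULL for image-only pages
          `page_image` text,                       -- NULL for text-only pages
          `sort_order` int(11) NOT NULL DEFAULT 0,
          PRIMARY KEY (`page_id`),
          KEY `idx_project` (`project_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        -- All pages for one project, in display order:
        SELECT * FROM pages WHERE project_id = 5 ORDER BY sort_order;

    Each page becomes one row, so a project can have any number of pages without adding columns.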

    Read the article

  • Grails Deployment - Fastest way to get deployed?

    - by gav
    Hi All,

    If anyone has or is running a Grails application on their server, I would appreciate some details on where to go after creating the WAR.

    Background: I chose Grails because with Google App Engine and the App Engine plugin, deployment should have been trivial. The issue is that there is a bug which makes any application pretty much unusable; I wish this had been more prominent so I didn't have to get to the point of seeing the error myself before I was aware of it. The next option was EC2 and the Cloud Tools plugin; it seems Cloud Tools worked with Grails 1.0 but doesn't work with the current 1.2.1 due to issues getting the JAR dependencies. It also seems that Cloud Tools has been succeeded by Cloud Foundry, which is in beta, will cost extra money and has limited places (I signed up but haven't got an e-mail).

    Question: My application is painfully trivial; it has a small load, small data requirements and doesn't need to scale past 5 users. How can I deploy my Grails app as quickly and painlessly as possible? Specifically:

      - Are there any hosting companies that have Tomcat installed on their servers out of the box, that I can sign up to and use and that will just work?
      - Do you know of any simple tutorials for getting a Grails application deployed to EC2 without Cloud Tools?

    Thanks in advance,
    Gav

    Side-note: I picked Grails because of good advice from SO; it should have been a very short time from development to deployed product, except the tools for auto-deployment aren't that mature and I've never configured a server before.

    Read the article

  • Eclipse Helios Tomcat error

    - by itsraja
    Hi, I just created a Struts application in Eclipse Helios. When I run it on the server I get an alert like this, even though my browser is online:

        This document cannot be displayed while offline. To go online, uncheck Work Offline from the File menu.

    And this is the error displayed:

        Dec 23, 2010 7:20:37 PM org.apache.catalina.core.AprLifecycleListener init
        SEVERE: An incompatible version 1.1.15 of the APR based Apache Tomcat Native library is installed, while Tomcat requires version 1.1.17
        Dec 23, 2010 7:20:37 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
        WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:StrutsHelloWorld' did not find a matching property.
        Dec 23, 2010 7:20:37 PM org.apache.coyote.http11.Http11Protocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8080
        Dec 23, 2010 7:20:37 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 1081 ms
        Dec 23, 2010 7:20:37 PM org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        Dec 23, 2010 7:20:37 PM org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/6.0.29
        Dec 23, 2010 7:20:38 PM org.apache.catalina.core.StandardContext filterStart
        SEVERE: Exception starting filter struts2
        java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FileDispatcher
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1645)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1491)
            at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:269)
            at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
            at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:115)
            at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4001)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4651)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
        Dec 23, 2010 7:20:38 PM org.apache.catalina.core.StandardContext start
        SEVERE: Error filterStart
        Dec 23, 2010 7:20:38 PM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/StrutsHelloWorld] startup failed due to previous errors

    Thanks.

    Read the article

  • Please help optimize a long-running query (left outer join with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is with the second subquery: it is not using the index I have created and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, but an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table create statements below. Can anyone help me optimize this?

    By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (the max function there is not used in the result set), is making sure that only one record from the right side of the join is included in the result set.

    My table create statements are:

        CREATE TABLE `tdata` (
          `BN` varchar(15) DEFAULT NULL,
          `4000` varchar(3) DEFAULT NULL,
          `5800` varchar(3) DEFAULT NULL,
          ....
          KEY `BN` (`BN`),
          KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
          `BN` varchar(15) DEFAULT NULL,
          `FPE` datetime DEFAULT NULL,
          `Activity Description` text,
          ....
          KEY `BN` (`BN`),
          KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!
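
    For comparison, assuming the intent of max(FPE) is to pick each bn's most recent pdata row, one common rewrite is a groupwise-maximum join, sketched below; the inner query can be resolved from the (BN, FPE, ...) index and the TEXT column is only read for the final matching rows. This is a sketch to EXPLAIN against the original, not a guaranteed win:

        SELECT p1.bn, p1.FPE, p1.`Activity Description`
        FROM `pdata` p1
        JOIN (
            SELECT bn, MAX(FPE) AS max_fpe
            FROM `pdata`
            GROUP BY bn
        ) latest ON latest.bn = p1.bn AND latest.max_fpe = p1.FPE;

    Note that if a bn has two rows sharing the same maximum FPE, both are returned, which differs from the original GROUP BY.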

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a ternary search tree for a while as the data structure behind an auto-complete drop-down combo box. This means that when the user types "fo", the drop-down combo box will display:

        foo
        food
        football

    The problem is that my current use of the ternary search tree is case sensitive. My implementation is as follows; it has been used in the real world for around a year, hence I consider it quite reliable.

    My Ternary Search Tree code

    However, I am looking for a case-insensitive ternary search tree, which means that when I type "fo", the drop-down combo box will show me:

        foO
        Food
        fooTBall

    Here are some key interfaces for the TST, where I hope the new case-insensitive TST may have a similar interface too:

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows. TSTSearchEngine is using TernarySearchTree as the core backbone.

        // There is stock named microsoft and MICROChip inside stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROCHIP. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");

    Read the article

  • How do you make a static sprite be a child of another sprite in cocos2D while using SpaceManager?

    - by JJBigThoughts
    I have two static (STATIC_MASS) SpaceManager sprites. One is a child of the other, by which I mean that one sort of builds up the other one; but although the child's image shows up in the right place, the child doesn't seem to exist in the Chipmunk physics engine, like I would expect.

    In my case, I have a backboard (rectangular sprite) and a hoop (a circular sprite). Since I might want to move the backboard, I'd like to attach the hoop to the backboard so that the hoop automatically moves right along with it. Here, we see a rotating backboard with attached hoop. It looks OK on the screen, but other objects only bounce off the backboard and pass right through the hoop (in a bad sense of the term). Why doesn't my child sprite seem to exist in the physics engine?

        // Add Backboard
        cpShape *shapeRect = [smgr addRectAt:cpvWinCenter mass:STATIC_MASS width:200 height:10 rotation:0.0f]; // We're upgrading this
        cpCCSprite * cccrsRect = [cpCCSprite spriteWithShape:shapeRect file:@"rect_200x10.png"];
        [self addChild:cccrsRect];

        // Spin the static backboard: http://stackoverflow.com/questions/2691589/how-do-you-make-a-sprite-rotate-in-cocos2d-while-using-spacemanager
        // Make static object update moves in chipmunk
        // Since Backboard is static, and since we're going to move it, it needs to know about
        // spacemanager so its position gets updated inside chipmunk.
        // Setting this would make the smgr recalculate all static shapes positions every step
        // cccrsRect.integrationDt = smgr.constantDt;
        // cccrsRect.spaceManager = smgr;
        // Alternative method: smgr.rehashStaticEveryStep = YES;
        smgr.rehashStaticEveryStep = YES;

        // Spin the backboard
        [cccrsRect runAction:[CCRepeatForever actionWithAction:
            [CCSequence actions:
                [CCRotateTo actionWithDuration:2 angle:180],
                [CCRotateTo actionWithDuration:2 angle:360],
                nil]
        ]];

        // Add the hoop
        cpShape *shapeHoop = [smgr addCircleAt:ccp(100,-45) mass:STATIC_MASS radius:50];
        cpCCSprite * cccrsHoop = [cpCCSprite spriteWithShape:shapeHoop file:@"hoop_100x100.png"];
        [cccrsRect addChild:cccrsHoop];

    This is only half working for me. Note: SpaceManager is a toolkit for working with cocos2d-iphone.

    Read the article

  • Finds in Rails 3 and ActiveRelation

    - by TheDelChop
    Guys, I'm trying to understand the new Arel engine in Rails 3 and I've got a question. I've got two models, User and Task:

        class User < ActiveRecord::Base
          has_many :tasks
        end

        class Task < ActiveRecord::Base
          belongs_to :user
        end

    Here are my routes to imply the relation:

        resources :users do
          resources :tasks
        end

    And here is my Tasks controller:

        class TasksController < ApplicationController
          before_filter :load_user

          def new
            @task = @user.tasks.new
          end

          private

          def load_user
            @user = User.where(:id => params[:user_id])
          end
        end

    The problem is, I get the following error when I try to invoke the new action:

        NoMethodError: undefined method `tasks' for #<ActiveRecord::Relation:0x3dc2488>

    I am sure my problem is with the new Arel engine; does anybody understand what I'm doing wrong?

    Sorry guys, here is my schema.rb file:

        ActiveRecord::Schema.define(:version => 20100525021007) do

          create_table "tasks", :force => true do |t|
            t.string   "name"
            t.integer  "estimated_time"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.integer  "user_id"
          end

          create_table "users", :force => true do |t|
            t.string   "email",                               :default => "", :null => false
            t.string   "encrypted_password",   :limit => 128, :default => "", :null => false
            t.string   "password_salt",                       :default => "", :null => false
            t.string   "reset_password_token"
            t.string   "remember_token"
            t.datetime "remember_created_at"
            t.integer  "sign_in_count",                       :default => 0
            t.datetime "current_sign_in_at"
            t.datetime "last_sign_in_at"
            t.string   "current_sign_in_ip"
            t.string   "last_sign_in_ip"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.string   "username"
          end

          add_index "users", ["email"], :name => "index_users_on_email", :unique => true
          add_index "users", ["reset_password_token"], :name => "index_users_on_reset_password_token", :unique => true
          add_index "users", ["username"], :name => "index_users_on_username", :unique => true

        end

    Thank you,
    Joe

    Read the article

  • MySQL: return value as 0 instead of NULL in the fetch result

    - by Karthik
    I have these two tables:

        --
        -- Table structure for table `t1`
        --
        CREATE TABLE `t1` (
          `pid` varchar(20) collate latin1_general_ci NOT NULL,
          `pname` varchar(20) collate latin1_general_ci NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

        --
        -- Dumping data for table `t1`
        --
        INSERT INTO `t1` VALUES ('p1', 'pro1');
        INSERT INTO `t1` VALUES ('p2', 'pro2');

        -- --------------------------------------------------------

        --
        -- Table structure for table `t2`
        --
        CREATE TABLE `t2` (
          `pid` varchar(20) collate latin1_general_ci NOT NULL,
          `year` int(6) NOT NULL,
          `price` int(3) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

        --
        -- Dumping data for table `t2`
        --
        INSERT INTO `t2` VALUES ('p1', 2009, 50);
        INSERT INTO `t2` VALUES ('p1', 2010, 60);
        INSERT INTO `t2` VALUES ('p3', 2007, 200);
        INSERT INTO `t2` VALUES ('p4', 2008, 501);

    My query is:

        SELECT * FROM `t1` LEFT JOIN `t2` ON t1.pid = t2.pid

    and it returns:

        pid  pname  pid   year  price
        p1   pro1   p1    2009  50
        p1   pro1   p1    2010  60
        p2   pro2   NULL  NULL  NULL

    My question is: I want to get the price value as 0 instead of NULL. How can I write the query so that the price comes back as 0? Thanks in advance for the help.
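
    For illustration, one common way to do this is to wrap the nullable column in COALESCE (or IFNULL); a minimal sketch against the tables above:

        SELECT t1.pid, t1.pname, t2.year, COALESCE(t2.price, 0) AS price
        FROM `t1`
        LEFT JOIN `t2` ON t1.pid = t2.pid;

    Rows from t1 with no match in t2 then show price 0 (year is still NULL unless it is wrapped the same way).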

    Read the article

  • Max Daily Budget exceeded and Billing Status "Changing Daily Budget"

    - by draftpik
    We've exceeded the Max Daily Budget for our app, but we can't increase the budget due to a serious flaw in Google's billing system. Google App Engine and Google Wallet do not have very capable support for multiple sign-in. As a result, when I went to change the budget, it used the wrong Google Wallet account (a different Google Account I was signed in as). I had to go back and try again, but now our GAE app shows the following status:

        Billing Status: Changing Daily Budget
        Your account has been locked while we process your budget changes. If you were redirected to Google Checkout but did not complete the process, your settings will remain unchanged. (You will be able to make changes to your budget settings again once the outstanding payment is processed.)

    Now I'm completely prevented from making any billing changes, our app is shut off (over quota), and there is NOTHING I can do to fix it. This is a seriously fundamental flaw in App Engine's billing system and Google Wallet integration.

    Has anyone run into this before? Is there a workaround anyone is aware of? Right now, our production app is completely down thanks to this issue. Any help you can offer would be greatly appreciated. If you're from Google and you might be able to help on the backend, our app id is "nhldraftpik". Thanks! Brian

    Read the article

  • Declaration, allocation and assignment of an array of pointers to function pointers

    - by manneorama
    Hello Stack Overflow! This is my first post, so please be gentle.

    I've been playing around with C from time to time in the past. Now I've gotten to the point where I've started a real project (a 2D graphics engine using SDL, but that's irrelevant to the question), to be able to say that I have some real C experience. Yesterday, while working on the event system, I ran into a problem which I couldn't solve. There's this typedef:

        // the void parameter is really an SDL_Event*,
        // but that is irrelevant for this question.
        typedef void (*event_callback)(void);

    which specifies the signature of a function to be called on engine events. I want to be able to support multiple event_callbacks, so an array of these callbacks would be an idea, but I do not want to limit the number of callbacks, so I need some sort of dynamic allocation. This is where the problem arose. My first attempt went like this:

        // initial size of callback vector
        static const int initial_vecsize = 32;

        // our event callback vector
        static event_callback* vec = 0;

        // size
        static unsigned int vecsize = 0;

        void register_event_callback(event_callback func)
        {
            if (!vec)
                __engine_allocate_vec(vec);
            vec[vecsize++] = func; // error here!
        }

        static void __engine_allocate_vec(engine_callback* vec)
        {
            vec = (engine_callback*) malloc(sizeof(engine_callback*) * initial_vecsize);
        }

    First of all, I have omitted some error checking as well as the code that reallocates the callback vector when the number of callbacks exceeds the vector size. However, when I run this code, the program crashes as described in the code. I'm guessing a segmentation fault, but I can't be sure since no output is given. I'm also guessing that the error comes from a somewhat flawed understanding of how to declare and allocate an array of pointers to function pointers. Please Stack Overflow, guide me.

    Read the article

  • JavaScript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey while offline and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems to be an obvious choice (we understand that Adobe Flash has offline storage, but we're not sure if that is the best way).

    I am aware of the Dojo Offline JavaScript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost the last year) because they think that HTML 5 is the way to go.

    Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, one which will (in the future) switch the underlying engine to HTML5 if the browser supports HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline; we'd prefer something that will be maintained for some time.

    What are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.

    Read the article

  • Offline Mapping API

    - by Aaron M
    Are there any services available that allow me to manipulate maps in an offline setting? I am working on a project that requires me to take a map and, based on features on the map, generate a game world. I have looked at a few of the APIs from different providers: Google, MS, etc. The APIs I looked at seem to be strictly about showing a user a map.

    I am looking for something that allows me to create a derivative of a map (the game world) that will never be seen by the public and is only used by the game engine. One caveat, however, is that I would like to be able to link the derivative created for use by the game engine with something I can show the user.

    As an example, think of a cross-country racing sim. Users cannot control the vehicles directly in this game; they can only control the car's setup, driver, etc. I create a game world from a map. The game world data (driver position, etc.) is overlaid onto a real map. A race might last several days. The only interaction users have with the real map is viewing their position on the map and where they are in relation to the others.

    I don't want to violate the terms of the API here. I read Google's API TOS, and it seems to me that creating the game world would violate their TOS. The features I really need are the following:

      - The ability to locate a specific place on the map by lat/long
      - The ability/rights to grab those maps and save them as an image file temporarily for processing
      - The ability/rights to store a game world that is based on the real map
      - The ability to show a user a map with an overlay (this is optional; I can use Google's API, or any other one that supports lat/long)

    Read the article

  • Getting the most recent post based on date

    - by camcim
    Hi guys, how do I go about displaying the most recent post when I have two tables, both containing a creation date column? This would be simple if all I had to do was get the most recent post based on the post's created_on value; however, if a post has replies I need to factor those into the equation. If a post has a more recent reply, I want to get the reply's created_on value but also the post's post_id and subject.

    The posts table structure:

        CREATE TABLE `posts` (
          `post_id` bigint(20) unsigned NOT NULL auto_increment,
          `cat_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `subject` tinytext NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `private_post` varchar(10) NOT NULL default 'PUBLIC',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`post_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

    The replies table structure:

        CREATE TABLE `replies` (
          `reply_id` bigint(20) unsigned NOT NULL auto_increment,
          `post_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `notify` varchar(5) NOT NULL default 'YES',
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`reply_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

    Here is my query so far. I've removed my attempt at extracting the dates.

        $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject ";
        $strQuery = $strQuery."FROM posts ,replies ";
        $strQuery = $strQuery."WHERE posts.post_id = replies.post_id ";
        $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";
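
    For illustration, a minimal sketch of one way to rank posts by their latest activity (the post's own created_on or its newest reply, whichever is later), assuming MySQL and ignoring the cat_id/status filters:

        SELECT p.post_id, p.subject,
               GREATEST(p.created_on, COALESCE(MAX(r.created_on), p.created_on)) AS last_activity
        FROM posts p
        LEFT JOIN replies r ON r.post_id = p.post_id
        GROUP BY p.post_id, p.subject, p.created_on
        ORDER BY last_activity DESC
        LIMIT 1;

    The LEFT JOIN keeps posts with no replies, and COALESCE falls back to the post's own date for them.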

    Read the article

  • Optimize a MySQL "count each duplicate" query

    - by Onema
    I have the following query that gets the city name, the city id, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name,
               (SELECT count(Country_CA.City)
                FROM Country_CA
                WHERE City LIKE currentCity) as counter
        FROM Country_CA
        LEFT JOIN globe_region
          ON globe_region.region_id = Country_CA.RegionID
         AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed on a dropdown list. There are a few towns in Canada, and in other countries, that have the same names. Therefore, if there is more than one town with the same name, the region name will be appended to the town name. Region names are found in the globe_region table.

    Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID   region_name        counter
        sheraton    2349269  British Columbia   1
        sherbrooke  2349270  Quebec             2
        sherbrooke  2349271  Nova Scotia        2
        shere       2349273  British Columbia   1
        sherridon   2349274  Manitoba           1
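
    For comparison, one common way to avoid running the correlated COUNT once per row is to compute all the counts in a single derived table and join to it; a sketch reusing the original join condition, and assuming city names contain no LIKE wildcard characters so COUNT per city gives the same numbers:

        SELECT c.City AS currentCity, c.CityID, g.region_name, dup.counter
        FROM Country_CA c
        JOIN (
            SELECT City, COUNT(*) AS counter
            FROM Country_CA
            GROUP BY City
        ) dup ON dup.City = c.City
        LEFT JOIN globe_region g
          ON g.region_id = c.RegionID
         AND g.country_code = c.CountryCode
        ORDER BY c.City;

    The derived table is built once, so each city's duplicate count becomes a lookup rather than a separate scan.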

    Read the article

  • ALTER dilemma: how to use ALTER to set the primary key and other attributes

    - by Rachel
    I have the following table in the database, AND I need to alter it to the schema mentioned below. Initially I was dropping the current database and creating a new one using CREATE, but I am not supposed to do that; I have to use ALTER, and I am not sure how I can use ALTER to add the primary key and other constraints. Any suggestions?

    Current:

        CREATE TABLE `details` (
          `KEY` varchar(255) NOT NULL,
          `ID` bigint(20) NOT NULL,
          `CODE` varchar(255) NOT NULL,
          `C_ID` bigint(20) NOT NULL,
          `C_CODE` varchar(64) NOT NULL,
          `CCODE` varchar(255) NOT NULL,
          `TCODE` varchar(255) NOT NULL,
          `LCODE` varchar(255) NOT NULL,
          `CAMCODE` varchar(255) NOT NULL,
          `OFCODE` varchar(255) NOT NULL,
          `OFNAME` varchar(255) NOT NULL,
          `PRIORITY` bigint(20) NOT NULL,
          `STDATE` datetime NOT NULL,
          `ENDATE` datetime NOT NULL,
          `INT` varchar(255) NOT NULL,
          `PHONE` varchar(255) NOT NULL,
          `TV` varchar(255) NOT NULL,
          `MTV` varchar(255) NOT NULL,
          `TYPE` varchar(255) NOT NULL,
          `CREATED` datetime NOT NULL,
          `MAIN` varchar(255) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Desired:

        CREATE TABLE `details` (
          `id` bigint(20) NOT NULL,
          `code` varchar(255) NOT NULL,
          `cid` bigint(20) NOT NULL,
          `ccode` varchar(64) NOT NULL,
          `c_code` varchar(255) NOT NULL,
          `tcode` varchar(255) NOT NULL,
          `lcode` varchar(255) NOT NULL,
          `camcode` varchar(255) NOT NULL,
          `ofcode` varchar(255) NOT NULL,
          `ofname` varchar(255) NOT NULL,
          `priority` bigint(20) NOT NULL,
          `stdate` datetime NOT NULL,
          `enddate` datetime NOT NULL,
          `list` varchar(255) NOT NULL,
          `name` varchar(255) NOT NULL,
          `created` datetime NOT NULL,
          `date` datetime NOT NULL,
          `ofshn` int(20) NOT NULL,
          `ofcl` int(20) NOT NULL,
          `ofr` int(20) NOT NULL,
          PRIMARY KEY (`code`,`ccode`,`list`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Thanks !!!
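
    For illustration, a partial sketch of the kind of single ALTER TABLE statement that can do this: CHANGE renames and retypes existing columns, ADD COLUMN and DROP COLUMN adjust the column list, and ADD PRIMARY KEY creates the composite key. The column mapping below is a guess covering only a few columns and needs to be checked against the full desired schema before running:

        ALTER TABLE `details`
          DROP COLUMN `KEY`,
          CHANGE COLUMN `ID` `id` bigint(20) NOT NULL,
          CHANGE COLUMN `CODE` `code` varchar(255) NOT NULL,
          CHANGE COLUMN `C_CODE` `ccode` varchar(64) NOT NULL,
          ADD COLUMN `list` varchar(255) NOT NULL,
          ADD COLUMN `ofshn` int(20) NOT NULL,
          ADD PRIMARY KEY (`code`,`ccode`,`list`);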

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every player's batting statistics. I created a view to join those two together; hence the masterplusbatting table.

        CREATE TABLE `Master` (
          `lahmanID` int(9) NOT NULL auto_increment,
          `playerID` varchar(10) NOT NULL default '',
          `nameFirst` varchar(50) default NULL,
          `nameLast` varchar(50) NOT NULL default '',
          PRIMARY KEY (`lahmanID`),
          KEY `playerID` (`playerID`),
        ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1;

        CREATE TABLE `Batting` (
          `playerID` varchar(9) NOT NULL default '',
          `yearID` smallint(4) unsigned NOT NULL default '0',
          `teamID` char(3) NOT NULL default '',
          `lgID` char(2) NOT NULL default '',
          `HR` smallint(3) unsigned default NULL,
          PRIMARY KEY (`playerID`,`yearID`,`stint`),
          KEY `playerID` (`playerID`),
          KEY `team` (`teamID`,`yearID`,`lgID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following:

        select f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearId and f.HR = x.HOMERS

    This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried:

        select f.yearID, truncate(f.yearid/10,0) as decade, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearId and f.HR = x.HOMERS
        group by decade

    You can see that I truncated the yearID in order to get 187, 188, 189, etc. instead of 1897, 1885, and so on. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR in 2004, but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
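
    For illustration, a sketch that follows the same pattern as the working per-year query, but computes the maximum per decade in the derived table and joins back on the decade; it assumes ties within a decade should all be returned:

        select f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select truncate(yearID/10, 0) as decade, max(HR) as HOMERS
            from masterplusbatting
            group by decade
        ) as x
        inner join masterplusbatting as f
            on truncate(f.yearID/10, 0) = x.decade and f.HR = x.HOMERS
        order by f.yearID;

    The GROUP BY in the original outer query collapses each decade to one arbitrary row, which is why it returns the wrong players.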

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation on every player's machine.

    And now there is a problem: the entire engine uses doubles everywhere, and floating-point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. The game is not grid-based at all, and it has a simple physics engine to move the space ships (ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution).

    So I have two options so far:

      - Say goodbye to the current code and restart from scratch using integers.
      - Make the game LAN only, where there is enough bandwidth to have 8 players with thousands of units and send the positions, orientation, etc. in (almost) every frame...

    So I'm looking for better options (or even tips on migrating the code to fixed point without messing everything up...).

    Read the article

  • Validate HAML from ActiveRecord: scope/controller/helpers for link_to etc?

    - by Chris Boyle
    I like HAML. So much, in fact, that in my first Rails app, which is the usual blog/CMS thing, I want to render the body of my Page model using HAML. So here is app/views/pages/_body.html.haml:

        .entry-content= Haml::Engine.new(body, :format => :html5).render

    ...and it works (yay, recursion). What I'd like to do is validate the HAML in the body when creating or updating a Page. I can almost do that, but I'm stuck on the scope argument to render. I have this in app/models/page.rb:

        validates_each :body do |record, attr, value|
          begin
            Haml::Engine.new(value, :format => :html5).render(record)
          rescue Exception => e
            record.errors.add attr, "line #{(e.respond_to? :line) && e.line || 'unknown'}: #{e.message}"
          end
        end

    You can see I'm passing record, which is a Page, but even that doesn't have a controller, and in particular doesn't have any helpers like link_to, so as soon as a Page uses any of that it's going to fail to validate even when it would actually render just fine.

    So I guess I need a controller as scope for this, but accessing that from here in the model (where the validator is) is a big MVC no-no, and as such I don't think Rails gives me a way to do it. (I mean, I suppose I could stash a controller in some singleton somewhere or something, but... excuse me while I throw up.)

    What's the least ugly way to properly validate HAML in an ActiveRecord validator?

    Read the article

  • DBTransactions between stateless calls using GUIDs

    - by Marty Trenouth
    I'm looking to add transactional support to my DB engine, abstracting transaction handling down to passing in GUIDs with the DB action command. The DB engine would run similar to:

        private static Database DB;
        public static Dictionary<Guid,DBTransaction> Transactions = new ...()

        public static void DoDBAction(string cmdstring, List<Parameter> parameters, Guid TransactionGuid)
        {
            DBCommand cmd = BuildCommand(cmdstring, parameters);
            if (Transactions.ContainsKey(TransactionGuid))
                cmd.Transaction = Transactions[TransactionGuid];
            DB.ExecuteScalar(cmd);
        }

        public static BuildCommand(string cmd, List<Parameter> parameters)
        {
            // Create DB command from EntLib Database and assign parameters
        }

        public static Guid BeginTransaction()
        {
            // creates new Transaction adding it to "Transactions" and opens a new connection
        }

        public static Guid Commit(Guid g)
        {
            // Commits Transaction and removes it from "Transactions" and closes connection
        }

        public static Guid Rollback(Guid g)
        {
            // Rolls back Transaction and removes it from "Transactions" and closes connection
        }

    The calling system would run similar to:

        Guid g;
        try
        {
            g = DBEngine.BeginTransaction();
            DBEngine.DoDBAction(cmdstring1, parameters, g);
            // do some other stuff
            DBEngine.DoDBAction(cmdstring2, parameters2, g);
            // sit here and wait for a response from other item
            DBEngine.DoDBAction(cmdstring3, parameters3, g);
            DBEngine.Commit(g);
        }
        catch (Exception)
        {
            DBEngine.Rollback(g);
        }

    Does this interfere with .NET connection pooling (other than a connection being accidentally left open)? Will EntLib keep the connection open until the commit or rollback?

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog entry, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that was not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem.

    From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and they in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.

    Read the article

  • MySQL partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but we couldn't find any performance improvement with partitioning. Queries are pruned. We are using MySQL 5.1.47 on RHEL 4.

    Table details:

      - UserUsage: has entries for user mobile number and data usage for each date; mobile number and Date form the primary key.
      - UserProfile: queries the previous table and stores a summary for each mobile number; mobile number is the primary key.

        CREATE TABLE `UserUsage` (
          `Msisdn` decimal(20,0) NOT NULL,
          `Date` date NOT NULL,
          .
          .
          PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
          `Msisdn` decimal(20,0) NOT NULL,
          .
          .
          PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated by a Perl program that selects from the first table ordered by date; the query is:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [Process data in perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, since partitioning is taking almost the same or more time compared to normal tables?

    Read the article
