Search Results

Search found 27083 results on 1084 pages for 'having'.

Page 454 of 1084

  • Is there a way communicate or measure levels of abstraction?

    - by hydroparadise
    I'll be the first to say that this question is a bit... out there. But here are a couple of questions I bear in mind: Is abstraction continuous or discrete? Is there a single unit of abstraction? I'm not sure those questions are truly answerable or even really make sense. My naive answer would be something along the lines of arbitrarily discrete, but not necessarily having a single unit of measure. Here's what I mean... Take a Black Labrador; one abstraction that could be made is that a Black Lab is a type of animal. [Animal]<--[Black Lab] A Black Lab is also a type of Dog. [Dog]<--[Black Lab] One way to establish a degree of abstraction is by comparing the two abstractions. We could say that [Animal] is more abstract than [Dog] with respect to a Black Lab. It just so happens [Animal] can also be used as an abstraction of [Dog], so we might end up with something like [Animal]<--[Dog]<--[Black Lab] With the model above, one might be inclined to say that there are two hops of abstraction to get from [Black Lab] to [Animal]. But you can't exactly tell somebody they need one level of abstraction and reasonably expect they will come up with [Dog], given they aren't explicitly shown the options above. If I needed to tell somebody in a single email that they needed an abstract class, without knowing what that abstract class is, is there a way to communicate a degree of abstraction such that they might end up on Dog instead of Animal? As a side note, what area of study might this type of analysis fall under?

    Read the article

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a job search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. The issue is that when the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, the default.php page performed a redirect to the JobBoard.php file when www.conservationjobboard.com/ was requested. The main problem resulted because the redirect was a temporary redirect, causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as 2 separate pages. This has since been corrected via the .htaccess file so that JobBoard.php is now the default file for the root directory, eliminating the need for the redirect. The problem is that search engines still show both URLs in search results (one including JobBoard.php and one that ends with /). Another potential problem is that some of my early backlinks are to conservationjobboard.com/JobBoard.php while the rest are to conservationjobboard.com. The 2 outstanding questions are as follows: 1. Is my domain still being penalized by search engines like Google for having duplicate homepage URLs? 2. Are all of the backlinks to my homepage being considered the same now, or is the total number of backlinks being split between the 2 different URLs? If you think there are still issues with how we have this set up, I was wondering if you could give me advice on what we should do differently. Thanks.
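
    As an illustration of the .htaccess approach mentioned above, a minimal sketch assuming Apache with mod_rewrite and mod_dir enabled; the exact rules used on the site are not shown in the question:

        # Serve JobBoard.php as the default document for the root
        DirectoryIndex JobBoard.php

        # Permanently redirect direct requests for /JobBoard.php to the root URL
        RewriteEngine On
        RewriteCond %{THE_REQUEST} \s/JobBoard\.php[\s?] [NC]
        RewriteRule ^JobBoard\.php$ http://www.conservationjobboard.com/ [R=301,L]

    A 301 (rather than the temporary redirect originally used) tells search engines to consolidate both URLs, and most of the link equity pointing at them, onto the root URL.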

    Read the article

  • Friday Fun: Wake Up the Box

    - by Mysticgeek
    Another Friday and it’s time to waste the rest of your Friday playing a fun flash game online. Today we take a look at a relaxing physics-based puzzle game called Wake Up the Box. Wake Up the Box The goal of this game is to wake up the box character by attaching parts to existing wood objects in each stage. You can start a new game or continue your progress from where you left off. At the beginning you get a tutorial showing what you need to do to wake the box. You get wood parts and can attach them to other wood pieces, but not to metal or brick. After successfully waking up Mr. Box, you can go to the next level, or restart a level at any time if you’re having problems figuring out the puzzle. Each level gets more difficult and the puzzles get more challenging. Wake Up the Box is a relaxing and challenging game that will let you have fun, not work on TPS reports, until the whistle blows. Play Wake Up the Box at FreeWebArcade

    Read the article

  • SQL SERVER – Difference between DATABASEPROPERTY and DATABASEPROPERTYEX

    - by pinaldave
    Earlier I asked a simple question on Facebook regarding the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server. You can view the original conversation over there. The conversation immediately became very interesting and lots of healthy discussion happened on the Facebook page. The best part of having a conversation on a Facebook page is the comfort it provides and the leaner commenting interface. Question Question from SQLAuthority.com: What is the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server? Answer Answer from Rakesh Kumar: DATABASEPROPERTY is supported for backward compatibility but does not provide information about the properties added in this release. Also, many properties supported by DATABASEPROPERTY have been replaced by new properties in DATABASEPROPERTYEX. - source (MSDN). Answer from Alphonso Jones: The only real difference I can see is, one, the number of properties contained, and the other is that EX returns a sql_variant while DATABASEPROPERTY returns only int. Answer from Ambati Venkatasiva: Both are system metadata functions. DATABASEPROPERTYEX returns the current setting of the specified database option. DATABASEPROPERTYEX returns a sql_variant value and DATABASEPROPERTY returns an integer value. Answer from Rama Sankar Molleti: Here is the best example about DATABASEPROPERTYEX: SELECT DATABASEPROPERTYEX('dbname', 'Collation') Result SQL_1xCompat_CP850_CI_AS Whereas with DATABASEPROPERTY it returns nothing, as the return type for this is integer. The sql_variant datatype stores values of various SQL Server supported datatypes except text, ntext, image and timestamp. Answer from Alok Seth: SELECT DATABASEPROPERTYEX('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTYEX GO --Result - ONLINE SELECT DATABASEPROPERTY('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTY GO --Result - NULL Summary Use DATABASEPROPERTYEX, as it is the only function that will be supported in future versions, and it returns the status of various database properties that do not exist with DATABASEPROPERTY. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • GWB | 30 in 60 Update – Enrique is almost there!

    - by Staff of Geeks
    We are very close to having our first blogger to reach 30 posts, Enrique Lima.  Stuart Brierley is over the hump with 16 posts and Dave Campbell and Eric Nelson are definitely in the running.  If you don’t know what I am talking about, we are running a contest for our bloggers.  Anyone who blogs on Geekswithblogs who creates 30 posts from May 15th to July 13th will receive a custom Geekswithblogs.net t-shirt with their URL on the back.  This could be their Geekswithblogs.net address or their custom domain.  It is definitely not too late to get started and with TechEd or WWDC right around the corner, there is definitely a lot to talk about. Current Standings: Enrique Lima (28 posts) - http://geekswithblogs.net/enriquelima StuartBrierley (16 posts) - http://geekswithblogs.net/StuartBrierley Dave Campbell (12 posts) - http://geekswithblogs.net/WynApseTechnicalMusings Eric Nelson (10 posts) - http://geekswithblogs.net/iupdateable Christopher House (10 posts) - http://geekswithblogs.net/13DaysaWeek mbcrump (7 posts) - http://geekswithblogs.net/mbcrump Chris Williams (6 posts) - http://geekswithblogs.net/cwilliams Michael Stephenson (5 posts) - http://geekswithblogs.net/michaelstephenson Steve Michelotti (5 posts) - http://geekswithblogs.net/michelotti Liam McLennan (5 posts) - http://geekswithblogs.net/liammclennan Follow Us On Twitter: @StaffOfGeeks Technorati Tags: Geekswithblogs,30 in 60,Standings

    Read the article

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed Ubuntu 12.10 on my PC. I am running the NTP service with a GPS receiver as the NTP server. I found that when we start the NTP service with the ntp start command, the PC is able to sync with the GPS, as I get a '*' symbol before the GPS IP when I run ntpq -p. This remains good for some time, and then the '*' symbol is removed, which means the PC is no longer synchronized to that server. Now, running ntpq -p shows that all parameters look OK, but because the '*' is gone, the offset slowly keeps increasing:

             remote          refid   st t when poll reach  delay   offset  jitter
        ==============================================================================
        *192.168.100.33    .GPS.      1 u    7   16     1  2.333   23.799   0.808
        *192.168.100.33    .GPS.      1 u   14   16     3  2.333   23.799   0.879
        *192.168.100.33    .GPS.      1 u   11   16     7  2.333   23.799   1.500
        *192.168.100.33    .GPS.      1 u    8   16    17  2.333   23.799   2.177

    Below is the ntpq output after sync with the GPS is lost:

             remote          refid   st t when poll reach  delay   offset  jitter
        ==============================================================================
         192.168.100.33    .GPS.      1 u    1   16   377  2.404  1169.94   1.735
         192.168.100.33    .GPS.      1 u    -   16   377  2.513  1171.80   0.898
         192.168.100.33    .GPS.      1 u   15   16   377  2.513  1171.80   0.898

    Even though the GPS is still available, the PC never re-synchronizes itself to it later on. I have to restart the NTP service, and then the PC synchronizes to the GPS and the '*' symbol appears again.
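
    For reference, a minimal /etc/ntp.conf sketch that would produce the 16-second poll interval shown above; only the server address comes from the question, everything else is assumed:

        # GPS-disciplined NTP server on the local network, polled every 2^4 = 16 seconds
        server 192.168.100.33 iburst minpoll 4 maxpoll 4

        # Record the local clock's frequency error between ntpd restarts
        driftfile /var/lib/ntp/ntp.drift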

    Read the article

  • links for 2011-03-02

    - by Bob Rhubart
    Oracle Technology Network Architect Day: Denver. Registration is now open. Sessions will cover IT optimization and consolidation, cloud computing, the evolving role of enterprise IT, and more. (tags: oracle otn entarch event denver)
    SOA Suite Integration: Part 2: A basic BPEL process (The Shorten Spot). The latest post in Anthony Shorten's series about SOA Suite integration with Oracle Utilities Application Framework. (tags: oracle otn soa bpel soasuite)
    ADF: How to create web service based ADF pages. The first in a promised series of three posts on the topic by Marianne Horsch. (tags: oracle soa webservices adf)
    David Butler: MDM Poised for Growth (Oracle Master Data Management). David says: "Businesses are talking about the need to fix master data before they can successfully move forward on SOA initiatives. And the growing demands for compliance continue to be a major driver." (tags: oracle otn mdm)
    Cloud governance is about more than security | The Pervasive Data Center - CNET News. Legal and regulatory procedures, transparency, service levels, indemnification, and more are all part of a broader governance landscape that requires IT to work closely with business users. Read this blog post by Gordon Haff on The Pervasive Data Center. (tags: ping.fm)
    Senthilkumar Rajendran's Blog: Horizontal Scaling OBIEE 11g (tags: ping.fm)
    InfoQ: Searching Without Objectives. Kenneth O. Stanley argues that innovation is stifled when we strictly pursue a lofty goal, and that we progress more when we are inclined toward discovery rather than chasing an objective. (tags: ping.fm)
    InfoQ: Brownfield Software - Industrial Waste or Business Fertilizer? Josh Graham addresses 10 myths related to working on legacy software, attempting to prove that one can make good use of legacy code without having to rewrite the entire thing. (tags: ping.fm)

    Read the article

  • Oracle Policy Automation at OpenWorld 2012

    - by jeffrey.waterman
    Oracle Policy Automation (OPA) at OpenWorld 2012. Oracle Policy Automation (OPA), the breakthrough policy automation platform, enables organizations to deliver: consistent policy-based decision making throughout the organization across all channels; agile response to policy changes and analysis; transparency and auditability. This year there will be: 8 sessions (a combination of customer panels and product strategy sessions) and a standalone OPA demo pod (Moscone Center West, W044). Key highlights: hear Davin Fifield discuss the product roadmap for OPA (including OPA + RightNow); he will also be joined by Sean Haynes from Stewart Title, who will share the success they are having with OPA. OPA Public Sector Customer Panel: this year the OPA panel consists of some of OPA’s most successful and largest customers; speakers include Department Works & Pension (UK), Toll – Department of Defence (AU), and Municipality of Sao Paulo (Brazil).

    SCHEDULE HIGHLIGHTS

    Monday October 1, 2012
    CON9655, 12:15 pm – 1:15 pm PST (Pacific Standard Time): Oracle Policy Automation Roadmap: Supercharging the Customer Experience. Davin Fifield, VP OPA Development, Oracle; Sean Haynes, VP, Stewart Title. Westin San Francisco - Metropolitan I
    CON9700, 12:15 pm – 1:15 pm PST (Pacific Standard Time): Siebel CRM Overview, Strategy, and Roadmap. George Jacob, Group Vice President, CRM Applications / XML, Oracle; Uma Welingkar, Director, Product Management, Oracle. Moscone West - 2009

    Wednesday October 3, 2012
    CON8840, 5:00 pm – 6:00 pm PST (Pacific Standard Time): Achieving Agility Through Closed-Loop Policy Automation (customer panel). Facilitator: Surend Dayal, Oracle. Panelists: Dept. Works & Pension (UK) – Haydn Leary; Municipality of Sao Paulo (Brazil) – Luiz Cesar Michielin Kiel; Toll (AU) – Nigel Maloney. Westin San Francisco - Franciscan I
    CON8952, 5:00 pm – 6:00 pm PST (Pacific Standard Time): BPM: An Extension Strategy for Enterprise Applications. Harish Gaur, Oracle; Srikant Subramaniam, Oracle. Moscone West - 3003

    Thursday October 4, 2012
    CON11515, 2:15 pm – 3:15 pm PST (Pacific Standard Time): Oracle Policy Automation + RightNow: Agile self-service and agent experiences. Davin Fifield, VP OPA Development, Oracle. Westin San Francisco - City

    Read the article

  • Supporting HR Transformation with HelpDesk for Human Resources

    - by Robert Story
    Upcoming Webcast
    Title: Supporting HR Transformation with HelpDesk for Human Resources
    Date: May 13, 2010
    Time: 9:00 am PDT, 10:00 am MDT, 17:00 GMT
    Product Family: PeopleSoft HCM & EBS HRMS

    Summary: HR transformation is a strategic initiative at many companies where world-class employee HR service delivery and a reduction of HR operating costs are top priorities. Having a centralized service delivery model and providing employees with tools to better help themselves can be key to this initiative. This session shares how Oracle's PeopleSoft HelpDesk for Human Resources provides the technology foundation and best practices for this transformation. HelpDesk for Human Resources now integrates with both PeopleSoft HCM and E-Business Suite HRMS. This one-hour session is recommended for technical and functional users who want to understand what is new in PeopleSoft HelpDesk for Human Resources 9.1 and how it benefits both PeopleSoft HCM and E-Business Suite HRMS customers. Topics will include: understanding the latest features and functionality, gaining insight into future product direction, and planning for implementation or upgrade of this module in your current system. A short live demonstration (if applicable) and a question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA, and have started to delve into the use of the content pipeline. I have already figured out that tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue. I am trying to handle the reading of all XML content through an XMLHandler object that uses the intermediate deserializer. Any time reading of such data is required, the appropriate method within this object would be called. So, as a simple example, something like this would occur when a character levels up:

        // Read the proper .xml file and send back the spell for the character to learn
        public Spell LevelUp(int levelAchieved)
        {
            return XMLHandler.FindSkillsForLevel(levelAchieved);
        }

    However, the XMLHandler is having issues even being created. I cannot get it to use the namespace Microsoft.Xna.Framework.Content.Pipeline. I get an error on my using statement in the XMLHandler class: using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate; The error is a typical reference error: "Type or namespace name 'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)". I THINK this is because this namespace is already referenced in my game's content. I would really have no issue placing this object within my game's content project (since that is ALL it deals with anyway), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the IntermediateSerializer in my main project's logic, but, as far as I can make out, I can't safely reference the associated namespace for it outside of the game's content. I'm not a terribly well-versed programmer, so I may just be missing some big detail I've never learned here. How can I make this object accessible to all projects within the solution? I will gladly post more information if needed!
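
    For illustration, a minimal sketch of the runtime side of this kind of setup, assuming the data class lives in a shared game library and the .xml files are built by the content project; the SpellData type and asset path are hypothetical, not taken from the question:

        using Microsoft.Xna.Framework.Content;

        // Plain data class, defined in a shared game library so both the game
        // project and the content project can see it
        public class SpellData
        {
            public string Name;
            public int RequiredLevel;
            public int ManaCost;
        }

        public static class SpellLoader
        {
            // At runtime the pipeline has already turned Spells.xml into Spells.xnb,
            // so no reference to Microsoft.Xna.Framework.Content.Pipeline is needed here.
            public static SpellData[] LoadAll(ContentManager content)
            {
                return content.Load<SpellData[]>("Data/Spells");
            }
        }

    IntermediateSerializer lives in the Microsoft.Xna.Framework.Content.Pipeline assembly, which targets the full desktop .NET Framework and is normally referenced only from content projects or content pipeline extension libraries, which is one common reason the namespace is not available from a game project.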

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server would run MySQL, and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online and web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the client sends a date corresponding to the last time the app was updated, and the response would be JSON filled with all new objects, updated objects, and the IDs of deleted ones, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "delete" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
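
    A minimal MySQL sketch of the scheme being described, using a hypothetical items table; the column and table names are illustrative and not part of the question:

        -- Track when each row last changed, and soft-delete instead of removing rows
        ALTER TABLE items
            ADD COLUMN updated_at TIMESTAMP NOT NULL
                DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
            ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0;

        -- Delta query behind the API endpoint: everything that changed since the
        -- client's last sync date (deleted rows are returned so the client can purge them)
        SELECT id, name, is_deleted, updated_at
        FROM items
        WHERE updated_at > ?;   -- the "last updated" date sent by the client app

    Rows are only ever flagged with is_deleted = 1; a periodic job can then purge rows whose updated_at is older than the oldest client version still expected to sync.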

    Read the article

  • What causes bad performance in consumer apps?

    - by Crashworks
    My Comcast DVR takes at least three seconds to respond to every remote control keypress, making the simple task of watching television into a frustrating button-mashing experience. My iPhone takes at least fifteen seconds to display text messages and crashes ¼ of the times I try to bring up the iPad app; simply receiving and reading an email often takes well over a minute. Even the navcom in my car has mushy and unresponsive controls, often swallowing successive inputs if I make them less than a few seconds apart. These are all fixed-hardware end-consumer appliances for which usability should be paramount, and yet they all fail at basic responsiveness and latency. Their software is just too slow. What's behind this? Is it a technical problem, or a social one? Who or what is responsible? Is it because these were all written in managed, garbage-collected languages rather than native code? Is it the individual programmers who wrote the software for these devices? In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account? Is it the guy who goes around repeating "optimization is the root of all evil," did he lead them astray? Was it a mentality of "oh it's just an additional 100ms" each time until all those milliseconds add up to minutes? Is it my fault, for having bought these products in the first place? This is a subjective question, with no single answer, but I'm often frustrated to see so many answers here saying "oh, don't worry about code speed, performance doesn't matter" when clearly at some point it does matter for the end-user who gets stuck with a slow, unresponsive, awful experience. So, at what point did things go wrong for these products? What can we as programmers do to avoid inflicting this pain on our own customers?

    Read the article

  • Lost in Translation

    - by antony.reynolds
    Using the Correct Character Set for the SOA Suite Database. A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel. Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet. Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing. So I knew SOA Suite worked fine with non-English and non-Latin character sets, which is why I was suspicious recently when a customer was seeing data corruption of non-Latin characters. On investigation it turned out that the data was received correctly by the SOA Suite, but was then corrupted after being stored in the database. A little investigation revealed that the customer was using the default database character set, “WE8ISO8859P1”, which, as the name suggests, only supports West European 8-bit characters. What had happened was that when the customer installed his SOA repository he had ignored the message that his database was not using AL32UTF8 as the character set. After changing the character set on his database he no longer saw the corruption of non-English character data. So the moral of this story is: always install the SOA Repository into an AL32UTF8 database. This is true for both SOA Suite 10g and 11g. Ignore it at your peril, because you never know when you will need to support Hebrew, Japanese, or another multi-byte character set.
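
    For reference, a quick way to check what character set a database is currently using (a standard data dictionary view, nothing specific to the customer's system):

        -- Show the database character set before installing the SOA repository
        SELECT value
        FROM   nls_database_parameters
        WHERE  parameter = 'NLS_CHARACTERSET';
        -- Expect AL32UTF8; WE8ISO8859P1 here means multi-byte data will be corrupted on storage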

    Read the article

  • SQLAuthority News – SQL Server Interview Questions And Answers Book Summary

    - by pinaldave
    Today we use computers for various activities, motor vehicles for traveling to places, and mobile phones for conversation. How many of us can claim the invention of the microprocessor, the basic wheel, or the telegraph? Similarly, this book was not written overnight. The journey of this book goes back many years, with many individuals to be thanked. To begin with, we want to thank all those interviewers who reject interviewees by saying they need to know 'the key things', regardless of their having high grades in class. The whole concept of interview questions and answers revolves around knowing those 'key things'. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us. Though each of these chapters is geared towards convenience, we highly recommend reading every section irrespective of the role you might be performing, since each section has some interesting trivia about working with SQL Server. In the industry the role of the accidental DBA (especially with SQL Server) is very common. Hence, if you have performed the role of DBA for a short stint and want to brush up your fundamentals, then the upcoming sections will be a great review.

    Table of Contents:
    Database Concepts with SQL Server
    Common Generic Questions & Answers
    Common Developer Questions
    Common Tricky Questions
    Miscellaneous Questions on SQL Server 2008
    DBA Skills Related Questions
    Data Warehousing Interview Questions & Answers
    General Best Practices

    [Amazon] | [Flipkart] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • C# – Encrypting a variable using MD5

    - by Guilherme Cardoso
    When we are dealing with data as sensitive and important as a password, it is not appropriate to store it in the database without encrypting it, for security reasons. For this we use MD5. MD5 is an algorithm that allows us to encrypt a string, but doesn't let us decrypt it (I'm not sure whether that is possible by now, but at least I know there are many databases of already-computed hashes). The method below will return a variable encrypted with MD5. For example: md5_encrypt("pontonetpt.com"); The result will be: 34efe85d338075834ad41803eb08c0df This way we save the encrypted value into the database, and then to make comparisons we use the same method and compare the result with the records kept.

        public string md5_encrypt(string md5)
        {
            // Hash the UTF-8 bytes of the input string with MD5
            System.Security.Cryptography.MD5CryptoServiceProvider x = new System.Security.Cryptography.MD5CryptoServiceProvider();
            byte[] bs = System.Text.Encoding.UTF8.GetBytes(md5);
            bs = x.ComputeHash(bs);

            // Build the lowercase hexadecimal representation of the 16-byte hash
            System.Text.StringBuilder s = new System.Text.StringBuilder();
            foreach (byte b in bs)
            {
                s.Append(b.ToString("x2").ToLower());
            }
            string password = s.ToString();
            return password;
        }
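
    A short usage sketch of the comparison step described above; storedHash stands in for whatever was read from the database and is not part of the original post:

        // At login time: hash the supplied value and compare it with the stored hash
        string inputHash = md5_encrypt(passwordFromLoginForm);
        bool isValid = (inputHash == storedHash);   // storedHash was read from the database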

    Read the article

  • Keyboard input system handling

    - by The Communist Duck
    Note: I have to poll, rather than use callbacks, because of API limitations (SFML). I also apologize for the lack of a 'decent' title. I think I have two questions here: how to register the input I'm receiving, and what to do with it. Handling input: I'm talking about after you've registered that the 'A' key has been pressed, for example, and how to proceed from there. I've seen an array for the whole keyboard, something like: bool keyboard[256]; // and each input loop checks the state of every key on the keyboard. But this seems inefficient. Not only are you coupling the key 'A' to 'player moving left', for example, but it checks every key, 30-60 times a second. I then tried another system which just looked for the keys it wanted. std::map<unsigned char, Key> keyMap; // Key stores the keycode, and whether it's been pressed. Then I declare a load of const unsigned chars called 'Quit' or 'PlayerLeft'. input->BindKey(Keys::PlayerLeft, KeyCode::A); // so now you can check for PlayerLeft, rather than for A. However, the problem with this is that I cannot now type a name, for example, without having to bind every single key. Then I have the second problem, which I cannot really think of a good solution for: sending input. I now know that the A key has been pressed, or that playerLeft is true. But where do I go from here? I thought about just checking if (input->IsKeyDown(Keys::PlayerLeft)) { player.MoveLeft(); } This couples the input tightly to the entities, and I find it rather messy. I'd prefer the player to handle its own movement when it gets updated. I thought some kind of event system could work, but I do not know how to go about it. (I heard signals and slots were good for this kind of work, but they're apparently very slow and I cannot see how they'd fit.) Thanks.
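
    As an illustration of the event-style approach being asked about, a minimal C++ sketch that decouples key polling from the entities by mapping keys to actions and actions to callbacks; all of the names here are made up, not taken from the question:

        #include <functional>
        #include <map>
        #include <vector>

        enum class Action { PlayerLeft, PlayerRight, Quit };

        class InputMap {
        public:
            // Bind a raw key code to a logical action
            void BindKey(unsigned char key, Action action) { bindings[key] = action; }

            // Entities register interest in an action instead of asking about keys
            void Subscribe(Action action, std::function<void()> callback) {
                listeners[action].push_back(std::move(callback));
            }

            // Called once per frame with the keys the polling layer reported as pressed
            void Dispatch(const std::vector<unsigned char>& pressedKeys) {
                for (unsigned char key : pressedKeys) {
                    auto bound = bindings.find(key);
                    if (bound == bindings.end()) continue;
                    for (auto& callback : listeners[bound->second]) callback();
                }
            }

        private:
            std::map<unsigned char, Action> bindings;
            std::map<Action, std::vector<std::function<void()>>> listeners;
        };

        // Usage: the player subscribes to the action it cares about
        //   input.BindKey('A', Action::PlayerLeft);
        //   input.Subscribe(Action::PlayerLeft, [&player] { player.MoveLeft(); });

    A name-entry screen can then read the raw pressed keys directly and skip the bindings entirely, so typing does not require every key to be bound to an action.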

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query. Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. The details about whether or not a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. (A reconstructed example of the kind of query involved appears below.) The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like). Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators?
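
The recursive query itself was shown only as a screenshot in the original post, so here is a reconstructed example of the kind of recursive CTE being described, using the HumanResources.Employee and ManagerID columns mentioned above; the exact query in the post may differ:

    WITH EmployeeTree AS
    (
        -- Anchor: the top of the hierarchy
        SELECT EmployeeID, ManagerID, 0 AS TreeLevel
        FROM HumanResources.Employee
        WHERE ManagerID IS NULL

        UNION ALL

        -- Recursive part: rows pulled back off the spool feed the next round
        SELECT e.EmployeeID, e.ManagerID, t.TreeLevel + 1
        FROM HumanResources.Employee AS e
        JOIN EmployeeTree AS t ON e.ManagerID = t.EmployeeID
    )
    SELECT EmployeeID, ManagerID, TreeLevel
    FROM EmployeeTree;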
At some point I’ll write a summary post – once I have you should find a comment below pointing at it. @rob_farley

    Read the article

  • Monster's AI in an Action-RPG

    - by Andrea Tucci
    I'm developing an action RPG with some university colleagues. We've gotten to the monsters' AI design, and we would like to implement a sort of "utility-based AI", so we have a "thinker" that assigns a numeric value to each of the monster's possible decisions; we choose the highest (or the most appropriate, depending on the monster's IQ) and put it in the monster's collection of decisions (like a goal-driven design pattern). One solution we found is to write a mathematical formula for each decision, with all the important parameters for evaluation (so for a spell decision we might have MP, distance from the player, the player's HP, etc.). This formula also has coefficients representing some of the monster's behaviour (this way we can alter formulas by changing coefficients). I've also read about how "fuzzy logic" works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to make our AI simpler, as in creating evaluations with fuzzy rules such as IF player_far AND mp_high AND hp_high THEN very_desirable (for a spell with a high casting time that consumes a lot of MP) and then 'defuzzifying' it. This way it is also simple to create a monster behaviour by writing ad-hoc rules for every monster IQ category. But is it appropriate to use fuzzy logic in a game with many parameters, like an RPG? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices?
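
    As a sketch of how the two ideas might be merged, here is a small C++ example where the fuzzy rule IF player_far AND mp_high AND hp_high is evaluated with min() as the AND operator and the resulting truth value is used directly as the decision's utility score; the membership functions and thresholds are invented for the example:

        #include <algorithm>

        // Simple memberships in [0, 1]; the shapes here are arbitrary examples
        float PlayerFar(float distance)      { return std::min(1.0f, std::max(0.0f, (distance - 5.0f) / 10.0f)); }
        float MpHigh(float mp, float maxMp)  { return mp / maxMp; }
        float HpHigh(float hp, float maxHp)  { return hp / maxHp; }

        // IF player_far AND mp_high AND hp_high THEN desirable
        // Fuzzy AND as the minimum of the memberships; the value doubles as the
        // utility score the "thinker" compares against other decisions.
        float SpellDesirability(float distance, float mp, float maxMp, float hp, float maxHp)
        {
            float truth = std::min({ PlayerFar(distance), MpHigh(mp, maxMp), HpHigh(hp, maxHp) });
            return truth;   // scale or weight per monster IQ category if needed
        }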

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge Workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, having between 5 and 20 years of Oracle experience, into 5 groups - and each group has to complete its special challenge, such as moving a database from 10.2 to Exadata V2 or upgrading from single-instance 10.2 to Real Application Clusters 11.2 with the new Grid Infrastructure. We actually start this training with a few presentation pieces about upgrades, Real Application Testing and Golden Gate. And one topic I always point out: keep your database tidy before the upgrade!!! Clean up all invalid objects - especially in the SYS and SYSTEM schemas - BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Always do this BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well. I just came across this today, as one group had ~240 invalid objects in their database - and because the original system was still there, we could prove that the objects had been invalid before. Good job, BUT ... :-)
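
    For reference, the standard way to find and recompile the invalid objects mentioned above (documented Oracle tooling, nothing specific to the workshop databases):

        -- List invalid objects, paying special attention to SYS and SYSTEM
        SELECT owner, object_name, object_type
        FROM   dba_objects
        WHERE  status = 'INVALID'
        ORDER  BY owner, object_type, object_name;

        -- Recompile everything that can be recompiled (run as SYSDBA)
        -- SQL> @?/rdbms/admin/utlrp.sql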

    Read the article

  • Unable to decide whether to continue or quit and start a new career

    - by latif mohammad khan
    I am working in a small company. I joined as a Java developer, but they told me that, as I am a fresher, I should work as an Android developer. I then asked one of my lecturers about Android development. He replied: why go for mobile development, which is not standard (Nokia's Symbian lost out, and mobile OSes change quickly)? It's better to get a job as a Java developer. Listening to his words, I was a bit dissatisfied with my job and thought of leaving and searching for a Java developer position. But I didn't have much confidence in finding a job at that time (because I only got this job a year and a half after I passed out of college), so I decided to work as an Android developer (learning a new technology and practising Java at home). On the first day they introduced me to the team leads and assigned me under one of them. After a few days I came to know that my team lead has only 1 year of experience. He joined here as a fresher, did some R&D, and is now my team lead. If I ask him any doubt, he just searches the internet and replies to my question (sometimes he explains it wrongly), and I correct it myself by searching the net. In my company they don't use the latest technologies and they don't follow any design patterns, because they don't know them. They give me very little pay and a lot of work. I don't care about the pay because I am a fresher, but I do care that the work is not useful (I feel like that because they don't use the latest technologies, no design patterns, no proper team lead). What I hoped was to learn from the company and the team leads how a project is done. But here I feel like I am wasting my time. If I go for another interview in the future, they will ask about the latest technologies. Now I don't know what to do: whether to quit the job and learn another technology which has good demand, like SAP ABAP, or to continue here. Please provide me advice. Thanks.

    Read the article

  • AABB Sweeping, algorithm to solve "stacking box" problem

    - by Ivo Wetzel
    I'm currently working on a simple AABB collision system, and after some fiddling, the sweeping of a single box vs. another and the calculation of the response velocity needed to push them apart work flawlessly. Now on to the new problem: imagine I have a stack of boxes which are falling towards a ground box which isn't moving. Each of these boxes has a vertical velocity for the "gravity" value; let's say this velocity is 5. The result is that they all fall into each other. The reason is obvious: since all the boxes have a downward velocity of 5, there are no collisions when calculating the relative velocity between the boxes during sweeping. Note: the red ground box here is static (always 0 velocity, can utilize spatial partitioning), and all dynamic-static collisions are resolved first, which is why the boxes stop correctly at the ground box. So, this seems to be simply an issue with the order in which the boxes are swept against each other. I imagine that sorting the boxes based on their x and y velocities and then sweeping these groups correctly against each other may resolve this issue, so I'm looking for algorithms / examples of how to implement such a system. The code can be found here: https://github.com/BonsaiDen/aabb The two files of interest are box/Dynamic.lua and box/Manager.lua.
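
    One common variation of the sorting idea mentioned above, sketched in Lua against hypothetical helper names (sweepAndResolve and the field names are not taken from the linked project): resolve the dynamic boxes in order of their distance from the ground, so each box is swept against neighbours whose positions have already been corrected this frame and the relative velocity is no longer zero.

        -- Resolve dynamic/dynamic collisions from the bottom of the stack upwards
        function resolveDynamicPairs(boxes, dt)
            -- sort so the box nearest the ground is handled first
            -- (with Love2D's y axis pointing down, that's the largest y)
            table.sort(boxes, function(a, b) return a.y > b.y end)

            for i = 1, #boxes do
                for j = i + 1, #boxes do
                    -- sweep the upper box against the already-resolved lower one;
                    -- the lower box was corrected this frame, so the relative
                    -- velocity between the pair is no longer zero
                    sweepAndResolve(boxes[j], boxes[i], dt)
                end
            end
        end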

    Read the article

  • Red Gate and the Community

    - by RedAndTheCommunity
    I was lucky enough to join the Communities team in April 2011, having worked in the equally awesome (but more number-crunchy), Finance team at Red Gate for about four years before that. Being totally passionate about Red Gate, and easily excitable, it seems like the perfect place to be. Not only do I get to talk to people who love Red Gate every day, I get to think up new ways to make them love us even more. Red Gate sponsored 178 SQL Server and .NET events and user group meetings in 2011. They ranged from SQL Saturdays and Code Camps to 10 person user group meetings, from California to Krakow. We've given away cash, software, Kindles, and of course swag. The Marketing Cupboard is like a wonderland of Red Gate goodies; it is guarded day and night to make sure the greedy Red Gaters don't pilfer the treasure inside. There are Red Gate yo-yos, books, pens, ice scrapers and, over the Holidays, there were some special bears. We had to double the patrols guarding the cupboard to protect them. You can see why: Over the Holidays, we gave funding and special Holiday swag (including the adorable bears), to 10 lucky user groups, who held Christmas parties - doing everything from theatre trips to going to shooting ranges. What next? So, what about this year? In 2012 our main aim is to be out there meeting more of you. So get ready to see an army of geeks in red t-shirts at your next event! We also want to do more fun things like our Christmas party giveaway. What cool ideas do you have for sponsorship in 2012? An Easter Egg hunt with SQL server clues? A coding competition? A duelling contest with a license of SQL Toolbelt for the winner? Let me know.

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September and the production-ready, generally available ("GA") product should be available in early 2013. While the message around 5.6 has been focused mainly on mass appeal and advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets that are designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down into a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:

    Subquery Optimizations
    Using semi-JOINs and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions.

        select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
        from customer, orders, lineitem
        where o_orderkey in (
                select l_orderkey
                from lineitem
                group by l_orderkey
                having sum(l_quantity) > 313
          )
          and c_custkey = o_custkey
          and o_orderkey = l_orderkey
        group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
        order by o_totalprice desc, o_orderdate
        LIMIT 100;

    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:

        SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);

    And even more importantly, subqueries embedded in packaged applications no longer need to be re-written into joins. This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.

    Online DDL Operations
    Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes vs days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, and most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:

        CREATE INDEX
        DROP INDEX
        Change AUTO_INCREMENT value for a column
        ADD/DROP FOREIGN KEY
        Rename COLUMN
        Change ROW_FORMAT, KEY_BLOCK_SIZE for a table
        Change COLUMN NULL, NOT_NULL
        Add, drop, reorder COLUMN

    Again, the details are in the MySQL 5.6 docs.
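
    A hedged example of what these online DDL additions look like in practice; the table and column names are hypothetical:

        -- Add a column and a secondary index while the table stays readable and writable
        ALTER TABLE orders
            ADD COLUMN coupon_code VARCHAR(32) NULL,
            ALGORITHM=INPLACE, LOCK=NONE;

        CREATE INDEX idx_orders_coupon ON orders (coupon_code);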
    Key-Value Access to InnoDB via the Memcached API
    Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a key/value store. MySQL 5.6 provides simple key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transaction-compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back end. So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results here. How to get started with the Memcached API to InnoDB is here.

    New Instrumentation in Performance Schema
    The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentation includes:

        Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
        Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
        Users/Hosts/Accounts: Which application users, hosts, and accounts are consuming the most resources?
        Network I/O: What is the network load like? How long do sessions idle?
        Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.

    The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tuned settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema.

    Better Condition Handling: GET DIAGNOSTICS
    MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command.
    The Diagnostics Area can be populated via multiple options and provides 2 kinds of information:

        Statement - provides the affected row count and the number of conditions that occurred
        Condition - provides error codes and messages for all conditions that were returned by a previous operation

    The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:

        mysql> DROP TABLE test.no_such_table;
        ERROR 1051 (42S02): Unknown table 'test.no_such_table'
        mysql> GET DIAGNOSTICS CONDITION 1
            -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        mysql> SELECT @p1, @p2;
        +-------+------------------------------------+
        | @p1   | @p2                                |
        +-------+------------------------------------+
        | 42S02 | Unknown table 'test.no_such_table' |
        +-------+------------------------------------+

    Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL docs. While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs. BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer. So what are your next steps?

        Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
        Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
        Provide feedback (http://bugs.mysql.com/)
        Join the developer discussion on the MySQL Forums
        Explore all MySQL Products and Developer Tools

    As always, thanks for your continued support of MySQL!

    Read the article

  • Tell me what’s wrong! – An XNA sample demonstrating exception handling and reporting in

    - by George Clingerman
    I’ve always enjoyed using Nick Gravelyn’s exception handling in all of my games. You’re always going to encounter those unhandled exceptions that your players are going to ferret out, and having a method to display them rather than just crashing to the dashboard is definitely the more elegant solution. But the other day I got to thinking… what if we could do more? What if instead of just displaying the error, we could encourage the players to send us the error? So I started playing with that and expanding upon Nick’s sample code to see what I could come up with. I got close to what I envisioned, but unfortunately there were some limitations to just what the XNA API could do. In my head I was picturing the players hitting “Send Message” and a 360 message would just be sent to the XBLIG developer. Unfortunately, you can only send messages in an XNA game to someone you’re currently in a network session with. Since I didn’t want to have a 360 server running all the time, virally connecting to players just to get error messages, I did the next best thing and just opened up a 360 message and encouraged them to manually enter the gamertag. Maybe someday we’ll be able to do that a little better, but this works for now. In the sample, players can hit the “A” button or key to generate an exception. If the debugger is not attached, then the exception message screen will be shown, explaining what has happened and giving the player a chance to send a 360 message to the gamertag provided, or maybe even just send an email. Nick’s code has been changed just a bit. It now accepts any PlayerIndex (no longer hard coded to just PlayerIndex.One) and it no longer uses a MessageBox to get the user’s selection. The code has also been modified so that it works both for the 360 and for the PC. Check out “Tell me what’s wrong!” and let me know if you have any thoughts or suggestions. I really do appreciate the feedback.
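
    The general pattern looks roughly like the sketch below: catch the crash at the very top level and swap in an exception-reporting game, along the lines of Nick Gravelyn's sample. MyGame and ExceptionGame are illustrative names, not the actual classes from the sample:

        using System;

        static class Program
        {
            static void Main(string[] args)
            {
        #if DEBUG
                // Let the debugger break on the original exception while developing
                using (var game = new MyGame())
                    game.Run();
        #else
                try
                {
                    using (var game = new MyGame())
                        game.Run();
                }
                catch (Exception e)
                {
                    // Show the error and invite the player to message/email the developer
                    using (var exceptionGame = new ExceptionGame(e))
                        exceptionGame.Run();
                }
        #endif
            }
        }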

    Read the article

  • Ruby Shoes for non-trivial apps

    - by marcof
    I've been taking a look at Ruby Shoes for GUI development with Ruby. So far, it's been a pretty good experience for making simple apps. However, I am quite worried about being able to write large-scale applications with it. For example, how would I go about using the MVP pattern with this framework? For now, I have not been able to keep presentation concerns from leaking into the view, because of the lack of some kind of "data binding". I have code that looks like this:

        Shoes.app do
          @view = SampleView.new
          @presenter = SamplePresenter.new @view

          @label = para @view.sample_property
          button "Update sample_property" do
            @presenter.update_sample_property
          end
        end

    Here, the call to @presenter.update_sample_property updates @view.sample_property, but the label is not updated accordingly. For this to work, I would have to make @presenter.update_sample_property return a string and then call @label.text = return_value, but I think that would violate the MVP principle of not having presentation logic in the view. I'm used to working in .NET with the MVP pattern, so I don't know if the pattern applies to Shoes the way I tried to use it. Are there any resources out there for making non-trivial apps with Shoes? Especially using the MVP pattern or something similar? EDIT: I took a look at the Shoebox to see what other people have achieved with the framework. Though I did not look through it extensively, at first sight it seems like they are all simple projects with no real purpose.
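
    One possible sketch of a more passive view in Shoes, where the presenter pushes new values into the view through a callback instead of the view pulling and formatting them; SamplePresenter#on_change is a hypothetical method, not part of Shoes:

        Shoes.app do
          @presenter = SamplePresenter.new

          @label = para @presenter.sample_property
          button "Update sample_property" do
            @presenter.update_sample_property
          end

          # Poor man's data binding: the presenter decides what text to show and
          # pushes it here; the view only replaces the displayed string.
          @presenter.on_change { |new_text| @label.replace(new_text) }
        end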

    Read the article
