Search Results

Search found 1148 results on 46 pages for 'backbone relational'.


  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides with indexing text, in processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the Document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. As one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them to back-fit features such as horizontal scaling, with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.
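    To make the semi-structured point concrete, here is a minimal sketch (not from the editorial) of the kind of schemaless storage a document store allows, written for the MongoDB JavaScript shell; the collection name, fields and index are all invented for illustration:

        // Two log events with different attribute sets live in the same collection;
        // no fixed schema, so new attributes never require an ALTER TABLE.
        db.events.insert({ type: "page_view", url: "/pricing", userAgent: "Mozilla/5.0", ts: new Date() });
        db.events.insert({ type: "purchase", sku: "SQL-2008-R2", amount: 49.99, currency: "USD", ts: new Date() });

        // Ad-hoc indexing and querying still work against whatever attributes exist.
        db.events.ensureIndex({ type: 1, ts: -1 });
        db.events.find({ type: "purchase", amount: { $gt: 10 } }).sort({ ts: -1 });

    The application, not the database, is left to enforce whatever structure the documents should share, which is exactly the trade-off the editorial describes.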

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono, as easy as breathing:

        var context = new ApiContext(apiKey).Initialize(false);

        Question thisPost = context.Official
            .StackApps
            .Questions.ById(386)
            .WithComments(true)
            .First();

        Console.WriteLine(thisPost.Title);

        thisPost
            .Owner
            .Questions
            .PageSize(5)
            .Sort(PostSort.Votes)
            .ToList()
            .ForEach(q =>
            {
                Console.WriteLine("\t" + q.Score + "\t" + q.Title);
                q.Timeline.ToList().ForEach(t =>
                    Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName));
                Console.WriteLine();
            });

        // if you can think it, you can get it.

    Output: the post title, then the author's five most-voted questions; each is followed in the console by its timeline entries (revisions, votes, comments, answers), tagged with the display name of each entry's owner:

        Soapi.CS : A fully relational fluent .NET Stack Exchange API client library
        21  Soapi.CS : A fully relational fluent .NET Stack Exchange API client library
        14  SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way.
        11  SOAPI-EXPLORE: Self-updating single page JavaScript API test harness
        11  Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API
         9  SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API

    About: A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API Client Library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas.
    A few features include:
    - A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you, e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post": my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes, this is a real expression that returns the data as expressed!)
    - Full coverage of the API: all routes and all parameters, with an intuitive syntax.
    - Strongly typed Domain Data Objects for all API data structures.
    - Eager and lazy loading of 'stub' objects (eager/lazy loading may be disabled). When finer-grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages.
    - A rich asynchronous implementation.
    - A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal; you may set a distinct cache duration for any particular route.
    - A configurable request throttle to ensure compliance with the API terms of usage and to simplify your code, in that you do not have to worry about and respond to 50X errors. The RequestCache and throttled queue are thread-safe, so you can make as many requests as you like from as many threads as you like, as fast as you like, and not worry about abusing the API or having to write reams of management/compensation code.
    - A configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed, so almost any HTTP error will be the result of a timeout or other network infrastructure; a retry buffer provides a level of fault tolerance that you can rely on.
    - An almost identical JavaScript library, Soapi.JS, and its full-figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser.
    License: Licensed under the GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably)
    Platforms: .NET 3.5, .NET 4.0, Silverlight 3, Silverlight 4, Windows Phone 7, Mono.
    Download: Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again; get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads. The source is C# 3.5 and includes projects and solutions for the following IDEs: Visual Studio 2008, Visual Studio 2010, MonoDevelop 2.4.
    Documentation: Full documentation is available at http://soapi.info/help/cs/index.aspx
    Sample Code / Usage Examples: Sample code and usage examples will be added as answers to this question.
    - Full API Coverage: all API routes are covered.
    - Full Parameter Parity: if the API exposes it, Soapi giftwraps it for you.
    - Building a simple app with Soapi.CS: a simple app that gathers all traces of a user in the whole stackiverse.
    - Fluent Configuration: setting up a Soapi.ApiContext could not be easier.
    - Bulk Data Import: a tiny app that quickly loads a SQLite data file with all users in the stackiverse.
    - Paged Results: Soapi.CS transparently handles multi-page operations.
    - Asynchronous Requests: Soapi.CS provides a rich asynchronous model that is especially useful when writing API apps in Silverlight or Windows Phone 7.
    - Caching and Throttling: how and why.
    Apps that use Soapi.CS: Soapi.FindUser, a .net utility for locating a user anywhere in the stackiverse; Soapi.Explore, the entire API at your command; Soapi.LastSeen, list users by last access time. Add your app/site here - I know you are out there ;-) if you are not comfortable editing this post, simply add a comment and I will add it.
    The CS/SL/WP7/Mono libraries all compile the same code, and with the exception of environmental considerations for Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system.
    Contact: This library is currently the effort of me, Sky Sanders (code poet), and I can be reached at gmail - sky.sanders. Any who are interested in improving this library are welcome.
    Support Soapi: You can help support this project by voting for Soapi's Open Source Ad post. For more information about the origins of Soapi.CS and the rest of the Soapi eco-system see "What is Soapi and why should I care?"

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me, and, of course, the ones I contributed to.
    Books:
    - Chris J. Date: An Introduction to Database Systems – IMO a “must” to understand the relational model correctly.
    - Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders.
    - Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all the theory needed to manage temporal data.
    - Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server-focused data modeling book I know, by two of my friends.
    - Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
    - Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
    - Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
    - Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation.
    Courses:
    - Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.
    - Logical and Physical Modeling for Analytical Applications – an online course I wrote for Pluralsight.
    - Working with Temporal data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.
    Forthcoming presentations:
    - SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar, Advanced Data Modeling Topics, there.

    Read the article

  • How to Integrate Backbone.js with RESTful Web Services in 5 Minutes!

    - by Geertjan
    In NetBeans IDE 7.3, a Backbone.js file can be generated from a Java RESTful web service. The Backbone.js file contains complete CRUD functionality, and your HTML5 application can immediately be deployed to make use of those features. Coupled with NetBeans IDE's two-way editing support for HTML5, via interaction with WebKit in Chrome, Backbone.js users have a completely new and powerful tool for coding their HTML5 applications. The above is illustrated in the brand new YouTube movie below. This makes NetBeans IDE 7.3 well suited as a learning tool for new Backbone.js users, as well as a productivity tool for those who are comfortable with Backbone.js already.

    Read the article

  • backbone.js Model.get() returns undefined, scope using coffeescript + coffee toaster?

    - by benipsen
    I'm writing an app using coffeescript with coffee toaster (an awesome NPM module for stitching) that builds my app.js file. Lots of my application classes and templates require info about the current user so I have an instance of class User (extends Backbone.Model) stored as a property of my main Application class (extends Backbone.Router). As part of the initialization routine I grab the user from the server (which takes care of authentication, roles, account switching etc.). Here's that coffeescript:

        @user = new models.User
        @user.fetch()
        console.log(@user)
        console.log(@user.get('email'))

    The first logging statement outputs the correct Backbone.Model attributes object in the console just as it should:

        User
          _changing: false
          _escapedAttributes: Object
          _pending: Object
          _previousAttributes: Object
          _silent: Object
          attributes: Object
            account: Object
            created_on: "1983-12-13 00:00:00"
            email: "[email protected]"
            icon: "0"
            id: "1"
            last_login: "2012-06-07 02:31:38"
            name: "Ben Ipsen"
            roles: Object
            __proto__: Object
          changed: Object
          cid: "c0"
          id: "1"
          __proto__: ctor
        app.js:228

    However, the second returns undefined despite the model attributes clearly being there in the console when logged. And just to make things even more interesting, typing "window.app.user.get('email')" into the console manually returns the expected value of "[email protected]"... ? Just for reference, here's how the initialize method compiles into my app.js file:

        Application.prototype.initialize = function() {
          var isMobile;
          isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|BlackBerry)/);
          this.helpers = new views.DOMHelpers().initialize().setup_viewport(isMobile);
          this.user = new models.User();
          this.user.fetch();
          console.log(this.user);
          console.log(this.user.get('email'));
          return this;
        };

    I initialize the Application controller in my static HTML like so:

        jQuery(document).ready(function(){
          window.app = new controllers.Application();
        });

    Suggestions please and thank you!
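    A likely explanation, sketched here in plain JavaScript rather than CoffeeScript (the surrounding names are the question's own; the rest is a standard Backbone pattern, not a confirmed fix): fetch() is asynchronous, so at the moment get('email') runs, the server response may not have arrived yet, while the console logs the model object lazily and shows the attributes that were set later. Reading the attribute once the fetch has completed avoids the race:

        // Sketch: wait for the asynchronous fetch before reading attributes.
        this.user = new models.User();
        this.user.fetch({
          success: function(user) {
            console.log(user.get('email'));   // defined here, after the response is set
          }
        });

        // Alternatively, react to the attribute being set:
        this.user.on('change:email', function(user) {
          console.log(user.get('email'));
        });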

    Read the article

  • How to fetch and populate backbone model for Google Places JS API?

    - by code-gijoe
    I'm implementing a system that requires access to the Google Places JS API. I've been using Rails for most of the project, but now I want to inject a bit of AJAX into one of my views. Basically it is a view that displays places near your location. For this, I'm using the JS API of Google Places. A quick workflow would be:
    1. The user inputs a text query and hits enter.
    2. There is an AJAX call to request data from the Google Places API.
    3. The successful result is presented to the user.
    The problem is primarily in step 2. I want to use Backbone for this, but when I create a Backbone model, it sends its requests to the 'rootURL'. This wouldn't be a problem if the requests to Places were done from the server, but they are not. A place call is done like this:

        service = new google.maps.places.PlacesService(map);
        service.nearbySearch(request, callback);

    Passing a callback function:

        function callback(results, status) {
          if (status == google.maps.places.PlacesServiceStatus.OK) {
            for (var i = 0; i < results.length; i++) {
              var place = results[i];
              createMarker(results[i]);
            }
          }
        }

    Is it possible to override the 'fetch' method in a Backbone model and populate the model with the successful Places result? Is this a bad idea?
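    For illustration only (this is a sketch of the idea being asked about, not a confirmed answer): because the Places data never touches your own server, one option is to bypass Backbone.sync and override fetch on a collection so it calls the Places service and resets itself with the results. The model/collection names and the options shape below are assumptions:

        // Sketch: a Backbone collection populated from the Places JS API.
        var Place = Backbone.Model.extend({});

        var Places = Backbone.Collection.extend({
          model: Place,

          // Override fetch: no AJAX to the default URL, ask google.maps.places directly.
          fetch: function(options) {
            options = options || {};
            var collection = this;
            var service = new google.maps.places.PlacesService(map);  // map assumed to exist
            service.nearbySearch(options.request, function(results, status) {
              if (status == google.maps.places.PlacesServiceStatus.OK) {
                collection.reset(results);              // views bound to "reset" re-render
                if (options.success) options.success(collection, results);
              } else if (options.error) {
                options.error(collection, status);
              }
            });
          }
        });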

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.
    Enumerations in Relational Database
    This is a subject which is very basic in relational databases but often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is “the right way” and one which is definitely the wrong way.
    The concept
    Let’s say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status, and let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations.
    The wrong way
    This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query and you don’t have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose).

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(10)
        )

    You have an nvarchar(20) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also, but I’ll leave those for the reader to think about.
    The correct way
    Here’s how I’ve done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    Now there are four columns in the CodeValue table. CodeNamespaceId tells which namespace the value belongs to. Value holds the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI.
    And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer also.
    I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO
        -- Select from View
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Book Review: Pro SQL Server 2008 Relational Database Design and Implementation

    - by Alexander Kuznetsov
    Investing in proper database design is a very efficient way to cut maintenance costs. If we expect a system to last, we need to make sure it has a good solid foundation - high quality database design. Surely we can and sometimes do cut corners and save on database design to get things done faster. Unfortunately, such cutting corners frequently comes back and bites us: we may end up spending a lot of time solving issues caused by poor design. So, solid understanding of relational database design is...(read more)

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSes, I have noticed that there is a separate table for mapping two relational tables. For example, for categories and products there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one from the categories table and one from the products table. My question is: what are the benefits of this database design over just linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?

    Read the article

  • Going Beyond the Relational Model with Data

    SQL is a powerful tool for querying data, and for aggregating it. However, you can't easily use it to draw inferences, to make predictions, or to tease out subtle correlations. To provide ever more sophisticated inferences to businesses, the race is on to combine the power of the relational model with advanced statistical packages. Both IBM and PostgreSQL are ready with solutions. And SQL Server? Hmm...

    Read the article

  • Multiple collections tied to one base collection with filters and eventing

    - by damienc88
    I have a complex model served from my back end, which has a bunch of regular attributes, some nested models, and a couple of collections. My page has two tables, one for invalid items and one for valid items. The items in question are from one of the nested collections; let's call it baseModel.documentCollection, implementing DocumentsCollection. I don't want any filtration code in my Marionette.CompositeViews, so what I've done is the following (note: duplicated for the 'valid' case):

        var invalidDocsCollection = new DocumentsCollection(
          baseModel.documentCollection.filter(function(item) {
            return !item.isValidItem();
          })
        );

        var invalidTableView = new BookIn.PendingBookInRequestItemsCollectionView({
          collection: app.collections.invalidDocsCollection
        });

        layout.invalidDocsRegion.show(invalidTableView);

    This is fine for actually populating two tables independently from one base collection. But I'm not getting the whole event pipeline down to the base collection, obviously. This means that when a document's validity is changed, there's no neat way for it to shift to the other collection, and therefore to the other view. What I'm after is a nice way of having a base collection that I can have filter collections sit on top of. Any suggestions?
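    One possible direction (a sketch, not from the original post; the class name and option names are invented, and listenTo assumes Backbone 0.9.9 or newer): keep the two tables bound to two self-filtering collections that listen to the base collection and rebuild themselves whenever it changes, so a validity change moves the document between views automatically:

        // Sketch: a collection that mirrors a filtered view of a base collection.
        var FilteredDocuments = Backbone.Collection.extend({
          initialize: function(models, options) {
            this.base = options.base;       // e.g. baseModel.documentCollection
            this.accept = options.accept;   // predicate deciding membership

            // Re-filter on any relevant change in the base collection.
            this.listenTo(this.base, 'add remove reset change', this.refilter);
            this.refilter();
          },

          refilter: function() {
            // reset() fires a single "reset" event, so each CompositeView re-renders once.
            this.reset(this.base.filter(this.accept));
          }
        });

        var invalidDocs = new FilteredDocuments([], {
          base: baseModel.documentCollection,
          accept: function(doc) { return !doc.isValidItem(); }
        });
        var validDocs = new FilteredDocuments([], {
          base: baseModel.documentCollection,
          accept: function(doc) { return doc.isValidItem(); }
        });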

    Read the article

  • Running Radius on a Novell Backbone

    - by YsoL8
    Hello, I am a rookie network engineer and I've been asked to create a secure wireless system integrated with an existing network. So far I've decided to use 802.1x security with a RADIUS-enabled server over a Novell backbone. My question is: does Novell still support this type of server setup? I heard rumours it is at the end of its supported life and I'd like some confirmation. Also, can I get some recommendations on better backbone / server providers? Cheers

    Read the article

  • structure problem in Relational DBMS creation

    - by Kane
    For learning and understanding purposes, I currently want to try to make a small relational DBMS with simple features like (for now) only sequential reading/writing and CREATE TABLE, INSERT, SELECT, UPDATE and DELETE management. I am currently in the "thinking" part of the project and I am stuck on how to store the data that has been read in memory. First I was thinking of putting it properly into a structure, but the problem is that tables are all different; knowing the type of each column is not an issue, but I am not sure C provides a way to make a fully dynamic structure. My second and current idea is to make a simple char array of the required length and just read the data in order with casts. But I am not sure if that is a good way to do that part, so I wanted to ask for your opinion and advice about it. Thanks in advance for your help.
    NB: I hope my question is clear and understandable enough; I still lack practice in English.

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether or not actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database, in my opinion, would allow for a very intuitive implementation of entity systems and bring along certain other benefits like easy de/serialization of game states and consistency checks like the uniqueness of entity IDs.
    Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just storing the game state on disk) in a real-time game would still yield a framerate appropriate for a game, or whether someone is aware of projects that demonstrate SQL in a video game.
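    Purely as an illustration of the idea (not from the original post): an entity/component store kept in an in-memory SQLite database, here sketched in Node.js with the better-sqlite3 package; every table, component and value below is invented:

        // Sketch: entities and components as tables, a "system" as a query per frame.
        const Database = require('better-sqlite3');
        const db = new Database(':memory:');          // keep everything in RAM

        db.exec(`
          CREATE TABLE entity   (id INTEGER PRIMARY KEY);
          CREATE TABLE position (entity_id INTEGER REFERENCES entity(id), x REAL, y REAL);
          CREATE TABLE velocity (entity_id INTEGER REFERENCES entity(id), dx REAL, dy REAL);
        `);

        const id = db.prepare('INSERT INTO entity DEFAULT VALUES').run().lastInsertRowid;
        db.prepare('INSERT INTO position VALUES (?, ?, ?)').run(id, 0, 0);
        db.prepare('INSERT INTO velocity VALUES (?, ?, ?)').run(id, 1, 2);

        // The "movement system" is just an UPDATE joining its two components.
        const move = db.prepare(`
          UPDATE position
          SET x = x + (SELECT dx FROM velocity v WHERE v.entity_id = position.entity_id),
              y = y + (SELECT dy FROM velocity v WHERE v.entity_id = position.entity_id)
          WHERE entity_id IN (SELECT entity_id FROM velocity)
        `);
        move.run();   // once per frame; whether this is fast enough is the open question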

    Read the article

  • A Keyword Research Service is the Backbone of SEO

    If you do not have the means or know-how to effectively research keywords, you may hire a company to provide a professional keyword research service for you. Even if you have a fundamental knowledge of keywords, paid keyword services will save time and create better results.

    Read the article

  • What is the best database for my needs?

    - by Mr. Flibble
    I am currently using MS SQL Server 2008 but I'm not sure it is the best system for this particular task. I have a single table like so:

        PK_ptA  PK_ptB  DateInserted  LookupColA  LookupColB  ...  LookupColF  DataCol (ntext)

    A common query is:

        SELECT TOP(1000000) DataCol FROM table
        WHERE LookupColA=x AND LookupColD=y AND LookupColE=z
        ORDER BY DateInserted DESC

    The table has about a billion rows with 5 million inserted per day. My main problem with SQL Server is that it isn't too easy to shard or spread out the data files. Also, exporting seems to max out at 1000 rows per second (about 1MB/s), which seems very slow. Another problem I have is that, with SQL Server, if I want to add a new LookupCol the log file grows enormously, requiring a large amount of rarely used free space on tap. Are there any obvious better solutions for this problem?

    Read the article

  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "nosql" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables.
    Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier, what's harder, what can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML) or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc?
    I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done?
    FYI, there have been StackOverflow discussions on similar topics here:
    - the next generation of databases
    - changing schemas to work with Google App Engine
    - choosing a document-oriented database
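    As a purely illustrative sketch of the design shift being asked about (all field names invented, shown as a plain JavaScript document of the kind CouchDB or a key/value store would hold): the relational habit of normalizing an order into orders, order_items and customers tables often becomes a single denormalized aggregate:

        // One aggregate per document; reads become a single key lookup.
        var orderDocument = {
          _id: "order:1042",
          customer: { name: "Ada Lovelace", email: "ada@example.com" },
          placedAt: "2010-05-01T12:30:00Z",
          items: [
            { sku: "BOOK-001", title: "An Introduction to Database Systems", qty: 1, price: 65.00 },
            { sku: "BOOK-002", title: "Information Modeling and Relational Databases", qty: 1, price: 80.00 }
          ],
          total: 145.00   // denormalized: the application, not a JOIN, keeps this consistent
        };

    The trade-off is exactly the one raised above: integrity rules, transactions and cross-aggregate queries move out of the database and into application code (or map/reduce views).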

    Read the article

  • Entity framework (1): implement 1 foreign key to multiple tables

    - by Michel
    Hi, I've modeled this: I have an Import table and an ImportSteps table (Import 1..N ImportSteps). Now I have a table ImportParams, which holds key/value pairs to register all kinds of info about the import or the import steps. So I have modeled a FK in SQL Server which points to the PK of the Import table and to the PK of the ImportSteps table (the IDs for both the Import and the ImportSteps tables are GUIDs, so I can query ImportParams with either the id from Import or from ImportSteps and get the right ImportParams). Makes sense a bit? But how can I model this in the EF? I can see it's a bit hard for the EF to model this, because one relation can point to multiple classes, but is there a way? The workaround normally is just to get all ImportParams where the FK is the ID, but as you know the FK is not available in EF version 1. I hope you can help me out, Michel

    Read the article

  • Is there an ORM with APIs to be used programatically?

    - by Kabeer
    Hello. My previous question neither got any response, nor enough views :) So here I am now, framing my query afresh. Is there any ORM that offers APIs to be used programmatically? In my situation, a user will be helped through a wizard to define some entities. Thereafter those entities will be created in a database in the form of tables (there will of course be some improvisation). I need an ORM that offers APIs for modeling and creating entities during application runtime and not just during design time. Mine is a .Net application, therefore I was looking at Entity Framework. However, to me it looked tied to Visual Studio (I may be wrong, as I am new to it). Any recommendations? Or an alternative point of view?

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL. I am finding that my schema is getting incredibly complicated. I seek to find a new db that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine if two news items from different sites are actually referring to the same topic, and cluster news together. The relationship is depicted below:

        cluster
        \--news1
           \--word1
           \--word2
        \--news2
           \--word3
        \--news3
           \--word1
           \--word3

    And then I will apply some magic to determine the importance of each word. Summing the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above clusters there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I also have to determine for a particular day.
    I have used views in the past to do so, but I realized that views are very slow. So I will normally do an insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update data (for which I am using TRUNCATE TABLE and then re-inserting from scratch).
    I am currently looking into something schemaless like MongoDB. I do not need distributedness. I would very much want something that is reasonably fast (and which can be indexed) and something that is a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot. I am currently using SQLAlchemy. Please help!
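    For illustration only (not part of the question; the collection, field and index names are invented): one way the cluster / news / word tree could sit in MongoDB as a single nested document, with the changeable "importance" metric stored as ordinary fields that a scoring job updates in place instead of truncating and re-inserting derived tables:

        // Sketch for the legacy mongo shell.
        db.clusters.insert({
          topic: "election coverage",
          region: "EU",             // subgroup
          category: "politics",     // category
          importance: 0,            // recomputed by the scoring job
          news: [
            { url: "http://site-a.example/article1", importance: 0,
              words: [ { w: "ballot", importance: 0 }, { w: "turnout", importance: 0 } ] },
            { url: "http://site-b.example/article2", importance: 0,
              words: [ { w: "turnout", importance: 0 } ] }
          ]
        });

        // When the metric definition changes, update in place:
        db.clusters.update(
          { topic: "election coverage", region: "EU" },
          { $set: { importance: 42.5 } }
        );

        // Index the fields you filter and sort on:
        db.clusters.ensureIndex({ category: 1, importance: -1 });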

    Read the article

  • Should I implement BackBone.js into my ASP.NET WebForms applications?

    - by Walter Stabosz
    Background
    I'm trying to improve my group's current web app development pattern. Our current pattern is something we came up with while trying to build rich web apps on top of ASP.NET WebForms (none of us knew ASP.NET MVC). This is the current pattern:
    - Our application is using the WebForms framework.
    - Our ASPX pages are essentially just HTML; we use almost no WebControls.
    - We use JavaScript/jQuery to perform all of our UI events and AJAX calls.
    - For a single ASPX page, we have a single .js file.
    - All of our AJAX calls are POSTs (not RESTful at all).
    - Our AJAX calls contact WebMethods which we have defined in a series of ASMX files, one ASMX file per business object.
    Why Change?
    I want to revise our pattern a bit for a couple of reasons:
    - We're starting to find that our JavaScript files are getting a bit unwieldy. We're using a hodgepodge of methods for keeping our local data and DOM updates in sync. We seem to spend too much time writing code to keep things in sync, and it can get tricky to debug.
    - I've been reading Developing Backbone.js Applications and I like a lot of what Backbone has to offer in terms of code organization and separation of concerns. However, when I got to the chapter on RESTful apps, I started to feel some hesitation about using Backbone.
    The Problem
    The problem is that our WebMethods do not really fit into the RESTful pattern, which seems to be the way Backbone wants to consume them. For now, I'd only like to address our issue of disorganized client-side code. I'd like to avoid major rewrites to our WebMethods.
    My Questions
    Is it possible to use Backbone (or a similar library) to clean up our client code, while not majorly impacting our data access WebMethods? Or would trying to use Backbone in this manner be a bastardization of its intended use? Anyone have any suggestions for improving our pattern in the area of code organization and spending less time writing DOM and data sync code?
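    One possible direction, sketched rather than prescribed (the WebMethod URL, its parameter and the ASMX "d" envelope handling are assumptions about a typical ScriptService setup, not the poster's actual services): Backbone's persistence layer is pluggable, so a custom sync can keep the existing POST-only WebMethods while the models, views and routers still provide the code organization:

        // Sketch: route Backbone reads through an existing ASMX WebMethod.
        var WebMethodModel = Backbone.Model.extend({
          sync: function(method, model, options) {
            if (method === 'read') {
              return $.ajax({
                type: 'POST',
                url: '/Customers.asmx/GetById',                 // hypothetical WebMethod
                data: JSON.stringify({ id: model.id }),
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                success: function(response) {
                  // ASMX ScriptServices wrap results in { d: ... }.
                  // Note: the success callback shape varies slightly across Backbone versions.
                  options.success(response.d);
                },
                error: options.error
              });
            }
            // 'create' / 'update' / 'delete' would map onto other WebMethods the same way.
          }
        });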

    Read the article

  • Backbone.js: How to utilize router.navigate to manipulate browser history?

    - by Xavier_Ex
    I am writing something like a registration process containing several steps, and I want to make it a single-page-like system, so after some studying Backbone.js is my choice. Every time the user completes the current step they will click on a NEXT button I create, and I use the router.navigate method to update the URL, as well as loading the content of the next page and doing some fancy transition with JavaScript. The result is that the URL is updated while the page is not refreshed, giving a smooth user experience. However, when the user clicks on the back button of the browser, the URL gets updated to that of a previous step, but the content stays the same. My question is: in what way can I capture such an event and correctly load the content of the previous step and present that to the user? Or even better, can I rely on the browser cache to load that previously loaded page? EDIT: in particular, I'm trying something like what is mentioned in this article.
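    For illustration (a generic Backbone pattern, not taken from the question; the step route, render helper and button selector are invented): if every step has a route, Backbone.history fires the route handler on back/forward navigation as well, so the same rendering code runs whether the user clicked NEXT or the browser's back button:

        // Sketch: route-per-step so the back button re-triggers rendering.
        var RegistrationRouter = Backbone.Router.extend({
          routes: {
            'step/:n': 'showStep'
          },
          showStep: function(n) {
            // Runs when navigate() is called with trigger:true, and also when the
            // user presses back/forward and the URL fragment changes.
            renderStep(parseInt(n, 10));   // hypothetical rendering/transition helper
          }
        });

        var router = new RegistrationRouter();
        Backbone.history.start();          // or { pushState: true } if the server supports it

        // NEXT button: update the URL and run the matching route handler.
        $('#next').on('click', function() {
          router.navigate('step/2', { trigger: true });
        });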

    Read the article

  • Online Introduction to Relational Databases (and not only) with Stanford University!

    - by Luca Zavarella
    How many of you know the exact definition of "relational database"? What exactly does the adjective "relational" refer to? Many of you allow yourselves to be deceived, thinking this adjective is related to foreign key constraints between tables. Instead this adjective lurks in a world based on set theory, relational algebra and the concept of a relation understood as a table. Well, for those who want to dig into the fundamentals of the relational model, relational algebra, XML, OLAP and emerging "NoSQL" systems, the Stanford University School of Engineering offers a free, public online introductory course on databases. This is the related web page: http://www.db-class.com/ The course will last 2 months, after which there will be a final exam. Passing the final exam will entitle the participants to receive a statement of accomplishment. A syllabus and more information is available here. Happy eLearning to you!

    Read the article
