Search Results

Search found 549 results on 22 pages for 'nosql'.

Page 7 of 22

  • My Speaking Engagements in the Last Two Months

    - by gsusx
I’ve been so busy lately with the activities around Moesion that I haven’t had time to blog about a couple of great conferences I had the opportunity to speak at in the last two months. Software Architect Conference, UK ( http://www.software-architect.co.uk/ ) This conference is becoming one of my favorite events of the year. As always, Nick Payne and his team did a remarkable job lining up an all-star group of speakers who covered some of the hottest topics in today’s software industry. The first...(read more)

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
This post is not going to be about SQL Server. I have been reading more and more lately about “Big Data” – a very catchy term for the untamed growth of the data mankind produces each day and the struggle to capture its meaning. Ten years ago, and perhaps even three years ago, this need was not widely recognized. The increasing number of smartphones, and the discernible shift of mainstream Internet traffic toward smartphone-generated traffic, mean an ever bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to fit within the boundaries of relational database engines, and the sheer volume makes it nearly impossible to process in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. In it, Toby Baer concludes that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, having never been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, competition is good, as it ultimately benefits the customer. On the other hand, the customer faces the difficulty of choosing the right distribution. In the future, as Hadoop distributions fork further, this choice will get even harder: the distributions will have overlapping sets of features, yet be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world, where there are myriad distributions but only a few acknowledged by the industry as enterprise standards; the others are honed by bearded individuals with too much time on their hands. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.   BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • Digg Moves From MySQL to NoSQL

Datamation: "Social networking and voting site Digg is rewriting its underlying software infrastructure in an effort to improve performance and scalability. Part of that effort involves moving away from the MySQL database that has helped to power Digg since its creation."

    Read the article

  • Rails and Mongoid best way to implement sharing system

    - by Matteo Pagliazzi
I have to model User and Board in Rails, using Mongoid as the ODM. Each board references a user through a user_id foreign key, and now I want to add the ability to share a board with other users. Following CRUD, I'd create a new model called something like Share, with a related controller able to create/edit/delete a Share, but I have some doubts. First, where should the information about shares live? I think I could add a field to the Board collection called shared_with, holding an array of user ids; in MySQL I'd have created a new table with the ids of who shares, the resource shared, and the user the resource is shared with, but I don't think that's necessary with MongoDB. Second, every user a board is shared with should be able to edit the board (but not delete it), so the Board should have two relations: one with the owner and another with the users it is shared with, right? Third, what should I use for permissions (the owner should be able to delete a board, but the users it is shared with shouldn't)? I'm using Devise for authentication, but I think something like CanCan would fit better; how would I implement it? What do you think of this approach? Do you see any problems, or have better solutions?
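    A minimal sketch of the shared_with idea, using pymongo rather than Mongoid (so this shows the document shape, not the Rails syntax), with hypothetical database, collection, and field names:

        # Sketch only: each board embeds an array of user ids it is shared with,
        # and edit permission goes to the owner or anyone in that array.
        from bson import ObjectId
        from pymongo import MongoClient

        boards = MongoClient().app_db.boards   # assumes a local mongod; names are hypothetical

        def share_board(board_id, user_id):
            # $addToSet keeps the array free of duplicate ids
            boards.update_one({"_id": board_id}, {"$addToSet": {"shared_with": user_id}})

        def can_edit(board, user_id):
            return board["user_id"] == user_id or user_id in board.get("shared_with", [])

        def can_delete(board, user_id):
            return board["user_id"] == user_id   # only the owner may delete

        owner, friend = ObjectId(), ObjectId()
        board_id = boards.insert_one({"user_id": owner, "shared_with": []}).inserted_id
        share_board(board_id, friend)
        board = boards.find_one({"_id": board_id})
        print(can_edit(board, friend), can_delete(board, friend))   # True False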

    Read the article

  • SQL Azure Federations and Semantic Search by the SQL product team in London tonight (Monday)

    - by simonsabin
Don’t forget that tonight we have Michael Rys from the SQL Server product team presenting on the federation support coming to SQL Azure and the semantic search coming in SQL Server Denali. This is a must-attend evening for anyone who is serious about scaling SQL or doing search in SQL Server. Michael also wears a few other hats, including being Microsoft’s representative on the W3C XML Query Working Group. To register, go to http://sqlsocial20110613.eventbrite.com/   PS: Beer and pizza will be laid on...(read more)

    Read the article

  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster:
    • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
    • In case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
    • You can discover performance issues in minutes rather than days by using DTrace, which provides increased operating system observability. DTrace gives a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
    • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
    In the example presented in this article, all the MongoDB cluster building blocks are installed using Oracle Solaris Zones, the Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.

    Read the article

  • How would one build a relational database on a key-value store, a-la Berkeley DB's SQL interface?

    - by coleifer
I've been checking out Berkeley DB and was impressed to find that it supported a SQL interface that is "nearly identical" to SQLite: http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbsqlbasics.html#identicalusage I'm very curious, at a high level, how this kind of interface might have been architected. For instance:
    • since values are "transparent", how do you efficiently query and sort by value?
    • how are limits and offsets performed efficiently on large result sets?
    • how would the keys be structured and serialized for good average-case performance?
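    At a high level, one plausible layout (a sketch of the general technique, not Berkeley DB's actual internals): serialize each row under a ("row", table, rowid) key, and add secondary index entries whose keys embed the column value, so value lookups and ORDER BY become ordered key scans, and LIMIT/OFFSET become cursor positioning plus skips. A toy Python model:

        import bisect, json

        store = {}   # key -> value: stands in for the key-value store
        keys = []    # sorted key list: stands in for the B-tree's ordering

        def put(key, value):
            if key not in store:
                bisect.insort(keys, key)
            store[key] = value

        def insert(table, rowid, row):
            # primary record: the row is an opaque ("transparent") serialized value
            put(("row", table, rowid), json.dumps(row))
            # secondary index entries: the sort order lives in the KEY, not the value
            for col, val in row.items():
                put(("idx", table, col, val, rowid), "")

        def select_ordered(table, col, limit, offset=0):
            # ORDER BY col is a range scan over the index keyspace;
            # OFFSET is a skip, LIMIT an early exit
            start = bisect.bisect_left(keys, ("idx", table, col))
            out = []
            for key in keys[start:]:
                if key[:3] != ("idx", table, col):
                    break
                if offset > 0:
                    offset -= 1
                    continue
                out.append(json.loads(store[("row", table, key[4])]))
                if len(out) == limit:
                    break
            return out

        insert("people", 1, {"name": "bob", "age": 30})
        insert("people", 2, {"name": "ann", "age": 25})
        print(select_ordered("people", "name", limit=2))   # ann before bob: index order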

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
Disclaimer: This is a question derived from this one. What do you think about the following use case? I have a collection containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). To follow a de-normalized approach, I don't put identifiers for these related items in my main orders collection. Instead, I repeat all the information for each order (i.e., I repeat the buyer's name, surname, etc. in each of their orders). Given that premise, I'm committing to maintaining all the data related to an order without many updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and since MongoDB locks at the document level on updates, I would be blocking the entire order at the moment of the update). My doubts:
    • Will I have to replicate all the products' related data (i.e., category, maker and optional attributes like color, size...)?
    • What if a new feature is requested and I have to write a lot of queries with the products "as the entry point of the query" (i.e., reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection (and what about the performance)? Or should I create another collection with these queries in mind and replicate all the products' information (and their orders) again?
    • Wouldn't it be better to store a product_id in the original orders collection, to be more tolerant to changing requirements (and what about emulating JOINs)?
    • Would the optimal approach be a mixed solution with an RDBMS like MySQL to retrieve the complete data? I mean: store product, user, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )
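    For the last point, the lookup half of that mixed approach can also stay in MongoDB: store ids in the orders and resolve them with a second $in query, which is the usual way to emulate the JOIN. A hedged pymongo sketch (collection and field names are illustrative, not from the question):

        from pymongo import MongoClient

        db = MongoClient().shop   # assumes a local mongod; names are hypothetical
        order_docs = list(db.orders.find({"region": "EU"}))

        # second round-trip, the Mongo analogue of
        # SELECT * FROM users WHERE user_id IN (...):
        user_ids = list({o["user_id"] for o in order_docs})
        users_by_id = {u["_id"]: u for u in db.users.find({"_id": {"$in": user_ids}})}

        for o in order_docs:
            o["buyer"] = users_by_id[o["user_id"]]   # "join" resolved in application code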

    Read the article

  • Where can I learn to write my own database?

    - by Buttons840
I'm interested in writing my own database – a triple-store. Are there any good resources to help with the challenges of such a project? Or, more generally: how can I learn to write my own database? Some specific issues I'm unsure of:
    • How is the data actually stored on the file system? A flat file seems easy enough, but a database is a lot more than a flat file.
    • What kinds of things are typically stored (or cached) in memory?
    • How are indexes created and stored?
    • How is ACID compliance achieved?
    Etc. This is a big topic, but knowing how to store large amounts of data in a reliable way is good to know. (My investigation into existing triple-stores was summarized back in 2008; not much seems to have changed in the four years since. This is why I want to write my own.)
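    As a starting point for the triple-store part, the classic trick is to keep several permutations of (subject, predicate, object) as indexes, so every lookup pattern has an index whose leading term is bound. A toy in-memory sketch (persistence, paging, and ACID are exactly the hard parts this leaves out):

        from collections import defaultdict

        class TripleStore:
            """Toy in-memory triple-store: one permutation index per access pattern."""
            def __init__(self):
                self.spo = defaultdict(lambda: defaultdict(set))  # subject -> predicate -> {objects}
                self.pos = defaultdict(lambda: defaultdict(set))  # predicate -> object -> {subjects}
                self.osp = defaultdict(lambda: defaultdict(set))  # object -> subject -> {predicates}

            def add(self, s, p, o):
                self.spo[s][p].add(o)
                self.pos[p][o].add(s)
                self.osp[o][s].add(p)

            def triples(self, s=None, p=None, o=None):
                # dispatch to whichever index has its leading term bound
                if s is not None:
                    for p2, objs in self.spo.get(s, {}).items():
                        for o2 in objs:
                            if (p is None or p == p2) and (o is None or o == o2):
                                yield (s, p2, o2)
                elif p is not None:
                    for o2, subs in self.pos.get(p, {}).items():
                        if o is None or o == o2:
                            for s2 in subs:
                                yield (s2, p, o2)
                elif o is not None:
                    for s2, preds in self.osp.get(o, {}).items():
                        for p2 in preds:
                            yield (s2, p2, o)
                else:
                    for s2 in list(self.spo):
                        yield from self.triples(s=s2)

        db = TripleStore()
        db.add("alice", "knows", "bob")
        db.add("alice", "likes", "mongodb")
        print(list(db.triples(s="alice")))            # everything about alice
        print(list(db.triples(p="knows", o="bob")))   # who knows bob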

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file. The user has to match the audio to the correct image. Below is my MongoDB document structure: each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images, and then play one audio file from the four documents at random. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:
    • I need to be able to report statistics at a user level (for example, total correct answers, total guesses, mean accuracy, etc.).
    • I need to be able to query images based on the user's learning progress (for example, select 4 documents where guess count is >= 10 and accuracy is <= 0.50).
    • The schema needs to be optimized for fast quiz generation.
    • The schema must not cause future scaling issues vis-à-vis document size. Assume 1 million users who make an average of 1,000 guesses each.
    Given all of this as background, what would be the recommended schema? For example, would you store each guess in the Image document, in a User document (not shown), or in a new collection created for logging guesses? Would you recommend logging the raw guess data, or pre-computing statistics by incrementing counters within the relevant document? Schema for the Image collection:
        _id: "505bcc7a45c978be24000005"
        date: 2012-09-21 02:10:02 UTC
        imageFileName: "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random: 0.26997075392864645
        user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles: [
            { audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
              user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
              audioLanguage: "English",
              date: 2012-09-22 01:15:04 UTC },
            { audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
              user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
              audioLanguage: "Spanish",
              date: 2012-09-22 01:17:04 UTC }
        ]
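    One commonly recommended shape, sketched here with pymongo under assumed collection and field names: raw guesses go in their own collection (so no document grows without bound, addressing the document-size requirement), while running totals are pre-computed with $inc on the user and image documents so progress queries never scan raw events:

        from datetime import datetime, timezone
        from pymongo import MongoClient

        db = MongoClient().quiz   # assumes a local mongod; names are hypothetical

        def record_guess(user_id, image_id, correct):
            db.guesses.insert_one({             # raw event, for ad-hoc reporting
                "user": user_id, "image": image_id,
                "correct": correct, "date": datetime.now(timezone.utc),
            })
            db.users.update_one(                # per-user totals, for fast stats
                {"_id": user_id},
                {"$inc": {"guesses": 1, "correct": 1 if correct else 0}},
                upsert=True,
            )
            db.images.update_one(               # per-image counters drive progress queries
                {"_id": image_id},
                {"$inc": {"guess_count": 1, "correct_count": 1 if correct else 0}},
            )

        # "images guessed >= 10 times with accuracy <= 0.50" reads two counters
        # instead of scanning raw guesses:
        hard = [img for img in db.images.find({"guess_count": {"$gte": 10}})
                if img["correct_count"] <= 0.5 * img["guess_count"]]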

    Read the article

  • Dynamic Fields/Columns

    - by DanMark
What is the best way to allow for dynamic fields/database columns? For example, let's say we have a payroll system that allows a user to create unique salary structures for each employee. How could/should one handle this scenario? I thought of using a "salary" table that holds the salary component fields, joining these to a "salary_values" table that holds the actual values. Does this make sense? Example salary structures (notice how the components of a salary can be shared or unique):
    -- Jon's Salary --
    Basic             100
    Annual Bonus       25
    Tel. Allowances    15
    -- Jane's Salary --
    Basic             100
    Travel Allowances  10
    Bi-annual Bonus    30
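    A sketch of that two-table idea in sqlite3 (table and column names are illustrative): a catalogue of salary components, a values table keyed by employee and component, and a join that reproduces the structures above:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE salary_components (
                id   INTEGER PRIMARY KEY,
                name TEXT NOT NULL UNIQUE        -- e.g. 'Basic', 'Annual Bonus'
            );
            CREATE TABLE salary_values (
                employee     TEXT NOT NULL,
                component_id INTEGER NOT NULL REFERENCES salary_components(id),
                amount       NUMERIC NOT NULL,
                PRIMARY KEY (employee, component_id)
            );
        """)
        db.executemany("INSERT INTO salary_components(name) VALUES (?)",
                       [("Basic",), ("Annual Bonus",), ("Tel. Allowances",),
                        ("Travel Allowances",), ("Bi-annual Bonus",)])
        db.executemany(
            "INSERT INTO salary_values VALUES "
            "(?, (SELECT id FROM salary_components WHERE name = ?), ?)",
            [("Jon", "Basic", 100), ("Jon", "Annual Bonus", 25), ("Jon", "Tel. Allowances", 15),
             ("Jane", "Basic", 100), ("Jane", "Travel Allowances", 10), ("Jane", "Bi-annual Bonus", 30)])

        # each employee's unique structure falls out of a plain join:
        for row in db.execute("""
                SELECT v.employee, c.name, v.amount
                FROM salary_values v JOIN salary_components c ON c.id = v.component_id
                ORDER BY v.employee, c.id"""):
            print(row)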

    Read the article

  • Is there a "rigorous" method for choosing a database?

    - by Andrew Martin
I'm not experienced with NoSQL, but one person on my team is calling for its use. I believe our data and its usage aren't optimal for a NoSQL implementation. However, my understanding is based on reading various threads on various websites. I'd like to get some stronger evidence as to who's correct. My question is therefore: "Is there a technique for estimating the performance and requirements of a certain database, which I could use to confirm or modify my intuitions?" Is there, for example, a good book for calculating the performance of equivalent MongoDB/MySQL schemas? Is the only really reliable option to build the whole thing and take metrics?
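    In practice, the closest thing to a rigorous method is a spike: load a representative sample of your real data into each candidate store and replay your real query mix while measuring latency percentiles. A minimal harness sketch (the backend callables are stand-ins you would supply):

        import statistics, time

        def benchmark(name, run_query, workload, repeats=1000):
            # run_query: callable executing one query against a candidate backend
            # workload: list of query arguments sampled from your real traffic
            samples = []
            for i in range(repeats):
                t0 = time.perf_counter()
                run_query(workload[i % len(workload)])
                samples.append((time.perf_counter() - t0) * 1000)
            samples.sort()
            print(f"{name}: p50={statistics.median(samples):.2f}ms "
                  f"p99={samples[int(0.99 * len(samples))]:.2f}ms")

        # benchmark("mysql", run_mysql_query, queries)
        # benchmark("mongodb", run_mongo_query, queries)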

    Read the article

  • Is it time to put an end to the NoSQL fad? Or is it more than a passing fad?

    Is it time to put an end to the NoSQL fad? Or is it more than just a passing fad? The question is deliberately provocative. It is posed, in even blunter terms, by Ted Dziuba in a post entitled "I Can't Wait for NoSQL to Die". "Some engineers think that scalability and architecture are the solution [to every problem]. That is how the NoSQL movement was born," he writes. "The idea behind NoSQL is that all relational databases, such as MySQL and PostgreSQL, are obsolete, and that document-oriented or schemaless databases are the future." ...

    Read the article

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
I'm looking for a database (DBMS) that is easy to scale out. I would like high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without a reinstallation or downtime. I would like a DBMS that is easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups, and monitor the use of resources. It doesn't have to be a relational database system, so NoSQL is okay. And I would like a free version, so I can test it at small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
Assuming that you have an app infrastructure that generally only requires:
    • ASP.NET MVC / C# / .NET
    • a database or NoSQL data store (must be accessible from C#)
    Here's the challenge to you server gods: What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules? In what ways does this solution differ from the "standard" Microsoft deployment scenario? Where does this solution's performance break down once the app begins to scale? I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production, especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level. An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early-stage applications. An example of b): running the Mono framework on Linux. An example of differing from the "standard" stack: running Mono on Linux requires familiarity with a completely different server OS; none of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only scale to 1 GB of memory and 4 GB of disk storage; after that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • Middleware for MongoDB or CouchDB with jQuery Ajax/JSON frontend

    - by Tauren
I've been using the following web development stack for a few years: java/spring/hibernate/mysql/jetty/wicket/jquery. For certain requirements, I'm considering switching to a NoSQL datastore with an AJAX frontend. I would probably build the frontend with jQuery and communicate with the web application middleware using JSON. I'm leaning toward MongoDB because of its more dynamic query capabilities, but am still considering CouchDB. I'm not sure what to use in the middle; probably something RESTful? My preference is to stick with Java (or maybe Scala or Groovy), since I'm using tools like Drools for rules and Shiro for security. But then again, I want to pick something that is quick and easy to work with, so I'm open to other solutions. If you are building ajax/json/nosql solutions, I'd like to hear details about what tools you are using and any pros/cons you've found using them. Thanks!
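    The middle tier can be very thin: a REST resource that maps JSON bodies to documents. A sketch using Flask and pymongo (outside the asker's Java preference, but the shape is the same in Spring MVC or JAX-RS; database and collection names are illustrative):

        from bson import ObjectId
        from flask import Flask, jsonify, request
        from pymongo import MongoClient

        app = Flask(__name__)
        items = MongoClient().appdb.items   # assumes a local mongod

        @app.route("/items/<item_id>", methods=["GET"])
        def get_item(item_id):
            doc = items.find_one({"_id": ObjectId(item_id)})
            if doc is None:
                return jsonify({"error": "not found"}), 404
            doc["_id"] = str(doc["_id"])    # ObjectId isn't JSON-serializable
            return jsonify(doc)             # jQuery's $.getJSON consumes this directly

        @app.route("/items", methods=["POST"])
        def create_item():
            new_id = items.insert_one(request.get_json()).inserted_id
            return jsonify({"_id": str(new_id)}), 201

        if __name__ == "__main__":
            app.run()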

    Read the article

  • Best XML Based Database

    - by monmonja
I have been assigned to develop a system where we would receive XML from multiple sources (millions of XML documents) and put them in some kind of database. Judging from the XML I would receive, there won't be any concrete structure, even for documents from the same source. For this reason I don't think I can suggest an RDBMS, and I am currently looking at NoSQL databases. We need a system that can do CRUD and is fast on reads. I have been looking at MarkLogic and eXist, which are both XML-based NoSQL databases; has anyone had experience with them? Any other suggestions? Thanks

    Read the article

  • Tracking pageviews and displaying related data

    - by zeky
I want to track which articles a user reads on a website. Then, with that data, I want to be able to know:
    1) the top N articles read in the last hour/day/week/month
    2) recommendations ("users who read this also read that")
    3) same as (1), but for a specific section of the site
    Since the site has high traffic (>1M views/day) I can't use an RDBMS for this. I started to look at NoSQL (Cassandra specifically), and since it's all new to me I'm not sure whether it's what I need. I'm positive I'm not the first one who needs something like this, but I couldn't find links/articles giving me pointers on how to do it. Is NoSQL the best approach? Any tips on the data model? Thanks.
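    The usual NoSQL pattern for (1) and (3) is time-bucketed counters: one counter per (bucket, article), incremented on each view, so "top N in the last hour/day" reads a handful of bucket rows instead of scanning raw views. An in-memory Python sketch of the model (in Cassandra these would become counter columns keyed by the bucket):

        from collections import Counter, defaultdict
        from datetime import datetime, timedelta, timezone

        views = defaultdict(Counter)   # (section, hour-bucket) -> Counter of article ids

        def record_view(article_id, section, when=None):
            when = when or datetime.now(timezone.utc)
            hour = when.strftime("%Y%m%d%H")
            views[("all", hour)][article_id] += 1
            views[(section, hour)][article_id] += 1   # powers per-section top-N, need (3)

        def top_n(n, hours=1, section="all"):
            now = datetime.now(timezone.utc)
            total = Counter()
            for h in range(hours):                    # merge only the relevant buckets
                hour = (now - timedelta(hours=h)).strftime("%Y%m%d%H")
                total += views.get((section, hour), Counter())
            return total.most_common(n)

        record_view("a1", "tech"); record_view("a1", "tech"); record_view("a2", "news")
        print(top_n(5, hours=24))                   # [('a1', 2), ('a2', 1)]
        print(top_n(5, hours=24, section="tech"))   # [('a1', 2)]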

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many of the NoSQL data stores available, but can't find one that meets my needs as a robust, production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?
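    Whatever store ends up being chosen, the schema question mostly reduces to two decisions: cluster on the GUID so "all records for one GUID" is a single range read, and partition by month so a year's ~36 billion rows stay in manageable units. A sketch of that key shape, using sqlite3 for brevity (standing in for SQL Server partitioned tables or a wide-column store):

        import sqlite3, uuid

        db = sqlite3.connect(":memory:")

        def table_for(month):                      # one partition per month
            name = f"records_{month}"
            db.execute(f"""CREATE TABLE IF NOT EXISTS {name} (
                guid TEXT NOT NULL, seq INTEGER NOT NULL, payload BLOB NOT NULL,
                PRIMARY KEY (guid, seq)) WITHOUT ROWID""")   # clustered on guid
            return name

        def insert(month, guid, seq, payload):
            db.execute(f"INSERT INTO {table_for(month)} VALUES (?, ?, ?)",
                       (guid, seq, payload))

        def lookup(guid, months):                  # fan out to the relevant partitions
            rows = []
            for m in months:
                rows += db.execute(
                    f"SELECT payload FROM records_{m} WHERE guid = ?", (guid,)).fetchall()
            return rows

        g = str(uuid.uuid4())
        insert("2012_01", g, 1, b"x" * 60)         # ~60-byte payload, per the question
        insert("2012_02", g, 2, b"y" * 60)
        print(len(lookup(g, ["2012_01", "2012_02"])))   # 2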

    Read the article
