Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 527/588 | < Previous Page | 523 524 525 526 527 528 529 530 531 532 533 534  | Next Page >

  • ruby on rails one-to-many relationship

    - by fenec
    I would like to model a betting system using the power of Rails. Let's start with something very simple: modelling the relationship between a user and a bet. I would like a Bet model with two foreign keys to users. Here are my migrations:

        class CreateBets < ActiveRecord::Migration
          def self.up
            create_table :bets do |t|
              t.integer :user_1_id
              t.integer :user_2_id
              t.integer :amount
              t.timestamps
            end
          end

          def self.down
            drop_table :bets
          end
        end

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.string :name
              t.timestamps
            end
          end

          def self.down
            drop_table :users
          end
        end

    The models:

        class Bet < ActiveRecord::Base
          belongs_to :user_1, :class_name => 'User'
          belongs_to :user_2, :class_name => 'User'
        end

        class User < ActiveRecord::Base
          has_many :bets, :foreign_key => :user_1_id
          has_many :bets, :foreign_key => :user_2_id
        end

    When I test the relationships in the console, I get an error:

        u1 = User.create :name => "aa"
        u2 = User.create :name => "bb"
        b = Bet.create(:user_1 => u1, :user_2 => u2)   # ***** error *****

    Questions: 1. How do I define the relationships between these tables correctly? 2. Are there any conventions for naming the attributes (e.g. user_1_id)? Thank you for your help.
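
    One possible fix (a sketch, assuming the Rails 2.x-era ActiveRecord syntax used above; the association names placed_bets and received_bets are illustrative, not from the question): two has_many declarations cannot share the name :bets, so give each side its own name and point both at the Bet class explicitly.

        class Bet < ActiveRecord::Base
          belongs_to :user_1, :class_name => 'User'   # bets.user_1_id -> users.id
          belongs_to :user_2, :class_name => 'User'   # bets.user_2_id -> users.id
        end

        class User < ActiveRecord::Base
          has_many :placed_bets,   :class_name => 'Bet', :foreign_key => 'user_1_id'
          has_many :received_bets, :class_name => 'Bet', :foreign_key => 'user_2_id'
        end

    With this in place, Bet.create(:user_1 => u1, :user_2 => u2) fills both columns. The user_1_id naming does follow the <association>_id convention; role-based names (e.g. proposer_id) often read better, though.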

    Read the article

  • Nhibernate: Stop it from joining to a table that is not needed

    - by Aaron
    I have two tables (tbArea, tbPost) that map to the following classes:

        class Area { int ID; string Name; ... }
        class Post { int ID; string Title; Area Area; ... }

    These two classes are mapped with Fluent NHibernate. Below is the Post mapping:

        public class PostMapping : ClassMap<Post>
        {
            public PostMapping()
            {
                Cache.NonStrictReadWrite();
                this.Table("tbPost");
                Id(x => x.ID)
                    .Column("PostID")
                    .GeneratedBy.Identity();
                References(x => x.Area)
                    .ForeignKey("AreaID")
                    .Column("AreaID");
                ...
            }
        }

    Any time I query the Post table "where AreaID = 1" (any AreaID), NHibernate joins to the Area table, generating something like:

        SELECT post fields, area fields (automatically added)
        FROM tbPost p
        LEFT JOIN tbArea a ON p.AreaID = a.AreaID
        WHERE p.AreaID = 1

    I have tried setting Area to LazyLoad, to Fetch.Select, to ReadOnly, and every other setting on the reference, and it still always joins to Area. I am trying to optimize the backend database queries, and since I don't need the Area object loaded, just filtered on, I would like to eliminate the unnecessary join each time I query Post. What configuration or mapping changes do I need so that Area stays related to Post in my objects, but is not queried when I filter on AreaID?
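
    One thing worth trying (a sketch, assuming NHibernate's LINQ provider and a lazily mapped many-to-one): filter on the identifier of the association rather than on a loaded Area. Because the identifier lives in the AreaID column of tbPost itself, NHibernate can usually compare it without touching tbArea:

        // Compares tbPost.AreaID directly; no Area columns are selected
        // and, with lazy loading on, no join to tbArea should be emitted.
        var posts = session.Query<Post>()
                           .Where(p => p.Area.ID == 1)
                           .ToList();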

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML, and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between the two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor were selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often produced very long floats for background images. My current approach takes the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers along, pick some fixed factor, or use one of the conversions above, I am looking for a method to reliably scale this list of integers to small whole numbers or simple fractions, because that would be least surprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make sense for the sizes of entities to cluster around it.
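
    One way to harden the GCD approach (a sketch in Python; the 90% threshold is an arbitrary assumption): instead of the exact GCD of the whole sequence, take the largest divisor that an agreed share of the values accepts, so a single bad apple cannot drag the factor down to 1.

        def robust_gcd(values, keep=0.9):
            """Largest d that exactly divides at least `keep` of the inputs."""
            values = [v for v in values if v > 0]
            need = max(1, int(round(keep * len(values))))
            best = 1
            for d in range(2, max(values) + 1):
                if sum(v % d == 0 for v in values) >= need:
                    best = d  # remember the largest qualifying divisor
            return best

        # e.g. robust_gcd([32, 64, 96, 320, 7]) == 32; dividing every
        # coordinate by 32 leaves the outlier as a fraction, not the factor.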

    Read the article

  • ASP.Net forms authentication - multiple providers

    - by Chris Klepeis
    I have an ASP.NET 4.0 application, and within it is a folder called "Forum", set up as a sub-application in IIS 7. This forum package implements a custom provider for .NET Membership, and runs on .NET 3.5. I'd like to set up the main site so that when users log in, it logs them into both my site and the forum site. The main site and the forum have separate .NET Membership tables. How can I specify which provider to use with forms authentication? Right now I have:

        FormsAuthentication.SetAuthCookie(...);

    This, however, just uses my default provider and does nothing with the forum's provider. I tried setting the forum app and my web app to the same cookie name, as well as setting the machineKey on each:

        <machineKey validationKey="AutoGenerate" validation="SHA1" />

    No dice. I googled and didn't really come up with any example of how to use multiple providers like this. I updated my web.config to list both providers, but that is useless if I cannot specify in my code which one to use.
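
    One way to target a named provider in code (a sketch; "ForumProvider" stands in for whatever name the forum's provider is registered under in web.config) is to validate against it explicitly instead of relying on the default:

        MembershipProvider forum = Membership.Providers["ForumProvider"];
        if (Membership.ValidateUser(userName, password) &&
            forum != null && forum.ValidateUser(userName, password))
        {
            // A single cookie can then serve both apps, provided the cookie
            // name and path/domain match and the machineKey is explicit:
            // AutoGenerate produces per-application keys, which breaks sharing.
            FormsAuthentication.SetAuthCookie(userName, false);
        }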

    Read the article

  • Linq In Clause & Predicate building

    - by Michael G
    I have two tables, Report and ReportData, where ReportData has a constraint ReportID. How can I write my LINQ query to return all Report objects where the predicate conditions are met on ReportData? Something like this in SQL:

        SELECT * FROM Report AS r
        WHERE r.ServiceID = 3
          AND r.ReportID IN (SELECT ReportID FROM ReportData
                             WHERE JobID LIKE 'Something%')

    This is how I'm building my predicate:

        Expression<Func<ReportData, bool>> predicate = PredicateBuilder.True<ReportData>();
        predicate = predicate.And(x => x.JobID.StartsWith(QueryConfig.Instance.DataStreamName));
        var q = engine.GetReports(predicate, reportsDataContext);
        reports = q.ToList();

    And this is my query construction at the moment:

        public override IQueryable<Report> GetReports(
            Expression<Func<ReportData, bool>> predicate, LLReportsDataContext reportDC)
        {
            if (reportDC == null)
                throw new ArgumentNullException("reportDC");
            var q = reportDC.ReportDatas
                            .Where(predicate)
                            .Where(r => r.ServiceID.Equals(1))
                            .Select(r => r.Report);
            return q;
        }
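
    For comparison, the IN subquery can also be written directly (a sketch, assuming LINQ to SQL with a Reports table on the context and a ReportID property matching the SQL above):

        var reports = reportDC.Reports
            .Where(r => r.ServiceID == 3 &&
                        reportDC.ReportDatas.Any(d => d.ReportID == r.ReportID &&
                                                      d.JobID.StartsWith("Something")))
            .ToList();

    LINQ to SQL translates the Any() into an EXISTS/IN-style subquery, so the filtering stays on the server.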

    Read the article

  • How to not use JavaScript within the elements' event attributes but still load via AJAX

    - by thecoshman
    I am currently loading HTML content via AJAX. I have code for things on different elements' onclick attributes (and other event attributes). It does work, but I am starting to find that the code is getting rather large and hard to read. I have also read that it is considered bad practice to have the event code 'inline' like this, and that I should really do it by element.onclick = foobar, with foobar defined somewhere else. I understand how, with a static page, it is fairly easy to do this: just have a script tag at the bottom of the page and have it executed once the page has loaded; it can then attach any and all events as you need them. But how can I get this sort of effect when loading content via AJAX? There is also the slight complication that the content loaded can vary depending on what is in the database; sometimes certain sections of HTML, such as tables of results, will not even be displayed - there will be something else entirely. I can post some samples of code if anybody needs them, but I have no idea what sort of thing would help people with this one. I will point out that I am already using jQuery, so if it has some helpful little functions for this, that would be rather sweet.
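
    jQuery's delegated events cover exactly this case (a sketch, assuming jQuery 1.7+ for .on(); older versions spell it .delegate() or .live(); #content, .result-row, /results and foobar are all illustrative names): bind once to a container that the AJAX loads never replace, and the handler fires for matching elements added later.

        $(function () {
            // One binding, made before any content exists; it fires for every
            // .result-row that any later AJAX load inserts into #content.
            $('#content').on('click', '.result-row', function () {
                foobar(this);
            });

            // Load markup with no inline onclick attributes at all.
            $('#content').load('/results');
        });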

    Read the article

  • UITableView with Custom Cell only loads one row of data at a time

    - by Anton
    For my table, I've created a custom cell that has two labels. I also have two arrays of data that I use to assign to each label in the cell. My problem is that when the table is viewed, only the first row has data, and the rest of the data "sorta" loads when I start scrolling down. By "sorta", I mean only the first row will load at a time. The only time I get to see all the data is when I scroll to the bottom, then scroll up again. I've used the cellForRowAtIndexPath method to load the data into my cell:

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CustomCellIdentifier = @"CustomCellIdentifier";
            CustomCell *cell = (CustomCell *)[tableView
                dequeueReusableCellWithIdentifier:CustomCellIdentifier];
            if (cell == nil) {
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CustomCell"
                                                             owner:self
                                                           options:nil];
                for (id oneObject in nib)
                    if ([oneObject isKindOfClass:[CustomCell class]])
                        cell = (CustomCell *)oneObject;
            }

            NSUInteger row = [indexPath row];

            // I have two tables to fill, hence the tag check
            if ([tableView tag] == 0) {
                cell.arriveTimeLabel.text = [homeBoundTimes objectAtIndex:row];
                cell.departTimeLabel.text = [homeBoundTimes objectAtIndex:row];
            } else if ([tableView tag] == 1) {
                cell.arriveTimeLabel.text = [workBoundTimes objectAtIndex:row];
                cell.departTimeLabel.text = [workBoundTimes objectAtIndex:row];
            }
            return cell;
        }

    Is it possible that it's only getting called when the top cell is being drawn?

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site, as well as database changes. Requirements:

        * should be in the database
        * should be easily searchable by initiator (user name / session id),
          event (activity type) and event parameters

    I can think of a database design, but it either involves a lot of tables (one per event type), so I can log each parameter of an event in a separate field, OR it involves one table with generic fields (7 int and 7 text columns) and logs everything in that one table, with an event-type field determining which parameter got written where (and hoping I never need more than 7 fields of a certain type, or 8, or 9, or whatever number I choose). Example entries (the usual things):

        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password, estimated password strength [low/ok/high/perfect] @datetime
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries via automated process @datetime

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do love the one-event-per-table solution more than the generic one. Any thoughts? Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Stack Overflow says the question appears subjective and is likely to be closed; it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. I've also tried narrowing the ideas down to these 2, so hopefully one will prevail, unless there is already an established solution for this kind of thing.
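
    A middle ground between the two designs (a sketch; column sizes are guesses): one narrow event table plus a key/value parameter table. There is no per-event table explosion, no fixed budget of spare columns, and both the initiator and the parameters stay searchable:

        CREATE TABLE event (
            event_id    INT          NOT NULL PRIMARY KEY,
            event_type  VARCHAR(50)  NOT NULL,  -- 'login_failed', 'password_changed', ...
            username    VARCHAR(50)  NULL,
            session_id  VARCHAR(100) NULL,
            created_at  DATETIME     NOT NULL
        );

        CREATE TABLE event_param (
            event_id    INT          NOT NULL REFERENCES event (event_id),
            name        VARCHAR(50)  NOT NULL, -- 'search_string', 'result_id', ...
            value       VARCHAR(255) NOT NULL,
            PRIMARY KEY (event_id, name)
        );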

    Read the article

  • Use Django ORM as standalone [closed]

    - by KeyboardInterrupt
    Possible duplicates: Use only some parts of Django? / Using only the DB part of Django. I want to use the Django ORM standalone. Despite an hour of searching Google, I'm still left with several questions:

        * Does it require me to set up my Python project with a settings.py,
          a /myApp/ directory, and a models.py file?
        * Can I create a new models.py and run syncdb to have it automatically
          set up the tables and relationships, or can I only use models from
          existing Django projects?
        * There seem to be a lot of questions regarding PYTHONPATH. If you're
          not calling existing models, is this needed?

    I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files, e.g.:

        db/
            __init__.py
            settings.py
            myScript.py
            orm/
                __init__.py
                models.py

    And the basic essentials:

        # settings.py
        from django.conf import settings
        settings.configure(
            DATABASE_ENGINE   = "postgresql_psycopg2",
            DATABASE_HOST     = "localhost",
            DATABASE_NAME     = "dbName",
            DATABASE_USER     = "user",
            DATABASE_PASSWORD = "pass",
            DATABASE_PORT     = "5432"
        )

        # orm/models.py
        # ...

        # myScript.py
        # import models...

    And whether you need to run something like django-admin.py inspectdb. (Oh, I'm running Windows, if that changes anything regarding command-line arguments.)
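
    A minimal driver along those lines (a sketch against the pre-1.2 settings style shown above, untested; it assumes 'orm' is added to INSTALLED_APPS inside settings.configure() so syncdb can discover the models, and SomeModel is a hypothetical model in orm/models.py):

        # myScript.py -- run with plain "python myScript.py" (Windows included)
        import settings                         # executes settings.configure(...)

        from django.core.management import call_command
        from orm.models import SomeModel        # hypothetical model

        call_command('syncdb')                  # creates tables for 'orm' models
        SomeModel.objects.create(name='example')

    inspectdb is only needed for the reverse direction, i.e. generating models from tables that already exist.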

    Read the article

  • Creating a [materialised]view from generic data in Oracle/Mysql

    - by Andrew White
    I have a generic data model with 3 tables:

        CREATE TABLE Properties (
            propertyId int(11) NOT NULL AUTO_INCREMENT,
            name varchar(80) NOT NULL
        );

        CREATE TABLE Customers (
            customerId int(11) NOT NULL AUTO_INCREMENT,
            customerName varchar(80) NOT NULL
        );

        CREATE TABLE PropertyValues (
            propertyId int(11) NOT NULL,
            customerId int(11) NOT NULL,
            value varchar(80) NOT NULL
        );

        INSERT INTO Properties VALUES (1, 'Age');
        INSERT INTO Properties VALUES (2, 'Weight');
        INSERT INTO Customers VALUES (1, 'Bob');
        INSERT INTO Customers VALUES (2, 'Tom');
        INSERT INTO PropertyValues VALUES (1, 1, '34');
        INSERT INTO PropertyValues VALUES (2, 1, '80KG');
        INSERT INTO PropertyValues VALUES (1, 2, '24');
        INSERT INTO PropertyValues VALUES (2, 2, '53KG');

    What I would like to do is create a view whose columns are all the rows in Properties and whose rows are the entries in Customers, with the cell values populated from PropertyValues. E.g.:

        customerId   Age   Weight
        1            34    80KG
        2            24    53KG

    I'm thinking I need a stored procedure to do this, and perhaps a materialized view (the entries in the Properties table change rarely). Any tips?
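
    One way to build the pivot without a stored procedure (a sketch in the MySQL dialect; each property becomes one MAX(CASE ...) column, so the view has to be regenerated whenever Properties changes):

        CREATE VIEW customer_properties AS
        SELECT c.customerId,
               MAX(CASE WHEN p.name = 'Age'    THEN pv.value END) AS Age,
               MAX(CASE WHEN p.name = 'Weight' THEN pv.value END) AS Weight
        FROM Customers c
        LEFT JOIN PropertyValues pv ON pv.customerId = c.customerId
        LEFT JOIN Properties p      ON p.propertyId  = pv.propertyId
        GROUP BY c.customerId;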

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, 1|bar to the properties table, and 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
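
    A sketch of the association-object pattern from that documentation page (0.5-era API; it assumes a composite primary key is added to property_values, since a table needs one before it can be mapped to a class):

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id'), primary_key=True),
            Column('property_id', Integer, ForeignKey('properties.id'), primary_key=True),
            Column('value', String(20))
        )

        class PropertyValue(object):
            def __init__(self, property, value):
                self.property = property
                self.value = value

        mapper(PropertyValue, property_values, properties={
            'property': relation(Property)
        })
        mapper(Word, words, properties={
            'property_values': relation(PropertyValue, backref='word')
        })

        # word['bar'] = 'yes' then amounts to:
        word = Word('foo', 42)
        word.property_values.append(PropertyValue(Property('bar'), 'yes'))
        s.add(word)
        s.commit()  # inserts into words, properties and property_values

    A dict-style word['bar'] front end can be layered on top with an association proxy, but the plain association object is the part that fills in the value column.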

    Read the article

  • How to reference using Entity Framework and Asp.Net Mvc 2

    - by Picflight
    Tables:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [BirthDate] [smalldatetime] NULL,
            [CountryId] [int] NULL,
            CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                      IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                      ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        CREATE TABLE [dbo].[TeamMember](
            [UserId] [int] NOT NULL,
            [TeamMemberUserId] [int] NOT NULL,
            [CreateDate] [smalldatetime] NOT NULL
                CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()),
            CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED
                ([UserId] ASC, [TeamMemberUserId] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                      IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                      ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    dbo.TeamMember has both UserId and TeamMemberUserId as its key. My goal is to show a list of users in my view, and in that list to flag, or highlight, the users that are team members of the logged-in user. My view model:

        public class UserViewModel
        {
            public int UserId { get; private set; }
            public string UserName { get; private set; }
            public bool HighLight { get; private set; }

            public UserViewModel(Users users, bool highlight)
            {
                this.UserId = users.UserId;
                this.UserName = users.UserName;
                this.HighLight = highlight;
            }
        }

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %>
        <% foreach (var item in Model) { %>
            <%= item.UserId %>
            <%= item.UserName %>
            <% if (item.HighLight) { %>
                Team Member
            <% } else { %>
                Not Team Member
            <% } %>
        <% } %>

    How do I toggle Team Member or not? If I add dbo.TeamMember to the EDM, there are no relationships on this table, so how do I wire it to the Users object? In effect I am comparing the logged-in UserId with this list:

        SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId
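
    One way to build the flag without wiring a relationship into the EDM (a sketch, assuming dbo.TeamMember is imported as an entity set named TeamMembers; the Contains check runs in memory, after the users are materialized):

        // ids of the logged-in user's team members
        List<int> memberIds = context.TeamMembers
                                     .Where(tm => tm.UserId == loggedInUserId)
                                     .Select(tm => tm.TeamMemberUserId)
                                     .ToList();

        // flag each user as the view model is built
        List<UserViewModel> model = context.Users
            .ToList()   // materialize first; EF 3.5 cannot translate Contains
            .Select(u => new UserViewModel(u, memberIds.Contains(u.UserId)))
            .ToList();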

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A
        {
        public:
            A()  { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor" << endl; }
        };

        class B : public A
        {
        public:
            B()  { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor" << endl; }
        };

    1) The first thing B's constructor does is call A's constructor (I assume the compiler inserts this automatically). Likewise, the last thing B's destructor does is call A's destructor (the compiler inserts that too). Why was it built this way? 2) When I say A *a = new B(); the compiler creates a new B object, checks whether A is a base class of B, and if it is, allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors (with help from @Tyler McHenry and @Konrad Rudolph). 3) When I write delete a, the compiler sees that a is a pointer of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land. Please comment.
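
    A minimal demonstration of point 3 (a sketch; with ~A() declared virtual, deleting through the base pointer runs both destructors in reverse construction order):

        #include <iostream>

        struct A {
            A()          { std::cout << "A constructor\n"; }
            virtual ~A() { std::cout << "A destructor\n"; }  // virtual: delete via A* is safe
        };

        struct B : A {
            B()  { std::cout << "B constructor\n"; }
            ~B() { std::cout << "B destructor\n"; }
        };

        int main() {
            A *a = new B();  // prints: A constructor, B constructor
            delete a;        // prints: B destructor, A destructor
        }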

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory; the application runs on an embedded system, so I am very limited in space, and I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers, I mean 32-bit values. Basically the idea is to use some compression method to reduce the amount of memory, without losing much precision. This thing needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to access one value of the table, do some operations with it, and then update the value. In the end what I should have is a function to which I pass an index and get a value back, and another function to write a value into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know any other method? Thanks.
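
    One simple lossy scheme to compare against tile coding (a sketch in C; the codebook size and how its entries are chosen are open assumptions): store one 8-bit code per entry plus a 256-value codebook, cutting each 32-bit slot to a quarter. Reads are a single indirection; writes round to the nearest representable value, so the precision loss is bounded by the codebook spacing.

        #include <stdint.h>

        #define TABLE_SIZE (2u * 1000u * 1000u)
        #define CODEBOOK_SIZE 256u

        static uint8_t table[TABLE_SIZE];        /* 1 byte per entry       */
        static int32_t codebook[CODEBOOK_SIZE];  /* representative values  */

        int32_t lut_read(uint32_t i) {
            return codebook[table[i]];
        }

        void lut_write(uint32_t i, int32_t v) {
            /* nearest-code search; a sorted codebook allows binary search */
            uint32_t best = 0;
            uint64_t best_err = UINT64_MAX;
            for (uint32_t c = 0; c < CODEBOOK_SIZE; c++) {
                int64_t d = (int64_t)v - (int64_t)codebook[c];
                uint64_t err = (uint64_t)(d < 0 ? -d : d);
                if (err < best_err) { best_err = err; best = c; }
            }
            table[i] = (uint8_t)best;
        }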

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table data (also in Visual Studio's Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. The context log from the execution prints:

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persisted in the database: when I query it (Visual Studio Server Explorer - New Query), the table is empty every time. I am using a SQL Server database file (.mdf).

    Read the article

  • How can I make this SQL query more efficient? (PHP)

    - by Alan Grant
    Hi all, I have a system whereby a user can view the categories they've subscribed to individually, and also those available by default in the region they belong to. The tables are as follows:

        Categories
        UsersCategories
        RegionsCategories

    I'm querying the DB for all the categories within the user's region, plus all the individual categories they've subscribed to. My query is:

        SELECT * FROM (categories c)
        LEFT JOIN users_categories uc   ON uc.category_id = c.id
        LEFT JOIN regions_categories rc ON rc.category_id = c.id
        WHERE (rc.region_id = ? OR uc.user_id = ?)

    At least I believe that's the query; I'm creating it using Cake's ORM layer, so the exact code is:

        $conditions = array(
            array(
                "OR" => array(
                    'RegionsCategories.region_id' => $region_id,
                    'UsersCategories.user_id' => $user_id
                )
            ));
        $this->find('all', $conditions);

    This turns out to be incredibly slow - sometimes around 20 seconds or so, with roughly 5,000 rows per table. Is my design at fault here? How can I retrieve both the user's individual categories and those within their region, all in one query, without it taking ages? Thanks!
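
    One rewrite worth benchmarking (a sketch; it assumes indexes exist on users_categories.user_id and regions_categories.region_id): an OR spanning two different joined tables often prevents index use, whereas a UNION lets each half use its own index and deduplicates the combined result:

        SELECT c.* FROM categories c
        JOIN regions_categories rc ON rc.category_id = c.id AND rc.region_id = ?
        UNION
        SELECT c.* FROM categories c
        JOIN users_categories uc ON uc.category_id = c.id AND uc.user_id = ?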

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows:

        user
        groups
        group_user_mappings
        acl

    where acl is the mapping between an item_id and either a group or a user. The acl table has 3 columns of note (the 4th is an auto-increment id, which is irrelevant here):

        col 1: item_id  (item to access)
        col 2: user_id  (user allowed to access it)
        col 3: group_id (group allowed to access it)

    So, for example:

        item1, peter,
        item2,      , group1
        item3, jane,

    Each row in the acl table grants access either to a user or to a group: any one line has either an item-user mapping or an item-group mapping. If I want a query that returns all objects a user has access to, I think I need SQL with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? It seems a little messy. I was thinking I could get around it by creating a single-user group for each person, so I only ever deal with groups in my SQL, but that seems a little messy as well.
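
    The UNION version in full (a sketch; the item table name is illustrative, and ? stands for the user in question both times):

        SELECT i.* FROM item i
        JOIN acl ON acl.item_id = i.item_id
        WHERE acl.user_id = ?
        UNION
        SELECT i.* FROM item i
        JOIN acl                   ON acl.item_id = i.item_id
        JOIN group_user_mappings m ON m.group_id  = acl.group_id
        WHERE m.user_id = ?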

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Project that has the following relationships: it has many Contributions and has many Payments. In my result set, I need the following aggregate values:

        * number of unique contributors (DonorID on the Contribution table)
        * total contributed (SUM of Amount on the Contribution table)
        * total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
               (SELECT SUM(PaymentAmount) FROM Payment
                WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
               (SELECT COUNT(DISTINCT DonorID) FROM Contribution
                WHERE RecipientID = PROJECT_ID) AS ContributorCount,
               (SELECT SUM(Amount) FROM Contribution
                WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
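
    A third option for comparison (a sketch): aggregate each child table once in a derived table and join the results. This avoids both the per-row correlated subqueries and the temp-table bookkeeping, and the result can still be sorted and filtered as a whole:

        SELECT p.ID,
               COALESCE(pay.total_paid, 0)     AS TotalPaidBack,
               COALESCE(con.donor_count, 0)    AS ContributorCount,
               COALESCE(con.total_received, 0) AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
                   FROM Payment GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS donor_count,
                          SUM(Amount)             AS total_received
                   FROM Contribution GROUP BY RecipientID) con ON con.RecipientID = p.ID;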

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties - for example, select all entries between times T1 and T2 where the noise level is smaller than X. On the other hand, I would like to use a NoSQL/key-value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype I might come across. I know that you cannot use multiple inequality filters in the Google Datastore (source), that this feature is coming (according to this), and that it is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder: is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, MongoDB? I've looked in the documentation, but the only thing I found was the following snippet:

        Note that any of the operators on this page can be combined in the same
        query document. For example, to find all documents where j is not equal
        to 3 and k is greater than 10, you'd query like so:

        db.things.find({j: {$ne: 3}, k: {$gt: 10}});

    So they use greater-than and not-equal on two different properties; they don't say anything about two inequalities. Any input and enlightenment is welcome :-)
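
    For what it's worth, the same find() syntax accepts a range on each of two fields (a sketch of the noise-level example; entries, timestamp and noiselevel are illustrative names, and the compound index determines how efficiently MongoDB can serve the query):

        // all entries between times t1 and t2 with noise level below x
        db.entries.find({
            timestamp:  { $gte: t1, $lte: t2 },
            noiselevel: { $lt: x }
        });

        // supporting index
        db.entries.ensureIndex({ timestamp: 1, noiselevel: 1 });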

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second), and a (web) application that lets a user look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. On that form there are "print" and "download" buttons that either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used; i.e., I can't go back to the DB for the latest data.

    Details:

        * the SQL CE database is exposed through a WCF web service
        * a snapshot consists of up to 500 records, 10 columns each
        * an expiration time of 2 hours on the snapshot is sufficient
        * it is a low-traffic application, so I don't expect more than a few (5) connections at a time
        * losing a snapshot is not a big deal; the user can simply generate a new one
        * the database is accessed by a self-hosted WCF web service using LINQ to SQL
        * the web site is ASP.NET MVC hosted on UltiDev Cassini
        * the database and web site will most likely be on the same box when deployed
        * the entire app is intranet-bound

    Problem: I need to cache the snapshot of the data at the moment the user presses "take a snapshot", so that I can use the same data to generate the print page or the download file.

    Solution 1: each time a snapshot is needed, create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.

    Solution 2: cache the snapshot in memory on either the DB server or the web server.

    Question: is there anything wrong with the proposed solutions? Any different suggestions?
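
    Solution 2 in sketch form (ASP.NET side; Record, client and GetSnapshotRows are illustrative names): stash the rows in the web app's cache under a random key, and let the snapshot window carry the key in its URL so the print and download actions re-read the exact same rows:

        // "take a snapshot" action
        public ActionResult TakeSnapshot()
        {
            List<Record> rows = client.GetSnapshotRows();      // WCF call
            string key = Guid.NewGuid().ToString("N");
            HttpContext.Cache.Insert("snapshot:" + key, rows, null,
                DateTime.UtcNow.AddHours(2),                   // 2h expiry
                System.Web.Caching.Cache.NoSlidingExpiration);
            return RedirectToAction("Snapshot", new { id = key });
        }

        // print and download actions fetch the identical data
        private List<Record> GetSnapshot(string id)
        {
            return HttpContext.Cache["snapshot:" + id] as List<Record>;
        }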

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording, which currently has 500,000 records. I need to add a lot of new inputs that relate to the ActivityRecording table; the relation of ActivityRecording to these new input fields is 1 to 0..1. What happens on screen is that when the user fills in the ActivityRecording data, he is then taken to a new page showing a form based on his input (from a dropdown named Service) in ActivityRecording. There are 6 different kinds of form (each with 7-8 inputs, including textareas of about 5KB, textboxes and checkboxes), and for one ActivityRecording the user fills in one of the 6 forms. There are two ways I know of (there could be more) to design the data structure:

        1. Add all the inputs from all 6 forms to the ActivityRecording table. The
           columns belonging to 5 of the forms will then be NULL in each row; only
           the columns belonging to one form will have values.
        2. Add 6 new tables (one per form) and add 6 foreign key columns to
           ActivityRecording. Out of the 6 foreign keys, 5 will be NULL and one
           will actually point to a table.

    Which approach is the better design? Please take into consideration that the table already has 500,000 rows and is expected to grow at a faster rate now.
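
    A third layout worth considering (a sketch; table and column names are illustrative) flips the foreign keys: each form table uses the ActivityRecording key as its own primary key. ActivityRecording then carries no nullable FK columns at all, and each recording can have at most one row per form table:

        CREATE TABLE ActivityRecording (
            ActivityRecordingId INT NOT NULL PRIMARY KEY,
            Service             INT NOT NULL  -- dropdown that selects one of the 6 forms
            -- ...
        );

        CREATE TABLE FormA (                  -- one such table per form type
            ActivityRecordingId INT NOT NULL PRIMARY KEY
                REFERENCES ActivityRecording (ActivityRecordingId),
            Notes               VARCHAR(5000) -- the ~5KB textarea
            -- ... remaining 7-8 inputs
        );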

    Read the article

  • Reading/Writing DataTables to and from an OleDb Database with LINQ

    - by jsmith
    My current project is to take information from an OleDb database and .CSV files and place it all into a larger OleDb database. I have currently read all the information I need from both the .CSV files and the OleDb database into DataTables. Where it gets hairy is writing all of the information back to another OleDb database. Right now my method is to do something like this:

        OleDbTransaction myTransaction = null;
        try
        {
            OleDbConnection conn = new OleDbConnection(
                "PROVIDER=Microsoft.Jet.OLEDB.4.0;Data Source=" + Database);
            conn.Open();
            OleDbCommand command = conn.CreateCommand();
            command.Transaction = myTransaction;

            string strSQL = "INSERT INTO [TABLE] (FirstName, LastName) VALUES ('"
                + FirstName + "', '" + LastName + "')";
            command.CommandType = CommandType.Text;
            command.CommandText = strSQL;
            command.ExecuteNonQuery();
            conn.Close();
        }
        catch (Exception)
        {
            // if invalid data is entered, roll back the transaction
            myTransaction.Rollback();
        }

    Of course, this is very basic, and I'm using an SQL command to commit my transactions over a connection. My problem is that I could do this, but I have about 200 fields that need to be inserted over several tables. I'm willing to do the legwork if that's the only way. But I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
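
    Even without LINQ, a data adapter can generate the INSERTs from the DataTable itself (a sketch; it assumes the destination table's column names match the DataTable's, and that the rows to copy are in the Added state, e.g. brought over with ImportRow):

        using (var conn = new OleDbConnection(connStr))
        {
            var adapter = new OleDbDataAdapter("SELECT * FROM [TABLE]", conn);
            var builder = new OleDbCommandBuilder(adapter); // derives INSERT/UPDATE/DELETE
            adapter.Update(sourceTable);                    // writes the Added rows
        }

    That replaces the 200 hand-written field lists with roughly one Update call per table.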

    Read the article

  • Organizing Eager Queries in an ObjectContext

    - by Nix
    I am messing around with Entity Framework 3.5 SP1, and I am trying to find a cleaner way to do the below. I have an EF model, I am adding some eagerly loaded entity queries, and I want them all to live under an "Eager" property on the context. We originally just changed the entity set name, but it seems a lot cleaner to use a property and keep the entity set name intact. Example:

        Context
          - EntityType
          - AnotherType
          - Eager (each of these uses .Include to pull in all associated tables)
            - EntityType
            - AnotherType

    Currently I am using composition, but I feel like there is an easier way to do what I want:

        namespace Entities
        {
            public partial class TestObjectContext
            {
                public EagerExtensions Eager { get; set; }

                public TestObjectContext()
                {
                    Eager = new EagerExtensions(this);
                }
            }

            public partial class EagerExtensions
            {
                TestObjectContext context;

                public EagerExtensions(TestObjectContext _context)
                {
                    context = _context;
                }

                public IQueryable<TestEntity> TestEntity
                {
                    get
                    {
                        return context.TestEntity
                            .Include("TestEntityType")
                            .Include("Test.Attached.AttachedType")
                            .AsQueryable();
                    }
                }
            }
        }

        public class Tester
        {
            public void ShowHowIWantIt()
            {
                TestObjectContext context = new TestObjectContext();
                var query = from a in context.Eager.TestEntity
                            select a;
            }
        }

    Read the article

  • using dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULL is not good practice, but I have many optional lookup values which are FKs in the system, so to work around it here is what I am doing: I use NOT NULL for every FK / lookup column, and I make the first row in every lookup table, with PK id = 1, a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, for FKs that have no real lookup value, reference the "none" row at PK = 1. Is this good design, or are there other workarounds?

    Edit - an example. I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood; some do, some don't, depending on the country. So if I make the FK between Postal and Neighborhood NOT NULL, I'm stuck, because some value has to be entered. What I am doing, in essence, is keeping one dummy row in every lookup table just to link the FKs. Row one in the Neighborhood table would be:

        n_id = 1, name = none, etc...

    and in the Postal table I can have:

        postal_code = 3456A3, FK (city) = Moscow, FK (neighborhood_id) = 1 (NOT NULL)

    If I don't have a dummy row in the Neighborhood lookup table, then I have to declare the FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is one example, but a huge number of values would then be blank in many tables.
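
    For contrast, the NULL-based layout (a sketch; table and column names follow the example above): a nullable FK still enforces integrity whenever a value is present, and it avoids every query having to special-case id = 1:

        CREATE TABLE postal (
            postal_code     VARCHAR(10) NOT NULL PRIMARY KEY,
            city_id         INT NOT NULL REFERENCES city (city_id),
            neighborhood_id INT NULL     REFERENCES neighborhood (neighborhood_id)
        );

        -- postal codes with no neighborhood simply store NULL; queries that
        -- must keep such rows use LEFT JOIN neighborhood ...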

    Read the article
