Search Results

Search found 30474 results on 1219 pages for 'relational database'.

  • where should I put the EF entity and data annotations in asp.net mvc + entity framework project

    - by giddy
    So I have a DataEntity class generated by Entity Framework 4 for my SQL Express 2008 database. This data context is exposed via a WCF Data Service (OData) to Silverlight and WinForms clients. Should the data entities + .edmx file (generated by EF4) go in a separate class library? The problem then is that I would specify data annotations for a few entities, and some of them would require MVC-specific attributes (like CompareAttribute), so the class library would also have to reference the MVC DLLs. There also happen to be entity users which will be encapsulated or wrapped into an IIdentity in the website, so it's pretty tied to the MVC website. Or should it maybe go in a Base folder in the MVC project itself? Mostly the website is data-driven around the database: approve users, change global settings, etc. The real business happens in the Silverlight and WinForms apps. I'm using MVC 3 RC2 with Razor. Thanks
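
    One conventional split (a minimal sketch, not the asker's code - Customer, Name and the view model are hypothetical) keeps plain data annotations in metadata "buddy" classes beside the generated entities, while MVC-only attributes such as CompareAttribute stay on view models inside the web project, so the class library never references the MVC DLLs:

        // Class library: buddy class for a generated EF4 entity.
        // MetadataTypeAttribute lives in System.ComponentModel.DataAnnotations,
        // so no MVC reference is needed here.
        using System.ComponentModel.DataAnnotations;

        [MetadataType(typeof(CustomerMetadata))]
        public partial class Customer { }

        public class CustomerMetadata
        {
            [Required]
            [StringLength(100)]
            public string Name { get; set; }
        }

        // MVC project: a view model layers on the MVC-specific attributes,
        // keeping the System.Web.Mvc dependency out of the class library.
        public class RegisterViewModel
        {
            [Required]
            public string Password { get; set; }

            [System.Web.Mvc.Compare("Password")]  // MVC 3's CompareAttribute
            public string ConfirmPassword { get; set; }
        }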

  • Evaluating Django Chained QuerySets Locally

    - by jnadro52
    Hello all, I am hoping someone can help me out with a quick question I have regarding chaining Django querysets. I am noticing a slowdown because I am evaluating many data points in the database to create data trends. I was wondering if there is a way to have the chained filters evaluated locally instead of hitting the database. Here is a (crude) example:

        pastries = Bakery.objects.filter(productType='pastry')  # will obviously always hit the DB when evaluated
        cannoli = pastries.filter(specificType='cannoli')       # can this be evaluated locally instead of hitting the DB, as long as pastries was already evaluated?

    I have checked the docs and I do not see anything specifying this, so I guess it's not possible, but I wanted to check with the 'braintrust' first ;-). BTW - I know that I can do this myself by implementing some methods to loop through these data points and evaluate the criteria, but there are so many data points that my deadline does not permit me to implement this manually. Thanks in advance.
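
    For reference, a chained .filter() always composes new SQL; the queryset's result cache is not consulted for further filtering. Once the first queryset has been evaluated, the only way to avoid a second query is to filter the cached objects in plain Python. A minimal sketch (model and field names as in the question):

        # Evaluate once; list() pulls the rows and caches them in memory.
        pastries = list(Bakery.objects.filter(productType='pastry'))

        # "Filter" locally as ordinary Python -- no further database hit.
        cannoli = [p for p in pastries if p.specificType == 'cannoli']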

  • Flashing and closing PyS60 SIS

    - by Matrich
    I created a .py script for S60 2nd Edition FP3 with a background image, database, and sound files, which are stored in a folder on the C: drive. After several hours I managed to convert it into a SIS, which I sent to my mobile phone. First I installed the SIS on the external memory and it failed to work. I read online that it should be on the same drive as the Python runtime, so I installed it on phone memory, but it just flashes and then closes. I made another SIS with only a database and it worked, so I suspect it fails because it can't find the background image and sound files in the folder. How can I find out the cause of this? Also, how can I include the background image and sound files in the SIS so that they are installed automatically along with it?
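
    One way to see why it dies on startup (a sketch, assuming the script's entry point can be wrapped; the log path and run_application name are assumptions) is to catch the first exception and write the traceback to a file you can read after the app closes:

        import traceback

        try:
            # hypothetical entry point standing in for the real script body
            run_application()
        except:
            # write the crash reason somewhere readable after the app exits
            f = open(u'c:\\crash_log.txt', 'w')
            traceback.print_exc(file=f)
            f.close()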

  • .Remove(object) on a List<T> returned from a LINQ to SQL compiled query won't delete the object, right?

    - by soldieraman
    I am returning two lists from the database using a LINQ to SQL compiled query. While looping over the first list I remove duplicates from the second list, as I don't want to process already-existing objects again. E.g.:

        // oldCustomers is a List returned by my compiled LINQ to SQL statement
        // with .ToList() appended at the end; the same goes for newCustomers.
        foreach (Customer oC in oldCustomers)
        {
            // Do some processing
            newCustomers.Remove(newCustomers.Find(nC => nC.CustomerID == oC.CustomerID));
        }

        foreach (Customer nC in newCustomers)
        {
            // Do some processing
        }

        dataContext.SubmitChanges();

    I expect this to save only the changes made to the customers during my processing, and not to remove or delete any of my customers from the database. Correct? I have tried it and it works fine - but I want to know whether there is any rare case where a customer might actually get removed.
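
    For clarity on the semantics (the snippet below is illustrative, not the asker's code; Customers stands for the hypothetical Table<Customer> on the DataContext): List<T>.Remove only takes an object out of the in-memory list that .ToList() built, and generates no SQL at all. LINQ to SQL deletes a row only when it is explicitly queued:

        // No SQL is generated by this - the entity just leaves the local list:
        newCustomers.Remove(someCustomer);

        // Only an explicit DeleteOnSubmit marks the row for deletion:
        dataContext.Customers.DeleteOnSubmit(someCustomer);
        dataContext.SubmitChanges();

    So the pattern above is safe: SubmitChanges() only sends UPDATEs for the property changes the DataContext has tracked.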

  • Building a web application in Eclipse

    - by Noona
    As a concluding assignment for a data management course, we have to write a web application using the technologies taught throughout the course; this mostly includes XHTML, CSS, JSP, servlets, JDBC, AJAX, and web services. The project will eventually be deployed using Tomcat. We are given the option of choosing the technologies that we see fit. Since this is my first time developing a web application, I am having some uncertainty about where to start. For example, right now I am writing the object classes that will be stored in the database and implementing the operations that will be performed on the database, but I am not sure how to make these operations available to a client through the website. I think I have to write a servlet through which I can extract the request parameters and set the response accordingly, but I would still like a more specific overview of what I am going to do. Can someone link me to a tutorial with an example that uses these technologies while illustrating the stages of the design, so that I can see how all these things fit together in a web project? Thanks
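
    The servlet piece is small enough to sketch (everything here is illustrative, not from a specific tutorial): a servlet registered in web.xml reads the request parameters, delegates to the JDBC layer you are already writing, and forwards to a JSP for rendering:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CustomerServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String id = req.getParameter("id");       // from ?id=... in the URL
                // Object customer = customerDao.find(id); // hypothetical DAO over JDBC
                req.setAttribute("customerId", id);        // hand data to the view
                req.getRequestDispatcher("/customer.jsp").forward(req, resp);
            }
        }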

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is no single right answer, and it really depends on what works for you; I'm hoping to use this thread as a breeding ground for ideas, and I hope that is acceptable in this medium. I'm working on building a Facebook app that will replace an already popular app that gets ~50k hits a day. The original app uses a very typical LAMP setup with help from some Zend libraries for database-layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly). So I'm starting to design the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for recommendations of data storage technologies that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at present I know little to nothing about KV stores). What's your recommendation? How would you tackle this? I'd like to take a completely open approach: although I am most familiar and comfortable with PHP, I want to leave all technical options open.

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems like it's keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally, it works fine. Now, however, it can't re-add the constraint because the database says that it already exists, even though the drop worked fine! Here are the queries I'm working with:

        alter table individual drop constraint INDIVIDUAL_EMP_FK

        ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK
            FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE

    After the constraint is dropped, I've made sure that the object really is gone by using the following queries:

        select object_id('INDIVIDUAL_EMP_FK')
        select * from sys.foreign_keys where name like 'individual%'

    Both return no results (or null), but when I try to add the constraint again, I get:

        The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK".

    Trying to drop it gets me a message that it doesn't exist. Any ideas?
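
    A detail worth checking (the sketch assumes EMPLOYEE's key column is EMPLOYEE_ID, which the FK references implicitly): that exact error text is what SQL Server raises when the new FK fails validation against existing rows, not when the constraint name is already taken - a name collision produces a "There is already an object named ... in the database" error instead. So the "ghost" may really be orphaned data that arrived while the constraint was dropped:

        -- Any rows returned here will make the ADD CONSTRAINT fail with the
        -- exact "conflicted with the FOREIGN KEY constraint" message quoted above.
        SELECT i.EMPLOYEE_ID
        FROM INDIVIDUAL i
        LEFT JOIN EMPLOYEE e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
        WHERE i.EMPLOYEE_ID IS NOT NULL
          AND e.EMPLOYEE_ID IS NULL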

  • How to manipulate a gridview in asp.net when the columns in the gridview are generated at runtime

    - by Nandini
    Hi guys, in my application I have a GridView whose columns are created at runtime - the columns are created as I enter data into a database table. My GridView looks like this:

        Description | Column1 | Column2 | Column3 | ...
        Input volt  | 55      | 56      | 553     | ...
        Output volt | 656     | 45      | 67      | ...

    Column1, Column2, Column3, ... may vary at runtime, and I need to enter values into these columns at runtime, but I can't bind them statically because they only exist at runtime. The first column, "Description", will not change; for each description there will be a value in each column. In the end these data will be saved to the database. How do I add columns to the GridView at runtime?
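
    A minimal sketch of the usual WebForms approach (the two helper methods are hypothetical stand-ins for your own data access): rebuild the volatile columns in code on every request, then bind a DataTable whose column names match:

        protected void Page_Init(object sender, EventArgs e)
        {
            GridView1.AutoGenerateColumns = false;

            // Fixed first column.
            GridView1.Columns.Add(new BoundField { DataField = "Description", HeaderText = "Description" });

            // Volatile columns, driven by whatever is currently in the database.
            foreach (string col in GetColumnNamesFromDatabase())  // hypothetical helper
            {
                GridView1.Columns.Add(new BoundField { DataField = col, HeaderText = col });
            }

            GridView1.DataSource = LoadDataTable();  // hypothetical helper returning a DataTable
            GridView1.DataBind();
        }

    Note that dynamically added columns are not kept in view state, which is why they are re-added in Page_Init on every page load.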

  • Finding ALL cycles in a huge sparse matrix

    - by Andy
    Hi there, first of all I'm quite a Java beginner, so I'm not sure if this is even possible! Basically I have a huge (3+ million node) data source of relational data (i.e. A is friends with B+C+D, B is friends with D+G+Z (but not A - i.e. friendship is unmutual), etc.) and I want to find every cycle within this (not necessarily connected) directed graph. I've found this thread (http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), which pointed me to Donald Johnson's (elementary) cycle-finding algorithm, which, superficially at least, looks like it'll do what I'm after (I'm going to try when I'm back at work on Tuesday - thought it wouldn't hurt to ask in the meanwhile!). I had a quick scan through the code of the Java implementation of Johnson's algorithm (in that thread) and it looks like a matrix of relations is the first step, so I guess my questions are:

    a) Is Java capable of handling a 3-million-by-3-million matrix? (I was planning on representing A-friends-with-B by a binary sparse matrix.)

    b) Do I need to find every connected subgraph as my first problem, or will cycle-finding algorithms handle disjoint data?

    c) Is this actually an appropriate solution for the problem? My understanding of "elementary" cycles is that in the graph below, rather than picking out A-B-C-D-E-F it'll pick out A-B-F, B-C-D, etc., but that's not the end of the world given the task.

              E
             / \
            D---F
           / \ / \
          C---B---A

    d) If necessary, I can simplify the problem by enforcing mutuality in relations - i.e. A-friends-with-B <== B-friends-with-A - and if really necessary I can maybe cut down the data size, but realistically it is always going to be around the 1-million mark.

    z) Is this a P or NP task?! Am I biting off more than I can chew?

    Thanks all, any help appreciated! Andy
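
    On (a): a dense 3M-by-3M matrix is out of reach (9x10^12 cells - terabytes even at one bit per cell), but an adjacency-list representation sized to the number of edges is routine. A minimal sketch (Java 8 syntax; class and method names are made up):

        import java.util.*;

        public class SparseGraph {
            // One entry per edge, so memory scales with the number of
            // friendships (millions), not with 3M x 3M cells.
            private final Map<Integer, Set<Integer>> adj = new HashMap<>();

            public void addEdge(int from, int to) {
                adj.computeIfAbsent(from, k -> new HashSet<>()).add(to);
            }

            public Set<Integer> neighbours(int node) {
                return adj.getOrDefault(node, Collections.emptySet());
            }
        }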

  • Should an event-sourced aggregate root have access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application/domain layer. I have an object model that looks like this:

        public class Person : AggregateRootBase
        {
            private Guid? _bookingId;

            public Person(Identification identification)
            {
                Apply(new PersonCreatedEvent(identification));
            }

            public Booking CreateBooking()
            {
                // Enforce Person invariants
                var booking = new Booking();
                Apply(new PersonBookedEvent(booking.Id));
                return booking;
            }

            public void Release()
            {
                // Enforce Person invariants
                // Should we load the booking here from the aggregate repository?
                // We need to ensure that the booking is released as well.
                var booking = BookingRepository.Load(_bookingId);
                booking.Release();
                Apply(new PersonReleasedEvent(_bookingId));
            }

            [EventHandler]
            public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

            [EventHandler]
            public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
        }

        public class Booking : AggregateRootBase
        {
            private DateTime _bookingDate;
            private DateTime? _releaseDate;

            public Booking()
            {
                // Enforce invariants
                Apply(new BookingCreatedEvent());
            }

            public void Release()
            {
                // Enforce invariants
                Apply(new BookingReleasedEvent());
            }

            [EventHandler]
            public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

            [EventHandler]
            public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

            // Some other business activities unrelated to a person
        }

    With my understanding of DDD so far, Person and Booking are separate aggregate roots for two reasons:

    1. There are times when business components will pull Booking objects separately from the database (e.g. a person who has been released has a previous booking modified due to incorrect information).

    2. There should be no locking contention between Person and Booking whenever a Booking needs to be updated.

    One other business requirement is that a Booking can never occur for a Person more than once at a time. Because of this, I'm concerned about querying the query database on the read side, as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database). Should aggregate roots be allowed to query the event-sourced backing store by id for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?

  • Can Hibernate default a Null String to Empty String

    - by sliver
    In our application we are pulling data from a DB2 mainframe database. If the database has "low values" in a field, Hibernate puts a null value in the object. This occurs even if the column is defined as NOT NULL. As we are doing XML parsing on this, Castor is having trouble with the nulls, so I would like to fix this in Hibernate. Also, all of the Hibernate hbm files are generated, so we can't touch them (they are regenerated from time to time). Is there any way to intercept all Strings and replace nulls with ""?
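
    One conventional route is a custom Hibernate UserType that substitutes "" on read. A sketch against the Hibernate 3.x UserType interface (later versions add a session parameter to nullSafeGet/nullSafeSet; the class name is made up) - since the hbm files are regenerated, the type would need to be applied by the generation templates rather than by hand-editing:

        import java.io.Serializable;
        import java.sql.*;
        import org.hibernate.HibernateException;
        import org.hibernate.usertype.UserType;

        // Maps SQL NULL (e.g. from mainframe low-values) to "" when reading,
        // so downstream XML marshalling never sees a null String.
        public class EmptyStringType implements UserType {
            public int[] sqlTypes() { return new int[] { Types.VARCHAR }; }
            public Class returnedClass() { return String.class; }

            public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
                    throws HibernateException, SQLException {
                String value = rs.getString(names[0]);
                return value == null ? "" : value;
            }

            public void nullSafeSet(PreparedStatement st, Object value, int index)
                    throws HibernateException, SQLException {
                st.setString(index, (String) value);
            }

            public boolean equals(Object x, Object y) { return x == null ? y == null : x.equals(y); }
            public int hashCode(Object x) { return x == null ? 0 : x.hashCode(); }
            public Object deepCopy(Object value) { return value; }   // String is immutable
            public boolean isMutable() { return false; }
            public Object assemble(Serializable cached, Object owner) { return cached; }
            public Serializable disassemble(Object value) { return (Serializable) value; }
            public Object replace(Object original, Object target, Object owner) { return original; }
        }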

  • Optimising speeds in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write across them in order to speed up my data handling.

    As far as the PostgreSQL side is concerned, the overall record count is 140 million, and I have 5 tables related by primary/foreign keys. I am not using joins, as they do not scale here, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts: one into the table and one into each of its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1     -- primary key & indexed
        select q_t from x.qt where q_id=2       -- non-primary key but indexed

    (and similarly for the other queries). Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics:

        Total number of records               : 143,700,000
        Records per file                      : 143,700,000 / 20 = 7,185,000
        Rows per file (table + 5 arrays)      : 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my machine has 8 GB of RAM with a 2nd-generation i7 processor, and I made the following changes to the PostgreSQL configuration file:

        shared_buffers = 2GB
        effective_cache_size = 4GB

    Note on current performance: I have run it for about ten hours, and so far the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.

    Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? If so, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
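
    For what it's worth, two generic levers usually dominate in this shape of job: read each table as one ordered streaming scan instead of millions of point lookups, and append to PyTables in large batches instead of row by row. A minimal sketch (PyTables 3 naming - older 2.x releases spell these openFile/createTable - with a made-up single-column row layout and connection details):

        import psycopg2
        import tables

        conn = psycopg2.connect("dbname=x")     # connection details assumed
        cur = conn.cursor(name='train_scan')    # named => server-side cursor, streamed
        cur.itersize = 10000                    # rows fetched per round trip
        cur.execute("SELECT tr_id FROM x.train ORDER BY tr_id")

        h5 = tables.open_file('train.h5', mode='w')
        train = h5.create_table('/', 'train', {'tr_id': tables.Int64Col()})

        batch = []
        for (tr_id,) in cur:
            batch.append((tr_id,))
            if len(batch) >= 10000:
                train.append(batch)             # one bulk append per 10k rows
                batch = []
        if batch:
            train.append(batch)
        h5.close()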

  • How to consolidate multiple LOG files into one .LDF file in SQL2000

    - by John Galt
    Here is what sp_helpfile says about my current database (recovery model is Simple) in SQL 2000:

        name                     fileid  filename                                size        maxsize    growth      usage
        MasterScratchPad_Data    1       C:\SQLDATA\MasterScratchPad_Data.MDF    6041600 KB  Unlimited  5120000 KB  data only
        MasterScratchPad_Log     2       C:\SQLDATA\MasterScratchPad_Log.LDF     2111304 KB  Unlimited  10%         log only
        MasterScratchPad_X1_Log  3       E:\SQLDATA\MasterScratchPad_X1_Log.LDF  191944 KB   Unlimited  10%         log only

    I'm trying to prepare this for a detach and then an attach to a SQL 2008 instance, but I don't want to keep the 2nd .LDF file (I'd like to have just one file for the log). I have backed up the database. I have issued BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY. I have run multiple DBCC SHRINKFILE commands on both of the LOG files. How can I accomplish this goal of having just one .LDF? I cannot find anything on how to delete the file with fileid 3 and/or how to consolidate multiple files into one log file.
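
    For what it's worth, log files aren't merged so much as emptied and dropped. A minimal sketch of the usual final step (same names as in the sp_helpfile output above):

        -- SQL Server refuses REMOVE FILE while the file still holds active
        -- log records; if it errors, back up / truncate the log, shrink
        -- again, and retry until the file is no longer in use.
        USE MasterScratchPad
        GO
        ALTER DATABASE MasterScratchPad REMOVE FILE MasterScratchPad_X1_Log
        GO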

  • Core Data: Overkill for simple, static UITableView-based iPhone App?

    - by David Foster
    Hello! I have a rather simple iPhone app consisting of numerous views, each containing a single grouped table view. These views are held together in navigation controllers, which are grouped in a tab bar. Simple stuff. My table views do little more than list text (like "Dog", "Cat" and "Weasel"), and this data is served from a collection of plists. It's perhaps worth mentioning too that these tables are 'static' in the sense that their data is predetermined and will only ever be amended (and if so, very rarely indeed) by the developer (in this case, moi). This rudimentary approach has reached its limits though, and I think I'm going to need something a bit more relational. I have worked a tad with Core Data in the past, but only with apps whose data is determined by user input. I have four closely related questions:

    1. Is Core Data overkill for an app consisting mainly of a selection of simple table views?
    2. Do you recommend using Core Data to manage data which is predetermined and extremely unlikely to ever change?
    3. Can one lock Core Data down so that its data can't change, thereby relinquishing my responsibility as the developer to handle the editing and saving of the managed object context?
    4. How do I go about giving Core Data my predetermined data, in a format I know it can work with?

    Thanks a bunch guys.

  • Upgraded activerecord-sqlserver-adapter from 2.2.22 to 2.3.8 and now getting an ODBC error

    - by stuartc
    I have been using MSSQL 2005 with Rails for quite a while now, and decided to bump my gems up on one of my projects and ran into a problem. I moved from 2.2.22 to 2.3.8 (latest as of writing) and all of a sudden I got this:

        ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length

    I'm using a DSN connection with FreeTDS; my database.yml looks like this:

        adapter: sqlserver
        mode: ODBC
        dsn: 'DRIVER=FreeTDS;TDSVER=7.0;SERVER=10.0.0.5;DATABASE=db;Port=1433;UID=user;PWD=pwd;'

    In the meantime I have moved back to 2.2.22, where there are no deprecation warnings and everything seems fine, but obviously, for the sake of staying up to date: any ideas what could have changed in the adapter that could cause this?

  • End User Ad-Hoc Reporting Tool: Microsoft SQL Server Management Studio or Microsoft Access?

    - by schultkl
    Our centralized IT department has suggested two primary ad hoc query tools for our general user base of approximately 200 staff members:

    1. Microsoft SQL Server Management Studio 2008 (SSMS)
    2. Microsoft Access 2003

    Environment: the backend database is a read-only Microsoft SQL Server 2005 database. The schema is 400+ tables; allowing access to the raw data for our general staff would be a disaster. We will be building an "abstraction layer" over the raw data for our general staff to run ad hoc queries against. The abstraction layer will most likely contain a number of views. A number of users have basic knowledge in Microsoft Access; none have used SSMS.

    Which of the above tools (or an alternative) would be best for a decidedly non-techie user base of approximately 200 people? What are the pros and cons of each? Also, the IT department has suggested teaching people T-SQL so they may use SSMS. Is this reasonable?

  • Synchronizing mobile phone contacts with contacts from social networks

    - by Pentium10
    I retrieve a JSON list of contacts from a social network site. It contains firstname, lastname, displayname, data1, data2, etc. What is an efficient way to quickly look up my local phone contacts database and "match" entries based on their name? Since there are firstname, lastname and displayname fields, this can vary. What do you think - how can the best match be achieved? Also, how do I avoid parsing the whole database for each JSON item? I want to avoid JSON_COUNT x MOBILE_COUNT steps.
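
    One way to keep it to a single pass over each side (a sketch: the Local/Remote shapes and the normalization rule are assumptions, not from the question) is to index the local contacts by a normalized name once, then probe that index for every JSON item - O(N + M) lookups instead of N x M comparisons:

        import java.util.*;

        public class ContactMatcher {
            // Minimal stand-ins for the two sources (hypothetical shapes).
            static class Local  { long id; String displayName; }
            static class Remote { String firstName, lastName, displayName; }

            static String normalize(String s) {
                return s == null ? "" : s.trim().toLowerCase().replaceAll("\\s+", " ");
            }

            static Map<Remote, Long> match(List<Local> locals, List<Remote> remotes) {
                Map<String, Long> byName = new HashMap<>();
                for (Local c : locals) byName.put(normalize(c.displayName), c.id);

                Map<Remote, Long> matches = new HashMap<>();
                for (Remote r : remotes) {
                    // Try the display name first, then "first last" as a fallback.
                    Long id = byName.get(normalize(r.displayName));
                    if (id == null) id = byName.get(normalize(r.firstName + " " + r.lastName));
                    if (id != null) matches.put(r, id);
                }
                return matches;
            }
        }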

  • How to get the T-SQL statements being used to update a DataSet

    - by Dennis
    I have a C# DataSet object with one table in it; I've put some data in it and made some changes to that data (in code). Is there a way to get the actual T-SQL queries this DataSet will perform on the SQL Server when I push it to the database with code that looks something like this?

        var dataAdapter = new SqlDataAdapter(cmdText, connection);
        var affected = dataAdapter.Update(updatedDataSet);

    I want to know what queries this DataSet will fire at the database so I can write these changes to a log file in my C# program.
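
    A sketch of one way to see them: the adapter's RowUpdating event fires once per changed row with the command it is about to execute (the console logging is illustrative; this assumes the adapter's insert/update/delete commands are configured, e.g. via a SqlCommandBuilder):

        using System;
        using System.Data.SqlClient;

        var dataAdapter = new SqlDataAdapter(cmdText, connection);
        dataAdapter.RowUpdating += (sender, e) =>
        {
            // e.Command holds the parameterized statement about to run.
            Console.WriteLine(e.Command.CommandText);
            foreach (SqlParameter p in e.Command.Parameters)
                Console.WriteLine("  {0} = {1}", p.ParameterName, p.Value);
        };
        var affected = dataAdapter.Update(updatedDataSet);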

  • iPhone & Web App sync

    - by Jack Welsh
    I am trying to build an iPhone app client for our CRM solution, so our sales people will be able to access the information available in our CRM through an iPhone app. I am trying to find out the recommended approach for architecting the application when it comes to the database. I am not sure where the data should reside: should I store a subset of the data on the iPhone, or should I just rely on web services and pull the information I need from the database whenever I need it? Also, are there any best practices or frameworks for building such applications on the iPhone?

  • Finding Cities within 'X' Kilometers (or Miles)

    - by Mike Curry
    This may or may not be clear; leave me a comment if I am off base or you need more information. Perhaps there is a solution out there already for what I want in PHP. I am looking for a function that will add or subtract a distance from a longitude OR latitude value. Reason: I have a database with all latitudes and longitudes in it and want to form a query to extract all cities within X kilometers (or miles). My query would look something like this:

        SELECT * FROM Cities
        WHERE (Longitude > X1 AND Longitude < X2)
          AND (Latitude > Y1 AND Latitude < Y2)

    where:

        X1 = Longitude - distance
        X2 = Longitude + distance
        Y1 = Latitude - distance
        Y2 = Latitude + distance

    I am working in PHP with a MySQL database. Open to any suggestions! :)
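
    The missing piece is the distance-to-degrees conversion: one degree of latitude is roughly 111 km everywhere, while a degree of longitude shrinks by cos(latitude). A minimal sketch (the function name and sample coordinates are made up; the box is a square approximation, so apply a true distance formula afterwards if you need an exact radius):

        <?php
        // Convert a distance in km into degree offsets around a point.
        function boundingBox($lat, $lon, $km) {
            $dLat = $km / 111.045;                            // ~km per degree of latitude
            $dLon = $km / (111.045 * cos(deg2rad($lat)));     // narrows toward the poles
            return array(
                'latMin' => $lat - $dLat, 'latMax' => $lat + $dLat,
                'lonMin' => $lon - $dLon, 'lonMax' => $lon + $dLon,
            );
        }

        $b = boundingBox(45.0, -75.0, 25);  // 25 km around the point
        $sql = sprintf(
            "SELECT * FROM Cities WHERE Latitude BETWEEN %F AND %F
               AND Longitude BETWEEN %F AND %F",
            $b['latMin'], $b['latMax'], $b['lonMin'], $b['lonMax']
        );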

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separated from the public website. I was wondering what the best way to do this would be. For example, the site has a large social component and a public sales interface, but at the same time there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like to be unaffected by spikes in public traffic (and that should not affect public-facing response times). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating:

    1. Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it used for scaling out the read capabilities of the same application, rather than for serving multiple different ones). I'm also worried about potential latency issues if the slave is not on the same network.

    2. Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and it seemed to advocate this for distributed systems. (Alternatively, in some cases basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail.

    3. Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have those on a completely separate system that sends the data to the public side, while the user/group admin functionality runs on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two options, especially the re-architecting.

    I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

  • Are there Python ORMs out there that support multiple independent databases concurrently in use?

    - by sdt
    I'm writing an application in Python where I wish to use SQLite as the backing store for documents edited by the app, with documents generally living in memory but being saved to disk-based databases when the application saves. Ideally I'd like to use something like an ORM to make access to the data from my Python application code simple. Unfortunately it seems like the majority of Python ORMs, including SQLAlchemy, SQLObject, Django, and Storm, associate the database connection (or engine, or whatever) with the classes representing table data, rather than with instances of those classes. This restricts these ORMs to working with a single database connection across all instances. Since I'd like to support having multiple documents open simultaneously, this isn't going to work for me. Are there any ORMs out there that support this usage model in Python? Bazaar seems to support it, but it's quite out of date and at first glance appears to have some other shortcomings. Thanks for any suggestions!
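
    For what it's worth, SQLAlchemy's binding is looser than the class level: mapped classes share metadata, but each Session can be bound to its own engine, which fits a one-engine-per-open-document model. A minimal sketch (file names invented):

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        Session = sessionmaker()

        # Each open document gets its own engine and its own session;
        # the mapped classes are shared, the connections are not.
        doc_a = Session(bind=create_engine('sqlite:///doc_a.db'))
        doc_b = Session(bind=create_engine('sqlite:///doc_b.db'))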

  • Counting the most tagged tag with MySQL

    - by Jack W-H
    Hi folks, my problem is that I'm trying to count which tag has been used most in a table of user-submitted code, but the problem is the database structure. The current query I'm using is this:

        SELECT tag1, COUNT(tag1) AS counttag
        FROM code
        GROUP BY tag1
        ORDER BY counttag DESC
        LIMIT 1

    This is fine, except it only counts the most frequent occurrence in tag1 - and my database has 5 tags per post, in columns tag1, tag2, tag3, tag4, tag5. How do I get the most frequently occurring tag value across all 5 columns in one query? Jack
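
    One standard MySQL approach (a sketch using the column names above; the NULL/empty filter is an assumption about how unused tag slots are stored) is to stack the five columns into one derived table with UNION ALL, then count over that:

        SELECT tag, COUNT(*) AS counttag
        FROM (
            SELECT tag1 AS tag FROM code
            UNION ALL SELECT tag2 FROM code
            UNION ALL SELECT tag3 FROM code
            UNION ALL SELECT tag4 FROM code
            UNION ALL SELECT tag5 FROM code
        ) AS all_tags
        WHERE tag IS NOT NULL AND tag <> ''
        GROUP BY tag
        ORDER BY counttag DESC
        LIMIT 1;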
