Search Results

Search found 5611 results on 225 pages for 'contained databases'.


  • Sharepoint user details not visible to other users

    - by richardoz
    I am managing a SharePoint site that uses Form Based Authentication. We have several generic lists, document libraries and active task lists that users can create, update and delete. Users can use the people pickers to select/search for everyone. But the users cannot see other users' names, email addresses, etc. in display lists or the people pickers. If I log in as the site collection administrator, I can see everyone's details. So I know the data is available. Updated details on this problem: non-administrator SharePoint users cannot see other users' information. Example: User A assigns a task to user B. User A creates a new task and uses the people picker to find user B. User B is only visible by the login name “bname” and any information about user B is not visible or searchable within the people picker. Once user B is assigned the task, user A no longer sees the name in the task list – even though user A created it. No modified by, created by, assigned to or owner field data is visible to non-administrator users. Facts: The extranet site is configured to use Forms Based Authentication. The intranet uses Windows-based authentication. Users of both the intranet and extranet have the same problem. All databases are local. The site uses SSRS integration. SharePoint WSS on Windows 2003 Std. -- After activating verbose logging, it looks like SharePoint is definitely asking SQL Server for only the user info for the currently logged in user: SELECT TOP 6 /lots-of-columns/ FROM UserData INNER MERGE JOIN Docs AS t1 ON ( 1 = 1 AND UserData.[tp_RowOrdinal] = 0 AND t1.SiteId = UserData.tp_SiteId AND t1.SiteId = @L2 AND t1.DirName = UserData.tp_DirName AND t1.LeafName = UserData.tp_LeafName AND t1.Level = UserData.tp_Level AND t1.IsCurrentVersion = 1 AND (1 = 1) ) LEFT OUTER JOIN AllUserData AS t2 ON ( UserData.[tp_Author]=t2.[tp_ID] AND UserData.[tp_RowOrdinal] = 0 AND t2.[tp_RowOrdinal] = 0 AND ( (t2.tp_IsCurrent = 1) ) AND t2.[tp_CalculatedVersion] = 0 AND t2.[tp_DeleteTransactionId] = 0x AND t2.tp_ListId = @L3 AND UserData.tp_ListId = @L4 AND t2.[tp_Author]=162 /* this is the currently logged in user */ ) WHERE (UserData.tp_IsCurrent = 1) AND UserData.tp_SiteId=@L2 AND (UserData.tp_DirName=@DN) AND UserData.tp_RowOrdinal=0 AND ( ( (UserData.[datetime1] IS NULL ) OR (UserData.[datetime1] = @L5DTP) ) AND t1.SiteId=@L2 AND (t1.DirName=@DN) ) ORDER BY UserData.[tp_Modified] Desc, UserData.[tp_ID] Asc Again, any ideas would be appreciated.

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS as there'd be many different kinds of products with unique attributes. For example, there'd be books which have ISBN, authors, title, pages, etc. as well as DVDs with play time, directors, artists, etc., and quite a few more types. In the end, I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be: an EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products, etc.); ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications; create a table for each product type: I personally don't like this approach as it complicates everything else. C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?
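
    As a minimal sketch of the DTO-plus-wrapper idea described above (all class and property names are invented for illustration, and no NoRM-specific API is used):

    // One document per product; type-specific details sit in nested objects,
    // and the unused ones are simply left null. All names here are hypothetical.
    public class ProductDto
    {
        public string Id { get; set; }
        public string Title { get; set; }         // common column, shown in list views
        public decimal Price { get; set; }        // common column
        public string ProductType { get; set; }   // "Book", "Dvd", ...
        public BookDetails Book { get; set; }
        public DvdDetails Dvd { get; set; }
    }

    public class BookDetails
    {
        public string Isbn { get; set; }
        public string[] Authors { get; set; }
        public int Pages { get; set; }
    }

    public class DvdDetails
    {
        public int PlayTimeMinutes { get; set; }
        public string[] Directors { get; set; }
        public string[] Artists { get; set; }
    }

    // Behaviour wrapper: the app works with this, so only book-relevant
    // properties are exposed while the full document stays in one collection.
    public class BookBehaviour
    {
        private readonly ProductDto _product;
        public BookBehaviour(ProductDto product) { _product = product; }

        public string Title { get { return _product.Title; } }
        public string Isbn { get { return _product.Book.Isbn; } }
        public string[] Authors { get { return _product.Book.Authors; } }
    }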

    Read the article

  • SQLce DAL Linq to Sql or EntityFramework

    - by bretddog
    Hi, I'm learning databases, using SQL CE, and need business-object-to-database mapping. Currently I'm trying to decide whether to use LINQ to SQL or Entity Framework. (I understand L2S a bit, but haven't familiarized myself with EF yet.) The program will only be developed and used by myself, so I have good control of the priorities: I don't need to consider a potential change of database type or data storage type, as I'm quite certain SQL CE will stay sufficient. I DO expect continued development and changes to the data schema while the program is in active use: changed business object properties (hence database columns), and possibly the overall table schema. So old data must be transported to the new schema. I also want to keep a decent degree of DAL/BLL layer separation; although this may not be necessary, it is good for me to learn these principles. My question is: with these priorities, would I have any benefit from choosing either LINQ to SQL or Entity Framework? (And please explain why.) Btw, the project involves a very simple table schema with only 4-5 tables and very simple relations. Thanks!
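
    One way to keep the DAL/BLL boundary loose enough that the LINQ to SQL vs. Entity Framework decision stays reversible is to hide whichever ORM is chosen behind a small repository interface. A rough sketch, with invented names, meant to illustrate the separation rather than either framework's API:

    using System.Collections.Generic;

    // The BLL programs against this interface only. One concrete class wraps a
    // LINQ to SQL DataContext; an Entity Framework ObjectContext could back a
    // second implementation later without touching business code.
    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
        void SaveChanges();
    }

    Schema upgrades (new columns, reshaped tables) then live in the concrete DAL and in migration scripts, not in the business layer.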

    Read the article

  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases. One is a read-only feeds database and the other is the T-SQL R/W production source. There are a few key columns in common between the two. What I am doing to set up is this: List<model.AutoWithImage> feedProductList = _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList(); List<model.vwCompanyDetails> companyDetailList = _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList(); foreach (model.vwCompanyDetails companyDetail in companyDetailList) { List<model.Product> productList = _dbRiv.Product.Include("Company").Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList(); } Now that I have a (source) list of products from the feed, and an existing (target) list of products from my prod DB, I'd like to do 3 things: find all SKUs in the feed that are not in the target; find all SKUs that are in both and are active feed products, and update the target; find all SKUs that are in both but are inactive, and soft delete them from the target. What are the best practices for doing this without running a double loop? Would prefer a LINQ to Objects solution as I already have my objects. EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 instances, just set a flag in the last instance. TIA
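
    A hedged sketch of a keyed, single-pass approach, continuing from the feedProductList and productList built above (the Sku and IsActive property names are assumptions, not taken from the question):

    // Index the target rows by SKU once, then classify the feed rows without a nested loop.
    // "Sku" and "IsActive" are assumed property names; this also assumes SKUs are unique per list.
    var targetBySku = productList.ToDictionary(p => p.Sku);

    // 1) Feed SKUs not in the target -> candidates for insert.
    var toInsert = feedProductList
        .Where(f => !targetBySku.ContainsKey(f.Sku))
        .ToList();

    // 2) In both and active in the feed -> pair them up so feed values can be copied across.
    var toUpdate = feedProductList
        .Where(f => f.IsActive && targetBySku.ContainsKey(f.Sku))
        .Select(f => new { Feed = f, Target = targetBySku[f.Sku] })
        .ToList();

    // 3) In both but inactive in the feed -> flag for soft delete.
    var toSoftDelete = feedProductList
        .Where(f => !f.IsActive && targetBySku.ContainsKey(f.Sku))
        .Select(f => targetBySku[f.Sku])
        .ToList();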

    Read the article

  • How to convert Unicode strings (\u00e2, etc) into NSString for display?

    - by karlbecker_com
    I am trying to support arbitrary Unicode from a variety of international users. They have already put a bunch of data into SQLite databases on their iPhones, and now I want to capture the data into a database, then send it back to their device. Right now I am using a PHP page that sends data to and from an Internet MySQL database. The data is saved in the MySQL database properly, but when it's sent back it comes out as escaped Unicode text, such as Frank\u00e2\u0080\u0099s iPad instead of just Frank's iPad, where the apostrophe should really be a curly apostrophe. The answer posted to another question indicates that there are no built-in Cocoa methods to convert the "\u00e2\u0080\u0099" portion of the Unicode string from the web server to an NSString object. Is this correct? That seems really surprising (and scarily disappointing), since Cocoa definitely allows input of many different Unicode characters, and I need to support any arbitrary language that I have never heard of, and all of the possible characters. I save them to and from the local SQLite database just fine now, but once I send the data to a web server, then perhaps pull down different data, I want to ensure the data pulled from the web server is correctly formatted.

    Read the article

  • Implementing a 1 to many relationship with SQLite

    - by Patrick
    I have the following schema implemented successfully in my application. The application connects desk unit channels to IO unit channels. The DeskUnits and IOUnits tables are basically just a list of desk/IO units and the number of channels on each. For example a desk could be 4 or 12 channel. CREATE TABLE DeskUnits (Name TEXT, NumChannels NUMERIC); CREATE TABLE IOUnits (Name TEXT, NumChannels NUMERIC); CREATE TABLE RoutingTable (DeskUnitName TEXT, DeskUnitChannel NUMERIC, IOUnitName TEXT, IOUnitChannel NUMERIC); The RoutingTable 'table' then connects each DeskUnit channel to an IOUnit channel. For example the DeskUnit called "Desk1" channel 1 may route to IOunit name "IOUnit1" channel 2, etc. So far I hope this is pretty straightforward and understandable. The problem is, however, this is a strictly 1 to 1 relationship. Any DeskUnit channel can route to only 1 IOUnit channel. Now, I need to implement a 1 to many relationship. Where any DeskUnit channel can connect to multiple IOUnit channels. I realise I may have to rearrange the tables completely, but I am not sure the best way to go about this. I am fairly new to SQLite and databases in general so any help would be appreciated. Thanks Patrick
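
    For what it's worth, the RoutingTable shape above already allows one-to-many routing as soon as more than one row is permitted per (DeskUnitName, DeskUnitChannel) pair. The sketch below assumes the System.Data.SQLite ADO.NET provider (an assumption, since the question doesn't say which binding is in use) and shows two IO channels routed from the same desk channel, with a uniqueness constraint that only blocks exact duplicate routes:

    using System.Data.SQLite;   // assumption: the System.Data.SQLite ADO.NET provider

    class RoutingSketch
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=routing.db"))
            {
                conn.Open();

                // Same four columns as the schema in the question, plus a constraint
                // that forbids duplicate routes but not multiple routes per desk channel.
                Exec(conn, @"CREATE TABLE IF NOT EXISTS RoutingTable (
                                 DeskUnitName    TEXT,
                                 DeskUnitChannel NUMERIC,
                                 IOUnitName      TEXT,
                                 IOUnitChannel   NUMERIC,
                                 UNIQUE (DeskUnitName, DeskUnitChannel, IOUnitName, IOUnitChannel));");

                // Desk1 channel 1 now routes to two different IO channels (1-to-many).
                Exec(conn, "INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit1', 2);");
                Exec(conn, "INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit2', 5);");
            }
        }

        static void Exec(SQLiteConnection conn, string sql)
        {
            using (var cmd = new SQLiteCommand(sql, conn))
                cmd.ExecuteNonQuery();
        }
    }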

    Read the article

  • Unattended Install of SQL Server 2005 Express with LOCAL Server InstanceName

    - by Jeff
    I'm creating an install package using Inno Setup and installing SQL Server 2005 Express. Here's the code that appears in my RUN section: Filename: "{app}\SQL Server 2005 Express\SQLEXPR.exe" ; Parameters: "-q /norebootchk /qn reboot=ReallySuppress addlocal=all INSTANCENAME=(LOCAL) SCCCHECKLEVEL=IncompatibleComponents:1;MDAC25Version:0 ERRORREPORTING=2 SQLAUTOSTART=1 SAPWD=passwordhere SECURITYMODE=SQL"; WorkingDir: {app}\SQL Server 2005 Express; StatusMsg: Installing Microsoft SQL Server 2005 Express... Please Wait...;Check:SQLVerifyInstall What I'm trying to accomplish is to have the SQL Server package install, but with the instance name referencing only the machine name and nothing more. What I'm getting instead is a named instance, such as MachineName\SQLEXPRESS, rather than a local (default) instance. I need a local instance instead of a named instance due to the way my code is written to install and talk with the databases in question. I would change it, trust me, were it not for the fact that this install package is a replacement for a previous package that used the MSDE installer. I have to be able to support both through code. Any suggestions are welcome, but a clear and concise method to get the installer to quietly install using only the machine name is my main goal. Thanks for the help and support!

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single user and multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single user setup and of course, the server in the multiuser setup. Past releases have been based on MSDE but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered: SQL Server/ SQL Server Compact Edition Firebird (same DB engine is available in two different server modes and an embedded dll) Each has its own merits but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose. So easy configuration and deployment hold more weight than performance.

    Read the article

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to the XML files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked and that there are already some good tutorials on the subject, but I just can't find them.
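
    A minimal sketch of one way this can work, assuming SQL Server Compact as the embedded store (a single .sdf file shipped with the application, no server or service to install) and LINQ to SQL attribute mapping; the Note class, column names and file name are invented for illustration:

    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Data.SqlServerCe;
    using System.Linq;

    // Invented example entity; in practice this mirrors whatever the XML held.
    [Table(Name = "Notes")]
    public class Note
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true)]
        public int Id { get; set; }

        [Column]
        public string Text { get; set; }
    }

    public static class NoteStore
    {
        // The entire database is the one .sdf file next to the executable.
        public static List<Note> LoadAll(string sdfPath)
        {
            using (var conn = new SqlCeConnection("Data Source=" + sdfPath))
            using (var db = new DataContext(conn))
            {
                return db.GetTable<Note>().OrderBy(n => n.Id).ToList();
            }
        }
    }

    SQLite with one of its .NET providers would fill the same single-file role if SQL Server Compact's LINQ support turns out to be limiting.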

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stackoverflow community, We are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. The situation goes something like this: There are certain tables on a MySQL database that need to be displayed as html. Because the site is designed using the Spanish language, the database contains some characters like "á" or "ñ". Our internal policy is to set all the encodings as UTF-8, including all the databases and the tables. The problem is, that when we retrieve the content from the DB, some characters are displayed as question marks. We are out of ideas. These are all the things that we have already tried and double-checked: 1. The SQL file from which we load all the data is properly UTF-8 encoded. 2. The SQL is loaded through phpmyadmin (which is configured as UTF-8), and the resulting tables are displayed properly. 3. The netbeans environment used for coding is also set as UTF-8. The weird thing is that all the content that is hard-coded either as php or html is displayed properly. Only the values that are extracted from the database have issues. Any ideas? Thank you very much.

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: security, concurrent access, performance with large amounts of data, amount of time to do such a massive rewrite/switch, lack of transactions, PITA to map relational data to flat files. I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

    Read the article

  • Technology Plan - Which tools should I use?

    - by Armadillo
    Hi, soon I'll start my own software company. My primary product/solution will be billing/invoice software. In the near future, I intend to expand this first module into an ERP. My app should be able to run as a stand-alone application and as a web-based application (so there will probably be two GUIs for the same database). My problem now is to choose the right tools; I'm talking about what programming language(s) I should use, what kind of database I should choose, and stuff like that. I'm primarily a VB6 programmer, so I'll probably choose the .NET Framework (VB/C#). But I'm seriously thinking about Java. Java has 2 "pros" that I really like: write once, run anywhere, and it is free (I think...). I've been thinking about RIAs too, but I just don't have any substantial feedback about them... Then, I'll need a report tool. Crystal Reports? HTML-based reports? Other? Databases: I'm not sure if I should use SQL Server Express or PostgreSQL (or another). I'd be happy to hear any comments and advice. Thanks

    Read the article

  • How can I synchronise two datatables and update the target in the database?

    - by Craig
    I am trying to synchronise two tables between two databases. I thought that the best way to do this would be to use the DataTable.Merge method. This seems to pick up the changes, but nothing ever gets committed to the database. So far I have: string tableName = "[Table_1]"; string sql = "select * from " + tableName; using (SqlConnection sourceConn = new SqlConnection(ConfigurationManager.ConnectionStrings["source"].ConnectionString)) { SqlDataAdapter sourceAdapter = new SqlDataAdapter(); sourceAdapter.SelectCommand = new SqlCommand(sql, sourceConn); sourceConn.Open(); DataSet sourceDs = new DataSet(); sourceAdapter.Fill(sourceDs); using (SqlConnection targetConn = new SqlConnection(ConfigurationManager.ConnectionStrings["target"].ConnectionString)) { SqlDataAdapter targetAdapter = new SqlDataAdapter(); targetAdapter.SelectCommand = new SqlCommand(sql, targetConn); SqlCommandBuilder builder = new SqlCommandBuilder(targetAdapter); targetAdapter.InsertCommand = builder.GetInsertCommand(); targetAdapter.UpdateCommand = builder.GetUpdateCommand(); targetAdapter.DeleteCommand = builder.GetDeleteCommand(); targetConn.Open(); DataSet targetDs = new DataSet(); targetAdapter.Fill(targetDs); targetDs.Tables[0].TableName = tableName; sourceDs.Tables[0].TableName = tableName; targetDs.Tables[0].Merge(sourceDs.Tables[0]); targetAdapter.Update(targetDs.Tables[0]); } } At the present time, there is one row in the source that is not in the target. This row is never transferred. I have also tried it with an empty target, and nothing is transferred.
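
    One likely reason nothing is committed: rows that come out of Fill are in the Unchanged state, and Merge preserves the incoming row state, so after the merge the adapter sees nothing to insert or update. A hedged sketch that copies rows explicitly instead, assuming an integer key column named Id (adjust to the real primary key):

    // Assumes a key column called "Id". Rows added via Rows.Add get the Added
    // state, and rows edited in place get Modified, which is what Update acts on.
    DataTable source = sourceDs.Tables[0];
    DataTable target = targetDs.Tables[0];
    target.PrimaryKey = new[] { target.Columns["Id"] };

    foreach (DataRow srcRow in source.Rows)
    {
        DataRow existing = target.Rows.Find(srcRow["Id"]);
        if (existing == null)
        {
            DataRow newRow = target.NewRow();
            newRow.ItemArray = srcRow.ItemArray;
            target.Rows.Add(newRow);               // RowState = Added -> INSERT
        }
        else
        {
            // Marks the row Modified even when the values are identical; compare
            // the values first if that matters for the table sizes involved.
            existing.ItemArray = srcRow.ItemArray; // RowState = Modified -> UPDATE
        }
    }

    targetAdapter.Update(target);                  // now issues the INSERTs/UPDATEs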

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.
    Deterministic (things that aren't going to change during gameplay): Attributes (Strength, Agility, etc.) and their descriptions; Skills and their descriptions and requirements; Races, Factions, Equipment, etc.; base Attribute/Skill/Equipment loadouts for monsters.
    Nondeterministic (things that will change a lot during gameplay): Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.; player inventory, cash, experience, level; player quest states; player FactionRelationships ...and so on.
    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

    Det model/db      Nondet model/db
     ____________      ________________________
    |Attributes  |    |PlayerAttributeModifiers|
    |------------|    |------------------------|
    |Id          |    |Id                      |
    |Name        |    |AttributeId             |
    |Description |    |SourceId                |
     ------------     |Value                   |
                       ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions, I'm just looking for a sanity check before I forge ahead any further.
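
    For the cross-model question: two EDMX models cannot share an association, so a common workaround is to keep the foreign key as a plain scalar (AttributeId) on the nondeterministic side and resolve it in code against the deterministic context. A rough sketch, where the context and entity type names are stand-ins for whatever the designer generates:

    using System.Collections.Generic;
    using System.Linq;

    // DeterministicEntities and PlayerAttributeModifier stand in for the
    // generated context/entity types; only AttributeId links the two sides.
    public class AttributeCatalog
    {
        private readonly Dictionary<int, string> _namesById;

        public AttributeCatalog(DeterministicEntities detContext)
        {
            // Load the constant lookup data once; it never changes during play.
            _namesById = detContext.Attributes.ToDictionary(a => a.Id, a => a.Name);
        }

        // e.g. "+10 Strength" for a modifier row from the nondeterministic model.
        public string Describe(PlayerAttributeModifier m)
        {
            return string.Format("{0:+#;-#;0} {1}", m.Value, _namesById[m.AttributeId]);
        }
    }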

    Read the article

  • Set "Start With" value for Oracle sequence dynamically

    - by Allan
    I'm trying to create a release script that can be deployed on multiple databases, but where the data can be merged back together at a later date. The obvious way to handle this is to set the sequence numbers for production data sufficiently high in subsequent deployments to prevent collisions. The problem is in coming up with a release script that will accept the environment number and set the "Start With" value of the sequences appropriately. Ideally, I'd like to use something like this: ACCEPT EnvironNum PROMPT 'Enter the Environment Number: ' --[more scripting] CREATE SEQUENCE seq1 START WITH &EnvironNum*100000; --[more scripting] This doesn't work because you can't evaluate a numeric expression in DDL. Another option is to create the sequences using dynamic SQL via PL/SQL. ACCEPT EnvironNum PROMPT 'Enter the Environment Number: ' --[more scripting] EXEC execute immediate 'CREATE SEQUENCE seq1 START WITH ' || &EnvironNum*100000; --[more scripting] However, I'd prefer to avoid this solution as I generally try to avoid issuing DDL in PL/SQL. Finally, the third option I've come up with is simply to accept the Start With value as a substitution variable, instead of the environment number. Does anyone have a better thought on how to go about this?

    Read the article

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have development, staging and production environments. When I finish in development I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and it makes database modifications? How would I push those changes up to the production server, since the production server is always changing? Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base url with the development url and updates my development db accordingly. It also leaves the production sql dump behind to be added to my git repo. I'm not really sure if it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now when it comes time to move to production, I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently, and it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?

    Read the article

  • PHP Transferring Photos From One Oracle Database Table to Another

    - by Jonathan Swift
    I am attempting to transfer a set of photos (blobs) from one table to another across databases. I'm nearly there, except for binding the photo parameter. I have the following code: $conn_db1 = oci_pconnect('username', 'password', 'db1'); $conn_db2 = oci_pconnect('username', 'password', 'db2'); $parse_db1_select = oci_parse($conn_db1, "SELECT REF PID, BINARY_OBJECT PHOTOGRAPH FROM BLOBS"); $parse_db2_insert = oci_parse($conn_db2, "INSERT INTO PHOTOGRAPHS (PID, PHOTOGRAPH) VALUES (:pid, :photo)"); oci_execute($parse_db1_select); while ($row = oci_fetch_assoc($parse_db1_select)) { $pid = $row['PID']; $photo = $row['PHOTOGRAPH']; oci_bind_by_name($parse_db2_insert, ':pid', $pid, -1, OCI_B_INT); // This line causes an error oci_bind_by_name($parse_db_insert, ':photo', $photo, -1, OCI_B_BLOB); oci_execute($parse_db2_insert); } oci_close($db1); oci_close($db2); But I get the following error, on the error line commented above: Warning: oci_execute() [function.oci-execute]: ORA-03113: end-of-file on communication channel Process ID: 0 Session ID: 790 Serial number: 118 Does anyone know the right way to do this?

    Read the article

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server and it works fine when the MySQL service isn't running, but when MySQL is on, Mongrel gets stuck and stops serving pages. So far, I have tested the localhost:3000 URL. When MySQL is off, it serves the page. When I click "about application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answer and Mongrel does not serve the web page. It gets stuck with no answer to the browser. Then I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with the command line rake db:create. I have tested with the MySQL root user and a blank password and also tried with a superuser I have created. No success. Here is the server log: ======================== Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010 Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)): Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms) Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms) Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms) ================= I am running on a Windows 7 environment with the firewall down.

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces and files that people already have checked out after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web, and all I could find is upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process. The upgrade was successful and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match the files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher ups in my company were told by good friends that flat files are the way to go, and we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with we have 10 billion records in quite a few of them with upwards of 100k new records a day and who knows how many updates... Me and a couple others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought that making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network doing large inserts, searches, updates etc along with things like network disconnects randomly. This would show them how bad flat files can be especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My sort list so far: Security Concurrent access Performance with large amounts of data Amount of time to do such a massive rewrite/switch Lack of transactions PITA to map relational data to flat files NTFS doesn't support tons of files in a directory well I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
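
    As a starting point for the demo console app, a rough timing-harness sketch; the connection string, file path, table and column names are placeholders, and the point is simply an indexed keyed lookup versus a network file scan, repeated enough times to make the gap obvious:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;

    // Rough benchmark sketch: look up one record by key N times each way.
    // Connection string, file path and column names are invented placeholders.
    class FlatFileVsSql
    {
        const string ConnStr = @"Server=.\SQLEXPRESS;Database=Demo;Integrated Security=true";
        const string FlatFile = @"\\fileserver\share\records.csv";   // "Id,Name,..."

        static void Main()
        {
            TimeIt("Flat file scan", () => FlatFileLookup("4573286"));
            TimeIt("Indexed SQL lookup", () => SqlLookup("4573286"));
        }

        static void TimeIt(string label, Action action)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 100; i++) action();
            sw.Stop();
            Console.WriteLine("{0}: {1} ms for 100 lookups", label, sw.ElapsedMilliseconds);
        }

        static string FlatFileLookup(string id)
        {
            // Every lookup re-reads the file over the network until the key is found.
            return File.ReadLines(FlatFile)
                       .FirstOrDefault(line => line.StartsWith(id + ","));
        }

        static string SqlLookup(string id)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand("SELECT Name FROM Records WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }
    }

    Adding a second process that writes to the same flat file while the lookups run would make the concurrency and transaction points just as visible as the timing numbers.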

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple then add more once I get it going): easy API implementation for client-side apps (maybe RESTful, but extras like mkdir and cp?); a centralized database server for the USERDB (maybe PostgreSQL?); logging of all connections, bandwidth used, well pretty much everything, to a centralized server (maybe PostgreSQL again?); easy server-side configuration (config file(s) stored on the servers); a web-based control panel for admin(s) and user(s) to show logs (could work just running queries from the databases); fast; high uptime; low memory usage; some sort of load distribution/load balancer (maybe DNS-based, or Pound or Perlbal or something else?); maybe a cache of some sort (memcached or Perlbal or something else?). Thanks in advance

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example select all entries between times T1 and T2 where the noiselevel is smaller than X. On the other hand, I would like to use a NoSQL/Key-Value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder.. Is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, Mongo DB? I've looked in the Documentation, but the only thing I've found was the following snippet in their docu: Note that any of the operators on this page can be combined in the same query document. For example, to find all document where j is not equal to 3 and k is greater than 10, you'd query like so: db.things.find({j: {$ne: 3}, k: {$gt: 10} }); So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
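
    For MongoDB specifically, several range conditions on different fields can sit in one query document, just like the snippet quoted above. A hedged sketch using the current official MongoDB .NET driver (a newer client than existed when this was asked); the database, collection and field names mirror the example in the question:

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    class RangeQuerySketch
    {
        static void Main()
        {
            var client = new MongoClient("mongodb://localhost");
            var entries = client.GetDatabase("sensors").GetCollection<BsonDocument>("entries");

            // All entries between T1 and T2 where the noise level is below X.
            var t1 = new DateTime(2010, 1, 1);
            var t2 = new DateTime(2010, 1, 2);
            var maxNoise = 40.0;

            var f = Builders<BsonDocument>.Filter;
            var filter = f.Gte("time", t1) & f.Lte("time", t2) & f.Lt("noiselevel", maxNoise);

            foreach (var doc in entries.Find(filter).ToList())
                Console.WriteLine(doc);
        }
    }

    Whether a single index can serve both ranges efficiently is a separate question from whether the query is expressible at all.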

    Read the article

  • "database already closed" is shown using a custom cursor adapter

    - by kiduxa
    I'm using a cursor with a custom adapter that extends SimpleCursorAdapter: public class ListWordAdapter extends SimpleCursorAdapter { private LayoutInflater inflater; private Cursor mCursor; private int mLayout; private String[] from; private int[] to; public ListWordAdapter(Context context, int layout, Cursor c, String[] from, int[] to, int flags) { super(context, layout, c, from, to, flags); this.mCursor = c; this.inflater = LayoutInflater.from(context); this.mLayout = layout; this.from = from; this.to = to; } private static class ViewHolder { //public ImageView img; public TextView name; public TextView type; public TextView translate; } @Override public View getView(int position, View convertView, ViewGroup parent) { if (mCursor.moveToPosition(position)) { ViewHolder holder; if (convertView == null) { convertView = inflater.inflate(mLayout, null); holder = new ViewHolder(); // holder.img = (ImageView) convertView.findViewById(R.id.img_row); holder.name = (TextView) convertView.findViewById(to[0]); holder.type = (TextView) convertView.findViewById(to[1]); holder.translate = (TextView) convertView.findViewById(to[2]); convertView.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } holder.name.setText(mCursor.getString(mCursor.getColumnIndex(from[0]))); holder.type.setText(mCursor.getString(mCursor.getColumnIndex(from[1]))); holder.translate.setText(mCursor.getString(mCursor.getColumnIndex(from[2]))); // holder.img.setImageResource(img_resource); } return convertView; } } And in the main activity I call it as: adapter = new ListWordAdapter(getSherlockActivity(), R.layout.row_list_words, mCursorWords, from, to, 0); When a modification in the list is made, I call this method: public void onWordSaved() { WordDAO wordsDao = new WordSqliteDAO(); Cursor mCursorWords = wordsDao.list(getSherlockActivity()); adapter.changeCursor(mCursorWords); } The thing here is that this produces me this exception: 10-29 11:14:33.810: E/AndroidRuntime(18659): java.lang.IllegalStateException: database /data/data/com.example.palabrasdeldia/databases/palabrasDelDia (conn# 0) already closed Complete stack trace: 10-29 11:14:33.810: E/AndroidRuntime(18659): FATAL EXCEPTION: main 10-29 11:14:33.810: E/AndroidRuntime(18659): java.lang.IllegalStateException: database /data/data/com.example.palabrasdeldia/databases/palabrasDelDia (conn# 0) already closed 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteDatabase.verifyDbIsOpen(SQLiteDatabase.java:2123) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteDatabase.lock(SQLiteDatabase.java:398) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteDatabase.lock(SQLiteDatabase.java:390) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteQuery.fillWindow(SQLiteQuery.java:74) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:311) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.sqlite.SQLiteCursor.onMove(SQLiteCursor.java:283) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.database.AbstractCursor.moveToPosition(AbstractCursor.java:173) 10-29 11:14:33.810: E/AndroidRuntime(18659): at com.example.palabrasdeldia.adapters.ListWordAdapter.getView(ListWordAdapter.java:42) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.AbsListView.obtainView(AbsListView.java:2128) 10-29 11:14:33.810: E/AndroidRuntime(18659): at 
android.widget.ListView.makeAndAddView(ListView.java:1817) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.ListView.fillSpecific(ListView.java:1361) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.ListView.layoutChildren(ListView.java:1646) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.AbsListView.onLayout(AbsListView.java:1979) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1527) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.onLayout(LinearLayout.java:1316) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.FrameLayout.onLayout(FrameLayout.java:400) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.support.v4.view.ViewPager.onLayout(ViewPager.java:1589) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.FrameLayout.onLayout(FrameLayout.java:400) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1403) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.onLayout(LinearLayout.java:1314) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.FrameLayout.onLayout(FrameLayout.java:400) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1542) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1403) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.LinearLayout.onLayout(LinearLayout.java:1314) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.FrameLayout.onLayout(FrameLayout.java:400) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: 
E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.widget.FrameLayout.onLayout(FrameLayout.java:400) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.View.layout(View.java:9593) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewGroup.layout(ViewGroup.java:3877) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewRoot.performTraversals(ViewRoot.java:1253) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.view.ViewRoot.handleMessage(ViewRoot.java:2017) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.os.Handler.dispatchMessage(Handler.java:99) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.os.Looper.loop(Looper.java:132) 10-29 11:14:33.810: E/AndroidRuntime(18659): at android.app.ActivityThread.main(ActivityThread.java:4028) 10-29 11:14:33.810: E/AndroidRuntime(18659): at java.lang.reflect.Method.invokeNative(Native Method) 10-29 11:14:33.810: E/AndroidRuntime(18659): at java.lang.reflect.Method.invoke(Method.java:491) 10-29 11:14:33.810: E/AndroidRuntime(18659): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:844) 10-29 11:14:33.810: E/AndroidRuntime(18659): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:602) 10-29 11:14:33.810: E/AndroidRuntime(18659): at dalvik.system.NativeStart.main(Native Method) If I use SimpleCursorAdapter directly instead of ListWordAdapter, it works fine. What's wrong with my custom adapter implementation? The line in bold in the stack trace corresponds with: if (mCursor.moveToPosition(position)) inside getView method. EDIT: I have created a custom class to manage DB operations as open and close: public class ConexionBD { private Context context; private SQLiteDatabase database; private DataBaseHelper dbHelper; public ConexionBD(Context context) { this.context = context; } public ConexionBD open() throws SQLException { this.dbHelper = DataBaseHelper.getInstance(context); this.database = dbHelper.getWritableDatabase(); database.execSQL("PRAGMA foreign_keys=ON"); return this; } public void close() { if (database.isOpen() && database != null) { dbHelper.close(); } } /*Getters y setters*/ public SQLiteDatabase getDatabase() { return database; } public void setDatabase(SQLiteDatabase database) { this.database = database; } } And this is my DataBaseHelper: public class DataBaseHelper extends SQLiteOpenHelper { private static final String DATABASE_NAME = "myDb"; private static final int DATABASE_VERSION = 1; private static DataBaseHelper sInstance = null; public static DataBaseHelper getInstance(Context context) { // Use the application context, which will ensure that you // don't accidentally leak an Activity's context. // See this article for more information: http://bit.ly/6LRzfx if (sInstance == null) { sInstance = new DataBaseHelper(context.getApplicationContext()); } return sInstance; } @Override public void onCreate(SQLiteDatabase database) { ... } .... And this is an example of how I manage a query: public Cursor list(Context context) { ConexionBD conexion = new ConexionBD(context); Cursor mCursor = null; try{ conexion.open(); mCursor = conexion.getDatabase().query(DataBaseHelper.TABLE_WORD , null , null, null, null, null, Word.NAME); if (mCursor != null) { mCursor.moveToFirst(); } }finally{ conexion.close(); } return mCursor; } For every connection to the DB I open it and close it.

    Read the article

  • Database schemas WAY out of sync - need to get up to date without losing data

    - by Zind
    The problem: we have one application with a portion that is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world, the schemas of the two databases would be synced up, but such is not the case. Some migrations have been run on the smaller database, most haven't; and furthermore, there is nothing such as a revision number to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we've come up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem: what we would like to do is create an empty database from the schema of the large, fully-migrated database, and then move all of the data from the smaller, non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this specific problem that no migrations have ever removed anything, only added. Otherwise, if there are other known solutions, I'd like to hear them as well.
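
    A rough sketch of the data-move step of that plan, assuming SQL Server on both sides: script the fully-migrated schema into an empty database first (SSMS or a schema-compare tool can generate that), then bulk-copy every table the small database has into its same-named counterpart. Columns added by later migrations are simply left at their defaults, and foreign-key ordering or constraint disabling is glossed over here; all connection strings are placeholders.

    using System.Data;
    using System.Data.SqlClient;

    static class SchemaCatchUpCopy
    {
        // Copies every user table from the behind-on-migrations database into the
        // freshly created, fully-migrated one. Only columns present in the source
        // are mapped; anything the migrations added later keeps its default/NULL.
        public static void CopyAllTables(string sourceConnStr, string targetConnStr)
        {
            using (var src = new SqlConnection(sourceConnStr))
            using (var dst = new SqlConnection(targetConnStr))
            {
                src.Open();
                dst.Open();

                var tables = new DataTable();
                new SqlDataAdapter(
                    "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES " +
                    "WHERE TABLE_TYPE = 'BASE TABLE'", src).Fill(tables);

                foreach (DataRow t in tables.Rows)
                {
                    string name = string.Format("[{0}].[{1}]", t["TABLE_SCHEMA"], t["TABLE_NAME"]);

                    using (var cmd = new SqlCommand("SELECT * FROM " + name, src))
                    using (var reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(dst, SqlBulkCopyOptions.KeepIdentity, null))
                    {
                        bulk.DestinationTableName = name;
                        for (int i = 0; i < reader.FieldCount; i++)
                            bulk.ColumnMappings.Add(reader.GetName(i), reader.GetName(i));
                        bulk.WriteToServer(reader);
                    }
                }
            }
        }
    }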

    Read the article

  • Is dependency injection only for service type objects and singletons? (and NOT for gui?)

    - by sensui
    I'm currently experimenting with Google's Guice inversion-of-control container. I previously had singletons for just about any service (database, Active Directory) my application used. Now I have refactored the code: all the dependencies are given as parameters to constructors. So far, so good. Now the hardest part is the graphical user interface. I face this problem: I have a table (JTable) of products wrapped in a ProductFrame. I give the dependencies as parameters (EditProductDialog). @Inject public ProductFrame(EditProductDialog editProductDialog) { // ... } // ... @Inject public EditProductDialog(DBProductController productController, Product product) { // ... } The problem is that Guice can't know what Product I have selected in the table, so it can't know what to inject into the EditProductDialog. Dependency injection is pretty viral (if I modify one class to use dependency injection I also need to modify all the other classes it interacts with), so my question is: should I directly instantiate EditProductDialog? But then I would have to pass the DBProductController to the EditProductDialog manually, and I will also need to pass it to the ProductFrame, and all this boils down to not using dependency injection at all. Or is my design flawed, and because of that I can't really adapt the project to dependency injection? Give me some examples of how you have used dependency injection with a graphical user interface. All the examples found on the Internet are really simple ones where you use some services (mostly databases) with dependency injection.

    Read the article
