Search Results

Search found 3710 results on 149 pages for 'all databases'.

Page 104 of 149

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost: we want to move away from the central development database and bring the database into source control. One of the problems we have is a big table of static data (containing translated language strings) with close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for populating this table whenever we need to?
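    One hedged alternative, while the strings still live in a table, is to keep the static rows as an idempotent seed script under source control and re-run it on every deployment, so the table's contents are versioned like code. A minimal sketch, assuming SQL Server 2008+ and an illustrative LanguageString table (this is not DBGhost's own mechanism):

      MERGE dbo.LanguageString AS target
      USING (VALUES
          (1, 'en', 'WELCOME', N'Welcome'),
          (2, 'pt', 'WELCOME', N'Bem-vindo')
      ) AS source (Id, LanguageCode, ResourceKey, Translation)
          ON target.Id = source.Id
      WHEN MATCHED THEN
          UPDATE SET LanguageCode = source.LanguageCode,
                     ResourceKey  = source.ResourceKey,
                     Translation  = source.Translation
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (Id, LanguageCode, ResourceKey, Translation)
          VALUES (source.Id, source.LanguageCode, source.ResourceKey, source.Translation)
      WHEN NOT MATCHED BY SOURCE THEN
          DELETE;   -- rows missing from the script are removed, keeping the table in sync

    For a 200K-row table the VALUES list would be generated rather than hand-written, and run time is exactly the trade-off being asked about.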

    Read the article

  • HTML5 Web Database Security

    - by Daniel Dimovski
    Should the HTML5 database be used to store any form of private information? Say we have the following scenario: you're browsing a web-mail client that uses the web database to store mail drafts; after you've written some information, you close the web browser. What's to stop me from getting access to this information? If the webpage tries to clean out old information when opened, a user script could easily prevent the website from fully loading and then search through the database. Furthermore, the names of databases and tables are easily available through the web-mail client's source. W3C Draft

    Read the article

  • Mongodb using db.help() on a particular db command

    - by user1325696
    When I type db.help() it returns a list of DB methods:
      db.addUser(username, password[, readOnly=false])
      db.auth(username, password)
      ...
      db.printShardingStatus()
      ...
      db.fsyncLock()    flush data to disk and lock server for backups
      db.fsyncUnlock()  unlocks server following a db.fsyncLock()
    I'd like to find out how to get more detailed help for a particular command. The problem was with printShardingStatus, as it returned "too many chunks to print, use verbose if you want to force print":
      mongos> db.printShardingStatus()
      --- Sharding Status ---
      sharding version: { "_id" : 1, "version" : 3 }
      shards:
        { "_id" : "shard0000", "host" : "localhost:10001" }
        { "_id" : "shard0001", "host" : "localhost:10002" }
      databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "dbTest", "partitioned" : true, "primary" : "shard0000" }
          dbTest.things chunks:
            shard0001  12
            shard0000  19
          too many chunks to print, use verbose if you want to force print
    I found that for this particular command I can pass a boolean parameter, db.printShardingStatus(true), which wasn't shown by db.help().

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly to a remote location over SFTP. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like paramiko (but one that performs close to the native sftp client)?

    Read the article

  • Does a PHP library exist to work with PRC/.mobi files?

    - by Chris Clarke
    I'm writing a WordPress plugin to create an eBook from a selected category in most major eBook formats. I would like to support MobiPocket, since that's the format used by the Kindle, but I'm not sure how to go about it. From what I've read, .mobi files are actually Palm Resource Databases (PRC), but I haven't been able to find a PHP class to work with these. I thought about using exec along with KindleGen, but that would be undesirable as it would complicate the initial setup. I've also thought about hosting a web service somewhere and using XML-RPC to accomplish this, but that also complicates things. My question is: is there a PHP class/library (PHP-only preferred) that can work with PRC or, even better, one that specialises in creating MobiPocket ebooks? (It needs to be open source since I'm releasing under the GPL.) I've tried searching but haven't been able to find anything.

    Read the article

  • C# hierarchy collection library? - anyone know of one (e.g. GetDirectChildren, GetAllChildren, GetPa

    - by Greg
    Hi, does anyone know of a solid C# library / approach to manage a hierarchy/web type collection? This would be a library that basically consists of the concept of nodes and relationships, for example to model web pages/files linked under a URL, or to model IT infrastructure. It would have key methods such as: Node.GetDirectParents(), Node.GetRootParents(), Node.GetDirectChildren(), Node.GetAllChildren(). Its smarts would include the ability to "walk the tree" of nodes based on the relationships when someone asks for "give me all the children under this node", for example. It would ideally include a persistence layer to save/retrieve such data to/from a database (e.g. with a Nodes and a Relationships table). Thanks
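    As a hedged illustration of the persistence layer described above (not any specific library's schema), a Nodes table plus a Relationships table maps GetAllChildren onto a recursive query fairly directly. A sketch in SQL Server syntax, assuming the graph is acyclic; all names are illustrative:

      CREATE TABLE Node         (NodeId INT PRIMARY KEY, Name NVARCHAR(200));
      CREATE TABLE Relationship (ParentId INT NOT NULL REFERENCES Node(NodeId),
                                 ChildId  INT NOT NULL REFERENCES Node(NodeId),
                                 PRIMARY KEY (ParentId, ChildId));

      DECLARE @root INT = 1;                           -- the node whose descendants we want

      WITH AllChildren AS (
          SELECT r.ChildId                             -- direct children of @root
          FROM   Relationship r
          WHERE  r.ParentId = @root
          UNION ALL
          SELECT r.ChildId                             -- walk one level deeper each pass
          FROM   Relationship r
          JOIN   AllChildren a ON r.ParentId = a.ChildId
      )
      SELECT DISTINCT n.NodeId, n.Name
      FROM   AllChildren a
      JOIN   Node n ON n.NodeId = a.ChildId;

    GetDirectParents/GetRootParents are the same pattern run against ParentId instead of ChildId.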

    Read the article

  • SQL Server 2005 remote connection problem, cannot solve it, help please, thank you

    - by user287745
    Note: if this question does not fit this site, please do not just close it but redirect it to the fitting sister site, thank you. The steps taken and the error are listed below; please help, I am stuck here!
    - Installed SQL Server 2005 Express on both computers.
    - Installed SQL Server Management Studio Express on both computers.
    - Ran each Management Studio and connected to the local SQLEXPRESS instance using Windows authentication (one computer's connection, for example, is "A-63A9D4D7E7834\SQLEXPRESS").
    - Created a database named "test1" under Databases, created a few tables with data, saved and exited.
    - Did everything this article says: "How to configure SQL Server 2005 to allow remote connections" (http://support.microsoft.com/kb/914277/en-us), and I have also just disabled the firewalls completely (turned off).
    Then, on computer A-63A9D4D7E7834, I started SQL Server Management Studio Express with server name "ALL-E425BE6C41D\SQLEXPRESS" and authentication "Windows Authentication", clicked Connect, and I get the following error:
      Cannot connect to ALL-E425BE6C41D\SQLEXPRESS.
      ADDITIONAL INFORMATION: Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476
      BUTTONS: OK HELP

    Read the article

  • SQL Insert multilingual characters

    - by Usman Akram
    I am trying to create a table in my MS SQL database for languages. I want to store the English name of each language and its local name, i.e.:
      Language, Language (local)
      English,  English
      German,   Deutsch
      Italian,  Italiano
      Japanese, ???
      ...
    I have 279 languages that I want to import, but when I import, it shows '?????' for some, like Japanese, Russian and Arabic. The database collation is Latin1_General_CI_AS. I would also like advice on multilingual websites: if I have a database of product descriptions and I want translations in multiple languages, should I go for separate databases, or can I have the translations in one database? (I prefer not to duplicate data!) Anything else to make sure users are able to write comments in different languages (character encoding on the web?) that can be stored in the database?
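    For reference, a minimal sketch of the usual fix on SQL Server: store the local names in NVARCHAR columns and prefix string literals with N, so the values stay Unicode regardless of the Latin1_General_CI_AS collation (table and column names are illustrative; the multi-row VALUES form needs SQL Server 2008+):

      CREATE TABLE dbo.Language (
          LanguageId  INT IDENTITY(1,1) PRIMARY KEY,
          EnglishName NVARCHAR(100) NOT NULL,
          LocalName   NVARCHAR(100) NOT NULL    -- NVARCHAR, not VARCHAR
      );

      INSERT INTO dbo.Language (EnglishName, LocalName)
      VALUES (N'German',   N'Deutsch'),
             (N'Japanese', N'日本語'),
             (N'Russian',  N'Русский');

    Without the N prefix, the literal is first converted through the database's code page, which is where the '?????' comes from.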

    Read the article

  • InnoDB or MyISAM - Why not both?

    - by Skoder
    Hey. I'm new to databases, and I've read various threads about which is better between InnoDB and MyISAM. It seems that the debates are about using one or the other. Is it not possible to use both, depending on the table? What would be the disadvantages of doing this? As far as I can tell, the engine can be set during the CREATE TABLE command. Therefore, certain tables which are often read can be set to MyISAM, while tables that need transaction support can use InnoDB. I'm sure there must be a problem, otherwise this would be the ultimate answer :).
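    A small sketch of what that per-table choice looks like (table names are illustrative):

      CREATE TABLE article (
          article_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          body       TEXT
      ) ENGINE = MyISAM;      -- read-heavy, no transactions needed

      CREATE TABLE payment (
          payment_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          amount     DECIMAL(10,2) NOT NULL
      ) ENGINE = InnoDB;      -- needs transactions and row-level locking

      ALTER TABLE article ENGINE = InnoDB;   -- an existing table can also be switched later

    The usual caveats with mixing engines: a transaction that also touches a MyISAM table is not atomic for that table, InnoDB foreign keys cannot reference MyISAM tables, and backup and replication behaviour differs between the two.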

    Read the article

  • How to convert a MSSQL database (including procedures, functions and triggers) to a Firebird databas

    - by user193655
    I am considering migrating to Firebird. To have a "quick start" approach, I downloaded the trial of a conversion tool (DBConvert) and tried it. I just picked a tool at random; this tool doesn't convert procedures, functions and triggers (I don't think it is a limit of the trial, since there is no explicit reference to stored procedures, functions and triggers in the link above). Anyway, when trying that tool I got the message: "The DB cannot be converted successfully because some FK names are too long." This is because some of my tables have FKs whose names are 32 characters long. Is this a real Firebird limit, or is it possible to overcome it somehow (of course, renaming the FKs is an extreme option because it is extra work)? Anyway, how can a MS SQL DB be fully converted to Firebird? Is there a valid tool? Has anyone succeeded in converting a non-trivial database?

    Read the article

  • Retrieving datatypes from underlying database

    - by H4mm3rHead
    Hi, I'm making an application that displays information about an underlying database. The database can be anything, but is typically Oracle, MSSQL or MySQL. I am trying to extract the data types but cannot seem to get this right. I have a DbConnection because I don't know in advance whether I need an OleDbConnection or an OdbcConnection. On this connection I run a GetSchema("Columns", "mytablename") query and get the result back. It seems, though, that there are some inconsistencies in my data types, or the query returns different data types for the different databases. For instance, in my MSSQL database I query and get an integer back (which seems to be the OleDbType), which I map to a data type. My varchars now come back as type char, with no length, and this confuses me a bit. I guess my main question is: is there any way of making a uniform way of extracting data types across providers and having an "accurate" representation of the data type?
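    There is no single answer that covers all three vendors, but as a partial, hedged sketch: SQL Server and MySQL both expose textual type names through the standard INFORMATION_SCHEMA.COLUMNS view (Oracle uses its own ALL_TAB_COLUMNS instead), which sidesteps the numeric OleDb/Odbc type codes entirely:

      SELECT COLUMN_NAME,
             DATA_TYPE,                  -- e.g. 'varchar', 'int', 'datetime'
             CHARACTER_MAXIMUM_LENGTH,   -- length for string types, NULL otherwise
             NUMERIC_PRECISION,
             NUMERIC_SCALE
      FROM   INFORMATION_SCHEMA.COLUMNS
      WHERE  TABLE_NAME = 'mytablename';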

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup, but I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the DB being in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the DB is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the DB instead of dropping the DB itself? This is what I have so far:
      USE master
      -- Create a database
      IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
      BEGIN
          ALTER DATABASE mydb SET SINGLE_USER  -- or RESTRICTED_USER  -- WITH ROLLBACK IMMEDIATE
          DROP DATABASE mydb
      END
      CREATE DATABASE mydb;

    Read the article

  • Does a TransactionScope that exists only to select data require a call to Complete()

    - by fordareh
    In order to select data from part of an application that isn't affected by dirty data, I create a TransactionScope that specifies a ReadUncommitted IsolationLevel as per the suggestion from Hanselman here. My question is, do I still need to execute the oTS.Complete() call at the end of the using block even if this transaction scope was not built for the purpose of bridging object dependencies across 2 databases during an Insert, Update, or Delete? Ex:
      List<string> oStrings = null;
      using (SomeDataContext oCtxt = new SomeDataContext(sConnStr))
      using (TransactionScope oTS = new TransactionScope(TransactionScopeOption.Required,
                 new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted }))
      {
          oStrings = oCtxt.EStrings.ToList();
          oTS.Complete();
      }

    Read the article

  • Virtual PC 2007 as programming environment

    - by Gern Blandston
    I'd like to create a VM in Virtual PC 2007 for use as a development environment/sandbox for an existing ASP.NET application in Visual Studio 2005/SQL Server 2005 (and VSS for source control). I'm thinking that I need to create a 'base' copy of the environment (with the OS, Visual Studio, and SQL Server), and then copy that to a 'work' version that I do actual development in. I would be sharing this VM with one or two other developers who would be working on different parts of the app. Is this a good idea? What is the best way to get my app/databases in and out of the VM, and my changes into VSS? Is it just a copy from the host location to the VM share and back again? How do I keep everything synchronized? Thanks!

    Read the article

  • StructureMap - Injecting a dependency into a base class?

    - by David
    In my domain I have a handful of "processor" classes which hold the bulk of the business logic. Using StructureMap with default conventions, I inject repositories into those classes for their various IO (databases, file system, etc.). For example:
      public interface IHelloWorldProcessor
      {
          string HelloWorld();
      }

      public class HelloWorldProcessor : IHelloWorldProcessor
      {
          private IDBRepository _dbRepository;
          public HelloWorldProcessor(IDBRepository dbRepository)
          {
              _dbRepository = dbRepository;
          }
          public string HelloWorld()
          {
              return _dbRepository.GetHelloWorld();
          }
      }
    Now, there are some repositories that I'd like to be available to all processors, so I made a base class like this:
      public class BaseProcessor
      {
          protected ICommonRepository _commonRepository;
          public BaseProcessor(ICommonRepository commonRepository)
          {
              _commonRepository = commonRepository;
          }
      }
    But when my other processors inherit from it, I get a compiler error on each one saying that there's no constructor for BaseProcessor which takes zero arguments. Is there a way to do what I'm trying to do here? That is, to have common dependencies injected into a base class that my other classes can use, without having to write the injections into each one?

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?

    Read the article

  • Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a common-interest social networking community, currently in beta mode. Currently, I've started moving my old code for object retrieval to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure embedded in each function/method. Questions: Is there a reason to use PEAR (or similar) if I don't anticipate switching databases? Can PEAR interface with the MySQLi prepared statements I currently use? Will it help me separate table names from each method? (If not, what other design patterns might I want to research?) Will it slow down my site once I have a significantly large member base?

    Read the article

  • How to retrieve MySQL records as INSERT statements.

    - by Aglystas
    I'm trying to come up with the best method of synchronizing particular rows of two different database tables. For example, there are product tables in two different databases, as such:
      Origin database:      product { merchant_id, product_id, ... additional fields }
      Destination database: product { merchant_id, product_id, ... additional fields }
    So the schema is the same for both. However, I'm looking to select the records with a particular merchant_id, remove all records from the destination table that have that merchant_id, and replace them with the records from the origin database for the same merchant_id. My first thought was using mysqldump, parsing out the CREATE TABLE statements, and only running the INSERT statements, but that seems like a pain. So I was wondering if there is a better technique. I would think MySQL has some method of creating INSERT statements as output from a SELECT statement, so you can define how to insert specific record information into a new DB. Any help would be appreciated, thank you much.
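    For the "INSERT statements as output" part, mysqldump with --no-create-info and --where produces exactly that kind of per-merchant, data-only dump. When both schemas happen to be reachable from the same MySQL connection, the replace-by-merchant step can also be done directly; a hedged sketch with illustrative database names (assumes InnoDB so both statements run in one transaction):

      START TRANSACTION;

      DELETE FROM destination_db.product
      WHERE  merchant_id = 42;

      INSERT INTO destination_db.product
      SELECT *
      FROM   origin_db.product
      WHERE  merchant_id = 42;

      COMMIT;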

    Read the article

  • The conceptual process of populating related tables in a database (MySQL) from a CSV file

    - by user322772
    I'm new to relational databases, and all the material I've read covered primary and foreign keys, normal forms, and joins, but left out how to populate the database once it's created. How do you import a CSV file so the fields match their related tables? Say you were trying to build a beer database and had a CSV file with each line as a record:
      Header:   brewer, beer_name, country, city, state, beer_category, beer_type, alcohol_content
      Record 1: Anheuser-Busch, Budweiser, United States, St. Louis, Mo, Pale lager, Regular, 5.0%
      Record 2: Anheuser-Busch, Bud Light, United States, St. Louis, Mo, Pale lager, Light, 4.2%
      Record 3: Miller Brewing Company, Miller Lite, United States, Milwaukee, WI, Pale lager, Light, 4.2%
    You can create a "Brewer" table and a "Beer" table. When importing, how do you connect the primary keys between the tables?
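    A hedged sketch of one common way to do this in MySQL (all table and column names are illustrative, and the brewer/beer tables are assumed to already exist with an AUTO_INCREMENT key on brewer): load the CSV into a flat staging table, insert the distinct brewers, then insert the beers by joining back on the brewer name to pick up the generated key.

      CREATE TABLE staging_beer (
          brewer VARCHAR(100), beer_name VARCHAR(100), country VARCHAR(100),
          city VARCHAR(100), state VARCHAR(50), beer_category VARCHAR(50),
          beer_type VARCHAR(50), alcohol_content VARCHAR(10)
      );

      LOAD DATA LOCAL INFILE 'beers.csv'
      INTO TABLE staging_beer
      FIELDS TERMINATED BY ','
      IGNORE 1 LINES;                        -- skip the header row

      INSERT INTO brewer (name, country, city, state)
      SELECT DISTINCT brewer, country, city, state
      FROM   staging_beer;

      INSERT INTO beer (brewer_id, name, category, type, alcohol_content)
      SELECT b.brewer_id, s.beer_name, s.beer_category, s.beer_type, s.alcohol_content
      FROM   staging_beer s
      JOIN   brewer b ON b.name = s.brewer;  -- the join supplies the foreign key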

    Read the article

  • Loading .sql files from within PHP

    - by Josh Smeaton
    I'm creating an installation script for an application that I'm developing and need to create databases dynamically from within PHP. I've got it to create the database, but now I need to load in several .sql files. I had planned to open each file and mysql_query it a line at a time, until I looked at the schema files and realised they aren't just one query per line. So, please, how do I load an .sql file from within PHP (as phpMyAdmin does with its import command)?

    Read the article

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of non-SQL databases like RavenDB, CouchDB, MongoDB, etc.? I can see that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

    Read the article

  • MySQL - only update some rows if the table exists - do not want an error thrown

    - by Pete Oakey
    I want to run an update query. The query will be run against multiple databases, and not every database will have the table. I don't want the update to be attempted if the table does not exist, and I don't want any error to be thrown; I just want the update to be ignored. Any ideas? EDIT: just to be clear, the query is executed in an automated deployment; no human interaction is possible. EDIT 2: the logic to decide whether the update should run or not needs to be in the MySQL query itself. This is not being run through a command prompt or batch or managed code.
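    A hedged sketch of one pattern that keeps the decision inside MySQL itself (table, column and values are illustrative): build the statement from information_schema and run it as a prepared statement, so a missing table degrades to a harmless no-op instead of an error.

      SET @sql := (
          SELECT IF(COUNT(*) = 1,
                    'UPDATE my_table SET my_column = 1 WHERE some_id = 42',
                    'SELECT 1')            -- harmless no-op when the table is absent
          FROM   information_schema.tables
          WHERE  table_schema = DATABASE()
            AND  table_name   = 'my_table'
      );
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;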

    Read the article

  • SQL vs MySQL: Rules about aggregate operations and GROUP BY

    - by Phazyck
    In this book I'm currently reading while following a course on databases, the following example of an illegal query using an aggregate operator is given. Find the name and age of the oldest sailor. Consider the following attempt to answer this query:
      SELECT S.name, MAX(S.age)
      FROM   Sailors S
    The intent is for this query to return not only the maximum age but also the name of the sailors having that age. However, this query is illegal in SQL: if the SELECT clause uses an aggregate operation, then it must use only aggregate operations unless the query contains a GROUP BY clause! Some time later, while doing an exercise using MySQL, I faced a similar problem and made a mistake similar to the one mentioned. However, MySQL didn't complain and just spat out some tables which later turned out not to be what I needed. Is the query above really illegal in SQL but legal in MySQL, and if so, why is that? In what situation would one need to make such a query?
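    For contrast, a sketch of a legal way to express the same request (keeping the book's Sailors schema), pushing the aggregate into a subquery:

      SELECT S.name, S.age
      FROM   Sailors S
      WHERE  S.age = (SELECT MAX(S2.age) FROM Sailors S2);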

    Read the article

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of a quorum, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) are chosen in such a way that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?

    Read the article

  • About SQL Server security

    - by Felipe Fiali
    I have an ASP.NET application which runs under the Classic .NET AppPool in IIS, and a report to render from my website. The problem is that SQL Server keeps telling me it failed to create a connection to the data source, because the login failed for user IUSR. After adding that user directly to the database I could get the report to work, but I'm concerned about security. By doing that, am I opening my specified databases to all websites hosted on IIS? Or is that account identity-specific?

    Read the article
