Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • Is it possible to open a SQLite database from within Microsoft SQL Server Management Studio?

    - by Brian T Hannan
    Is there a way to open a .db file (a SQLite database file) from within Microsoft SQL Server Management Studio? Right now we have a process that grabs the data from a Microsoft SQL Server database and puts it into a SQLite database file, which is used by an application later on. Is there a way to open the SQLite database file so that it can be compared to the data inside the SQL Server database using only one SQL query? Is there a plug-in for Management Studio? Or maybe there is another way to do this same task using only one query. Right now we have to write two scripts, one for the SQL Server database and one for the SQLite database, then take the output from each in the same format and put each in its own OpenOffice spreadsheet file. Finally, we compare the two files to see if there are any differences. Perhaps there's a better way to do this. P.S. A lot of applications use SQLite internally: Well-Known Users Of SQLite
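
    One approach worth investigating is to expose the SQLite file to SQL Server as a linked server through a SQLite ODBC driver, which would make the comparison a single T-SQL query. The following is only a hedged sketch: it assumes a SQLite ODBC driver is installed with a system DSN named SQLiteDSN pointing at the .db file, and the table and column names (Customers, id) are placeholders for illustration.

        -- Hedged sketch: assumes a SQLite ODBC driver and a system DSN named
        -- 'SQLiteDSN'; table/column names are placeholders.
        EXEC master.dbo.sp_addlinkedserver
            @server     = N'SQLITE',
            @srvproduct = N'SQLite',
            @provider   = N'MSDASQL',   -- the OLE DB provider for ODBC sources
            @datasrc    = N'SQLiteDSN';
        GO

        -- Rows present in SQL Server but missing from the SQLite copy:
        SELECT s.id
        FROM dbo.Customers AS s
        EXCEPT
        SELECT q.id
        FROM OPENQUERY(SQLITE, 'SELECT id FROM Customers') AS q;

    Running the EXCEPT in both directions would catch rows missing on either side.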

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1GB and 10GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010, we will be increasing the size limits:

    * SQL Azure Web Edition database: from 1 GB to 5 GB
    * SQL Azure Business Edition database: from 10 GB to 50 GB

    Along with these changes come new price points, including the option to increase in increments of 10GB:

    Web Edition:
    * Up to 1 GB relational database = $9.99 / month
    * Up to 5 GB relational database = $49.95 / month

    Business Edition:
    * Up to 10 GB relational database = $99.99 / month
    * Up to 20 GB relational database = $199.98 / month
    * Up to 30 GB relational database = $299.97 / month
    * Up to 40 GB relational database = $399.96 / month
    * Up to 50 GB relational database = $499.95 / month

    Check out the full SQL Azure pricing. Related links: http://ukazure.ning.com (the UK community site) and Getting started with the Windows Azure Platform.

  • Eight New Oracle Database Assemblies Ready to Run In Your Oracle VM Cloud with Oracle Enterprise Manager 12c

    - by Adam Hawley
    By Sudip Datta, Senior Director, Oracle Enterprise Manager Product Management. This week, eight database virtual assemblies were released via EM 12c Self-Update. The database assemblies are already patched to Oracle-recommended levels. Customers running EM 12c in online mode (i.e. connected to My Oracle Support) will see the assemblies in their EM console. They can then deploy the assemblies using the self-service provisioning outlined in the Cloud Administration Guide. The EM 12c agent will be deployed along with the assemblies, so the databases will be managed automatically from the outset. You can also get a general demo of the cloud management features (including assembly deployment) at http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html. More database and middleware assemblies will follow soon.

  • Why Move My Oracle Database to New SPARC Hardware?

    - by rickramsey
    If you didn't manage to catch all the news during the proverbial firehose-down-the-throat that is Oracle OpenWorld, you'll enjoy these short recaps from Brad Carlile. He makes things clear in just a couple of minutes. Photograph copyright Edge of Day Photography, used with permission. Video: Latest Improvements to Oracle SPARC Processors with Brad Carlile. T5, M5, and M6: three wickedly fast processors that Oracle announced over the last year. Brad Carlile explains how much faster they are, and why they are better than previous versions. Video: Why Move Your Oracle Database to SPARC Servers with Brad Carlile. If I'm happy with how my Oracle Database 11g is performing, why should I deploy it on the new Oracle SPARC hardware? For the same reasons that you would want to buy a sports car that goes twice as fast AND gets better gas mileage, Brad Carlile explains. Well, if there are such dramatic performance improvements and cost savings, then why should I move up to Oracle Database 12c? -Rick. Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive.

        IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL
            DROP TABLE #Test;
        GO
        CREATE TABLE #Test (id INTEGER PRIMARY KEY CLUSTERED);

        INSERT #Test (id)
        SELECT V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 1000;

    Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10. One way to write that query would be:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id IN
        (
            101,102,103,104,105,106,107,108,109,
            111,112,113,114,115,116,117,118,119,
            121,122,123,124,125,126,127,128,129,
            131,132,133,134,135,136,137,138,139,
            141,142,143,144,145,146,147,148,149,
            151,152,153,154,155,156,157,158,159,
            161,162,163,164,165,166,167,168,169
        );

    That query produces a pretty efficient-looking query plan. Knowing that the source column is defined as an INTEGER, we could also express the query this way:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id >= 101
        AND T.id <= 169
        AND T.id % 10 > 0;

    We get a similar-looking plan. If you look closely, you might notice that the line connecting the two icons is a little thinner than before. The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return. The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0). Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate. Adding the opaque modulo expression results in SQL Server guessing at the selectivity. As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows.

    The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement. For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models. Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054. The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167).

    If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution. We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON:

        Table '#Test'. Scan count 63, logical reads 126, physical reads 0.
        Table '#Test'. Scan count 1, logical reads 2, physical reads 0.

    The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1). It is no coincidence that 126 = 63 * 2, by the way. It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing. There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.

    To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu). The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query. I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101. Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan.

    Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value. Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf). We can see the index depth using the INDEXPROPERTY function, or by using a DMV:

        SELECT S.index_type_desc, S.index_depth
        FROM sys.dm_db_index_physical_stats
        (
            DB_ID(N'tempdb'),
            OBJECT_ID(N'tempdb..#Test', N'U'),
            1, 1, DEFAULT
        ) AS S;

    Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected. There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101). It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169). Every row that meets the seek range condition is also tested against the Residual Predicate (id % 10 > 0), and is only returned if it matches that as well.

    You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks. It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster. Let’s run both query forms 10,000 times and measure the elapsed time:

        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;
        SET STATISTICS XML OFF;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id IN
            (
                101,102,103,104,105,106,107,108,109,
                111,112,113,114,115,116,117,118,119,
                121,122,123,124,125,126,127,128,129,
                131,132,133,134,135,136,137,138,139,
                141,142,143,144,145,146,147,148,149,
                151,152,153,154,155,156,157,158,159,
                161,162,163,164,165,166,167,168,169
            );
            SET @n -= 1;
        END;

        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());
        GO
        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id >= 101
            AND T.id <= 169
            AND T.id % 10 > 0;
            SET @n -= 1;
        END;

        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());

    On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms. The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail.

    When is a seek not a seek? When it is 63 seeks.

    © Paul White 2011 | email: [email protected] | twitter: @SQL_kiwi

  • PHP ORM style of querying

    - by Petah
    Ok so I have made an ORM library for PHP. It uses syntax like so (assume that $business_locations is an array):

        Business::type(Business::TYPE_AUTOMOTIVE)->
            size(Business::SIZE_SMALL)->
            left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, Business::id())->
            left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id())->
            where(Business::location_id(), SQL::in($business_locations))->
            group_by(Business::id())->
            select(SQL::count(BusinessOwner::id()));

    Which can also be represented as:

        $query = new Business();
        $query->set_type(Business::TYPE_AUTOMOTIVE);
        $query->set_size(Business::SIZE_SMALL);
        $query->left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, $query->id());
        $query->left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id());
        $query->where(Business::location_id(), SQL::in($business_locations));
        $query->group_by(Business::id());
        $query->select(SQL::count(BusinessOwner::id()));

    This would produce a query like:

        SELECT COUNT(`business_owners`.`id`)
        FROM `businesses`
        LEFT JOIN `business_owners`
            ON `business_owners`.`business_id` = `businesses`.`id`
        LEFT JOIN `owners`
            ON `owners`.`id` = `business_owners`.`owner_id`
        WHERE `businesses`.`type` = 'automotive'
            AND `businesses`.`size` = 'small'
            AND `businesses`.`location_id` IN (1, 2, 3, 4)
        GROUP BY `businesses`.`id`

    Please keep in mind that the syntax might not be perfectly correct (I only wrote this off the top of my head). Anyway, what do you think of this style of querying? Is the first method or the second better/clearer/cleaner/etc? What would you do to improve it?

  • Could someone help me understand SQL TDE Database encryption?

    - by SLC
    I don't quite follow how it works. According to the MSDN article, there is a big hierarchy of keys protecting other keys and passwords, and at some point the database is encrypted. You query the encrypted database and it works seamlessly. If you're able to simply connect to the database as normal, and don't have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect, run SELECT * FROM x, and the data is revealed. Sorry my question is a bit scattered; I am just very confused by the article.
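
    For reference, the key hierarchy the article describes maps onto a short, well-documented T-SQL setup sequence. The sketch below is hedged: the certificate and database names are placeholders. It also hints at the answer to the question: TDE encrypts data at rest (the data files and backups), not the result sets, so an authenticated login with SELECT permission still sees plain rows.

        -- Hedged sketch of the TDE setup chain; object names are placeholders.
        -- Each key in the hierarchy protects the one below it.
        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';
        CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'TDE certificate';
        GO
        USE MyDatabase;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
        GO
        ALTER DATABASE MyDatabase SET ENCRYPTION ON;

    Because pages are decrypted transparently as they are read, TDE protects stolen MDF/LDF files and backups; deciding who may run SELECT remains the job of logins and permissions.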

  • What simple offline GUI database should I use for this application?

    - by gcc
    I am looking for an open source application. The application should have:

    * database support (create two or three tables)
    * a GUI (so that what I have created can be seen)

    Example: assume that I have created a table X_table with columns | A | B | C | D |. After creating the table, I load data:

        | A | B  | C | D |
        | 1 | 11 | b | f |
        | 3 | 12 | a | o |   <- data
        | 4 | 13 | r | o |

    When I later open the application (not to load data), I want to see the data in a graphical interface. Are there any open source applications which have the above features? The application can be really simple:

    * no internet connection
    * support for only one database
    * static table creation (once created, never changed)

    The application should run on Ubuntu 12.04 and/or Windows. In other words, I want a database viewer and editor. EDIT: I should be able to load a PDF file, image, etc., or give the path of the file to the application. This link can be a reference for my question (the interface should be like this, just a list).

  • OOP Structure for web application

    - by Query
    Ok so I have a website in which users complete tasks to earn points. When they earn enough points, they rise in rank. The site, from my understanding, is very basic and executes only one or two queries per page. There is a user table, a support ticket table, and an orders table. All of these contain a relational row for username. Our class was familiarized with OOP back in high school with Java, but that was for video games, and I could grasp the concept of why you would need a Player class and an Enemy class. However, I don't understand its application to the web, at least not in my situation. I understand the user class might contain stuff like: getUsername, getPoints, getEmail, setEmail, addPoints (does this belong here? Or should only things the user can manipulate be here?), etc. But I'm at a loss with everything else, such as user registration. Can you help give me a framework that I could wrap my head around? Pointing me to a good eBook would help greatly.

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: Hi, I am going to ask a question which seems utopic to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.

    The idea: Suppose I have a database structure, in MySQL. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database, with its content. And it is not going to be just one synchronized copy; it could (and should) be multiple replications (supposing the basic case, this means, for example, ten copies all over the world). And, the most important thing: it must be secure. By secure I mean only real, accepted transactions will be synchronized with all the other database copies/clones, no matter how many there are.

    Note: since it would be quite difficult to make the synchronization in real time, I will design everything to make this feature dispensable. So it is not required.

    My auto-suggestion: this is how I am thinking to manage it. Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. [I think that rather than a DATETIME field, it will be an integer one, with the number of milliseconds passed since 1st January 2013, for example.] So each copy is going to ask the "neighbour copy" for new actions done since the last update, and execute them after checking they are allowed.

    Problem 1: the "neighbour copy" could be outdated too.
    Solution 1: do not ask just one neighbour; create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this would be inefficient if the number of clones grows too much).

    Problem 2: real-time global synchronization is not active. What if... someone at CLONE_ENTERPRISING inserts a row into TABLE... this row goes to every clone... someone at CLONE_FIXEMALL deletes this row... and at the same time, somewhere in an outdated clone, someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones)?
    Solution 2: easy stuff; force a GLOBAL synchronization before doing any new "depending-on-third-party-data action" (an edit, for example). This global synchronization will be unnecessary when making an INSERT, for instance.

    Note: well, someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row will exist twice. But it's the same as when we have one single database: in some cases we need to check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just engage in some trolling activity. This is not a problem, since good clones will always be somewhere. Those which went bad won't be of interest anymore.

    I really appreciate it if you read this far. I know this is not the perfect solution (it possibly has hundreds of holes), but it is my basic start. I will appreciate anything you can teach me now. Thanks a lot.

    PS: It could be that all this I am trying to do already exists and has its own name. Sorry for asking then (I'd thank you for that name anyway, if it exists).
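
    A minimal sketch of the action log this scheme implies, in MySQL; all names are invented for illustration. One correction to the idea above: a millisecond count since a 2013 epoch overflows a signed 32-bit INT in roughly 25 days, so the time identifier needs to be a BIGINT.

        -- Hedged sketch: an append-only action log for pull-based replication.
        -- Names are placeholders; BIGINT because millisecond counts overflow
        -- a signed 32-bit INT in about 25 days.
        CREATE TABLE action_log (
            action_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            ts_millis   BIGINT UNSIGNED NOT NULL,  -- ms since the chosen epoch
            origin_node VARCHAR(64)     NOT NULL,  -- which clone produced it
            statement   TEXT            NOT NULL,  -- the action instruction itself
            PRIMARY KEY (action_id),
            KEY idx_ts (ts_millis)
        ) ENGINE=InnoDB;

        -- What a clone asks a neighbour: everything since its last update.
        SELECT action_id, ts_millis, origin_node, statement
        FROM action_log
        WHERE ts_millis > 1360000000000  -- the caller's last-seen time identifier
        ORDER BY ts_millis;

    Each receiving clone would still have to validate the fetched instructions against its own rules before replaying them, which is where the "only real, accepted transactions" requirement has to be enforced.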

  • Value gets changed upon committing | CallableStatements

    - by Triztian
    Hello, I'm having a weird problem with a DAO class and a stored procedure. I use a CallableStatement object which takes 15 IN parameters. The value of the field id_color is retrieved correctly from the HTML forms, and it is even set up as it should be in the CallableStatement setter methods, but the moment it is sent to the database, id_color is overwritten by the value 3. Here's the context: I have the DAO.CoverDAO class, which handles the CRUD operations of this table:

        CREATE TABLE `cover_details` (
            `refno` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `shape` tinyint(3) unsigned NOT NULL,
            `id_color` tinyint(3) unsigned NOT NULL,
            `reversefold` bit(1) NOT NULL DEFAULT b'0',
            `x` decimal(6,3) unsigned NOT NULL,
            `y` decimal(6,3) unsigned NOT NULL DEFAULT '0.000',
            `typecut` varchar(10) NOT NULL,
            `cornershape` varchar(20) NOT NULL,
            `z` decimal(6,3) unsigned DEFAULT '0.000',
            `othercornerradius` decimal(6,3) unsigned DEFAULT '0.000',
            `skirt` decimal(5,3) unsigned NOT NULL DEFAULT '7.000',
            `foamTaper` varchar(3) NOT NULL,
            `foamDensity` decimal(2,1) unsigned NOT NULL,
            `straplocation` char(1) NOT NULL,
            `straplength` decimal(6,3) unsigned NOT NULL,
            `strapinset` decimal(6,3) unsigned NOT NULL,
            `spayear` varchar(20) DEFAULT 'Not Specified',
            `spamake` varchar(20) DEFAULT 'Not Specified',
            `spabrand` varchar(20) DEFAULT 'Not Specified',
            PRIMARY KEY (`refno`)
        ) ENGINE=MyISAM AUTO_INCREMENT=143 DEFAULT CHARSET=latin1 $$

    Covers are inserted by the following stored procedure:

        CREATE DEFINER=`root`@`%` PROCEDURE `putCover`(
            IN shape TINYINT,
            IN color TINYINT(3),
            IN reverse_fold BIT,
            IN x DECIMAL(6,3),
            IN y DECIMAL(6,3),
            IN type_cut VARCHAR(10),
            IN corner_shape VARCHAR(10),
            IN cutsize DECIMAL(6,3),
            IN corner_radius DECIMAL(6,3),
            IN skirt DECIMAL(5,3),
            IN foam_taper VARCHAR(7),
            IN foam_density DECIMAL(2,1),
            IN strap_location CHAR(1),
            IN strap_length DECIMAL(6,3),
            IN strap_inset DECIMAL(6,3)
        )
        BEGIN
            INSERT INTO `dbre`.`cover_details`
                (`shape`, `id_color`, `reversefold`, `x`, `y`, `typecut`,
                 `cornershape`, `z`, `othercornerradius`, `skirt`, `foamTaper`,
                 `foamDensity`, `strapLocation`, `strapInset`, `strapLength`)
            VALUES
                (shape, color, reverse_fold, x, y, type_cut, corner_shape,
                 cutsize, corner_radius, skirt, foam_taper, foam_density,
                 strap_location, strap_inset, strap_length);
        END

    As you can see, it basically just fills each field. Now, the CoverDAO.create(CoverDTO cover) method which creates the cover is like so:

        public void create(CoverDTO cover) throws DAOException {
            Connection link = null;
            CallableStatement query = null;
            try {
                link = MySQL.getConnection();
                link.setAutoCommit(false);
                query = link.prepareCall("{CALL putCover(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}");
                query.setLong(1, cover.getShape());
                query.setInt(2, cover.getColor());
                query.setBoolean(3, cover.getReverseFold());
                query.setBigDecimal(4, cover.getX());
                query.setBigDecimal(5, cover.getY());
                query.setString(6, cover.getTypeCut());
                query.setString(7, cover.getCornerShape());
                query.setBigDecimal(8, cover.getZ());
                query.setBigDecimal(9, cover.getCornerRadius());
                query.setBigDecimal(10, cover.getSkirt());
                query.setString(11, cover.getFoamTaper());
                query.setBigDecimal(12, cover.getFoamDensity());
                query.setString(13, cover.getStrapLocation());
                query.setBigDecimal(14, cover.getStrapLength());
                query.setBigDecimal(15, cover.getStrapInset());
                query.executeUpdate();
                link.commit();
            } catch (SQLException e) {
                throw new DAOException(e);
            } finally {
                close(link, query);
            }
        }

    The CoverDTO is made of accessor methods; the MySQL object basically returns a connection from a pool. Here is the prepared query with dummy but appropriate data:

        putCover(1,10,0,80.000,80.000,'F','Cut',0.000,0,15.000,'4x2',1.5,'A',10.000,5.000)

    As you can see everything is fine, yet when I write to the DB, instead of 10 in the second parameter a 3 is written. I have done the following:

    * Traced the id_color value to the create method: it still got replaced by a 3.
    * Hardcoded the value in the DAO create method: still replaced by a 3.
    * Called the procedure from MySQL Workbench: it worked fine.

    So I assume something is happening in the create method; any help is really appreciated.
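
    Since the proc behaves when called from Workbench, one way to narrow this down is to capture exactly what the JDBC layer sends to the server. A hedged diagnostic sketch, using MySQL's general query log (available as a table since MySQL 5.1):

        -- Hedged diagnostic sketch: record every statement the server receives,
        -- then run the Java create() call and inspect what actually arrived.
        SET GLOBAL log_output = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- ... execute CoverDAO.create(...) from the application here ...

        SELECT event_time, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 10;

        SET GLOBAL general_log = 'OFF';

    If the logged CALL already shows a 3 in the second position, the value is being changed before it reaches MySQL; if it shows 10, the change is happening on the server side.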

  • Cannot add SourceSafe Database as Visual Studio 2010 source control.

    - by CletusLoomis
    My issue is that I cannot add a SourceSafe database for source control within Visual Studio 2010. Our team was initially using VSS for source control in Visual Studio 2010. During an evaluation of TFS, I switched my source control to TFS. It will be a few weeks before a decision is made on TFS, so I needed to switch my source control back to VSS. However, I'm now unable to add a SourceSafe database in Visual Studio.

    Steps to reproduce in Visual Studio 2010:

    1) Access the 'Open SourceSafe Database' form via Tools > Options > Source Control > Plug-in Settings > Advanced, or via File > Source Control.
    2) The list of available databases is blank, so I choose 'Browse'.
    3) I browse to the srcsafe.ini file for my VSS database and select it.
    4) I'm prompted to confirm the Database Name; click OK.
    5) The database does not appear in the 'Open SourceSafe Database' form. The list of available databases is still blank.

    Note that I can add the database fine outside of Visual Studio using VSS directly. However, the databases I add via VSS do not appear in the Visual Studio forms. I'm suspicious that this is related to "down-grading" from TFS to VSS, which may not have been heavily tested at MS. Any assistance is appreciated.

  • Create an SQL Express 2008 database in C# code, but login fails when trying to connect with a sysadmin account

    - by Andrés Gonzales
    I have a piece of code that creates a SQL Server Express 2008 database at runtime, and then tries to connect to it to execute a database initialization script in Transact-SQL. The code that creates the database is the following:

        private void CreateDatabase()
        {
            using (var connection = new SqlConnection(
                "Data Source=.\\sqlexpress;Initial Catalog=master;" +
                "Integrated Security=true;User Instance=True;"))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = "CREATE DATABASE " + m_databaseFilename +
                        " ON PRIMARY (NAME=" + m_databaseFilename +
                        ", FILENAME='" + this.m_basePath + m_databaseFilename + ".mdf')";
                    command.ExecuteNonQuery();
                }
            }
        }

    The database is created successfully. After that, I try to connect to the database to run the initialization script, using the following code:

        private void ExecuteQueryFromFile(string filename)
        {
            string queryContent = File.ReadAllText(m_filePath + filename);
            this.m_connectionString = string.Format(
                @"Server=.\SQLExpress; Integrated Security=true;Initial Catalog={0};",
                m_databaseFilename);
            using (var connection = new SqlConnection(m_connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = queryContent;
                    command.CommandTimeout = 0;
                    command.ExecuteNonQuery();
                }
            }
        }

    However, the connection.Open() statement fails, throwing the following exception:

        Cannot open database "TestData" requested by the login. The login failed.
        Login failed for user 'MYDOMAIN\myusername'.

    I am completely puzzled by this error, because the account I am trying to connect with has sysadmin privileges, which should allow me to connect to any database (notice that I use a connection to the master database to create the database in the first place).

  • How to organize and manage multiple database credentials in an application?

    - by Polaris878
    Okay, so I'm designing a stand-alone web service (using RestLET as my framework). My application is divided into 3 layers:

    * Data layer: just above the database; provides APIs for connecting to/querying the database, and a database object.
    * Object layer: responsible for serialization from the data layer; provides objects which the client layer can use without worrying about the database.
    * Client layer: this layer is the RestLET web service; it basically just creates objects from the object layer and fulfills web service requests.

    Now, for each object I create in the object layer, I want to use different credentials (so I can sandbox each object). The object layer should not know the exact credentials (i.e. the login/password/DB URL, etc.). What would be the best way to manage this? I'm thinking that I should have a superclass Database object in my data layer, and each subclass will contain the required log-in information. This way my object layer can just do Database db = new SubDatabase(); and then continue using that database. At the client level, they would just be able to do ItemCollection items = new ItemCollection(); and have no idea of, or control over, the database that gets connected. I'm asking this because I am trying to make my platform extensible, so that others can easily create services on top of it. If anyone has experience with these architectural problems or how to manage this sort of thing, I'd appreciate any insight or advice. Feel free to ask questions if this is confusing. Thanks! My platform is Java, the REST framework I'm using is RestLET, and my database is MySQL.

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server, copy it to some other location, and have it copy only the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. In the end we are copying only about 3%-20% of databases which are as large as 500GB.

    I have been using the SMO assemblies to achieve this. This is my first time working with SMO, and it took a while to create a generic way to copy the schema objects, filegroups, etc. (it actually helped find some bad stored procs). Overall I have a working script which is lacking in performance (and at times times out), and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (> 6GB), it reaches my timeout period of 1hr. Here is the core code for copying table data; the script is written in PowerShell:

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader()
        $bulkData.WriteToServer($reader)  # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000, but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also, some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period
        elapsed prior to completion of the operation or the server is not responding.

    It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?
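
    For reference, the two server-side tweaks mentioned in the update correspond to a couple of T-SQL statements. This is a hedged sketch: the database and table names are placeholders, and the table-lock behaviour can equally be requested from code via SqlBulkCopyOptions.TableLock.

        -- Hedged sketch of the tweaks described above; names are placeholders.
        -- Simple recovery keeps logging of bulk loads minimal.
        ALTER DATABASE [TargetDb] SET RECOVERY SIMPLE;

        -- Make bulk loads into this table take a table lock by default,
        -- an alternative to passing SqlBulkCopyOptions.TableLock in code.
        EXEC sp_tableoption 'dbo.BigTable', 'table lock on bulk load', 'ON';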

  • PHP vs phpMyAdmin

    - by user330306
    Hi there, I've got this code, which works 100% when I execute it in phpMyAdmin:

        CREATE TEMPORARY TABLE Searches (id INT, dt DATETIME);
        CREATE TEMPORARY TABLE Searches1 (id INT, dt DATETIME, count INT);

        INSERT INTO Searches (id, dt)
        SELECT a.id, NOW() FROM tblSavedSearches a;

        INSERT INTO Searches1 (id, dt, count)
        SELECT b.savedSearchesId,
               (SELECT c.dt FROM tblSavedSearchesDetails c
                WHERE b.savedSearchesId = c.savedSearchesId
                ORDER BY c.dt DESC LIMIT 1) AS 'dt',
               COUNT(b.savedSearchesId) AS 'cnt'
        FROM tblSavedSearchesDetails b
        GROUP BY b.savedSearchesId;

        INSERT INTO tblSavedSearchResults (savedSearchId, DtSearched, isEnabled)
        SELECT id, NOW(), 0 FROM Searches
        WHERE NOT id IN (SELECT savedSearchId FROM tblSavedSearchResults);

        UPDATE tblSavedSearchResults
        INNER JOIN Searches1 ON tblSavedSearchResults.savedSearchId = Searches1.id
        SET tblSavedSearchResults.DtSearched = Searches1.dt,
            tblSavedSearchResults.isEnabled = 1;

    However, when I put the same code in PHP, as below, it generates an error:

        $dba = DbConnect::CreateDbaInstance();
        $query = "";
        $query .= "Create Temporary Table Searches ( id int, dt datetime); ";
        $query .= "Create Temporary Table Searches1 ( id int, dt datetime, count int); ";
        $query .= "insert into Searches(id, dt) select a.id, now() from tblSavedSearches a; ";
        $query .= "insert into Searches1(id, dt, count) ";
        $query .= "select ";
        $query .= " b.savedSearchesId, ";
        $query .= " (select c.dt from tblSavedSearchesDetails c where b.savedSearchesId = c.savedSearchesId order by c.dt desc limit 1) as 'dt', ";
        $query .= " count(b.savedSearchesId) as 'cnt' ";
        $query .= "from tblSavedSearchesDetails b ";
        $query .= "group by b.savedSearchesId; ";
        $query .= "insert into tblSavedSearchResults(savedSearchId,DtSearched,isEnabled) ";
        $query .= "select id,now(),0 from Searches where not id in (select savedSearchId from tblSavedSearchResults); ";
        $query .= "update tblSavedSearchResults ";
        $query .= "inner join Searches1 on tblSavedSearchResults.savedSearchId = Searches1.id ";
        $query .= "Set tblSavedSearchResults.DtSearched = Searches1.dt, tblSavedSearchResults.isEnabled = 1; ";
        $dba->DbQuery($query) or die(mysql_error());

    I get the following error:

        You have an error in your SQL syntax; check the manual that corresponds to
        your MySQL server version for the right syntax to use near 'Create Temporary
        Table Searches1 ( id int, dt datetime, count int) insert into S' at line 1

    Please, if someone could assist me with this... Thanks.
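
    The error suggests the whole batch is being sent as one string: the classic PHP mysql_* query call only accepts a single statement at a time, while phpMyAdmin splits the text into separate statements before sending them. One hedged workaround that stays in SQL is to wrap the batch in a stored procedure (created once, for example from phpMyAdmin) so the PHP side only ever sends a single CALL. The procedure name is invented for illustration, and only the first statements are repeated here.

        -- Hedged sketch: bundle the batch into one procedure so PHP issues a
        -- single statement: CALL refresh_saved_search_results();
        DELIMITER $$
        CREATE PROCEDURE refresh_saved_search_results()
        BEGIN
            CREATE TEMPORARY TABLE Searches (id INT, dt DATETIME);
            CREATE TEMPORARY TABLE Searches1 (id INT, dt DATETIME, count INT);

            INSERT INTO Searches (id, dt)
            SELECT a.id, NOW() FROM tblSavedSearches a;

            -- ... the remaining INSERT and UPDATE statements from above ...

            DROP TEMPORARY TABLE Searches, Searches1;
        END $$
        DELIMITER ;

    Alternatively, issuing each of the five statements as its own DbQuery() call should work with no schema changes.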

  • Query on MVVM pattern in WPF?

    - by Ashish Ashu
    I am implementing the MVVM pattern in my WPF application. My application's main window is divided into four parts:

    1. A main menu on the top.
    2. An Outlook-style navigation control on the left.
    3. A list view in the middle.
    4. Another list view on the bottom.

    The navigation control shows different setting (configuration) controls in its tab items. All four of the above are user controls placed in the main window. Corresponding to each user control there is a separate view model, which is bound in the XAML of that control; however, the model class remains the same between all the view models. The main window also has a separate view model, which is likewise bound in its XAML. Please help me out in framing a design in which the view models of all the controls above can interact with each other. Please let me know if my question is not clear to you!

  • Sync database with filter using SyncOrchestrator with Sync Framework 2.0

    - by Flo
    Hi, I want to synchronize two SQL databases, but since one of the databases only requires a subset of the data, I am looking for a filter option. Is there a possibility to add a filter to the SyncOrchestrator, or do I have to add the filter to the SyncProvider? According to this thread: http://social.microsoft.com/Forums/en-US/uklaunch2007ado.net/thread/35d4deb8-a861-4fe3-a395-d175e14c353f it is not possible to filter with the DbSyncProvider. Quote: "I understand your scenario, and the behavior of the DbSyncProvider is due to the current limitation. DbSyncProvider is built on top of the Microsoft Sync Framework that can support filtering. Unfortunately, DbSyncProvider does not yet." But that post is quite old; maybe that has changed now. I am working with this example at the moment: http://msdn.microsoft.com/en-us/library/cc807255.aspx but I can't figure out how to add filtering here.

  • Cannot iterate over a collection of Anonymous Types created from a LINQ Query in VB.NET

    - by Atari2600
    Ok everyone, I must be missing something here. Every LINQ example I have seen for VB.NET anonymous types claims I can do something like this:

        Dim Info As EnumerableRowCollection = pDataSet.Tables(0).AsEnumerable

        Dim Infos = From a In Info _
                    Select New With {.Prop1 = a("Prop1"), .Prop2 = a("Prop2"), .Prop3 = a("Prop3")}

    Now when I go to iterate through the collection (see example below), I get an error that says "Name 'x' is not declared":

        For Each x In Infos
            ...
        Next

    It's like VB.NET doesn't understand that Infos is a collection of anonymous types created by LINQ, and wants me to declare x as some type. (Wouldn't this defeat the purpose of an anonymous type?) I have added the references to System.Data.Linq and System.Data.DataSetExtensions to my project. Here is what I am importing in the class:

        Imports System.Linq
        Imports System.Linq.Enumerable
        Imports System.Linq.Queryable
        Imports System.Data.Linq

    Any ideas?

  • Image map popup on rollover, with popup text drawn from database

    - by lbholland
    I have a custom map of the USA with about 20 polygonal hot spots. I would like a box to pop up next to each hot spot on hover, with text and links drawn from my DB specific to the location. I would have thought this is a common situation, but I can't find a solution that works. I tried using an asp:ImageMap and an AJAX popup extender, but you can't assign IDs to hotspots and it doesn't support mouseover events. I tried CSS with an HTML image map, but I can't figure out how to use CSS solutions with polygonal hot spots, and I also don't know how to link it up to get the data from the DB without ASP targets (I'm not very familiar with jQuery, which I am guessing would work). Anyone know of any simple-ish solutions out there?

  • Complex SQL query help on aggregating values for nested subquery

    - by François Beausoleil
    Hi! I have people, companies, employees, events and event kinds. I'm making a report/follow-up sheet where people, companies and employees are the rows, and the columns are event kinds. Event kinds are simple values describing "Promised Donation", "Received Donation", "Phoned", "Followed up" and such. Event kinds are ordered:

        CREATE TABLE event_kinds (id, name, position);

    Events hold the actual reference to the event:

        CREATE TABLE events (id, person_id, company_id, referrer_id, event_kind_id, created_at);

    referrer_id is another reference to people. It is the person who sent the information/tip along, and is an optional field, although I sometimes want to filter on an event_kind that has a specific referrer, while I don't for other event kinds. Notice I don't have an employee ID reference. The reference exists, but is implied. I have application code to validate that person_id and company_id really reference an employee record. The other tables are pretty basic:

        CREATE TABLE people (id, name);
        CREATE TABLE companies (id, name);
        CREATE TABLE employees (id, person_id, company_id);

    I'm trying to achieve the following report:

        Referrer              Phoned      Promised    Donated
        Francois              Feb 16th    Feb 20th    Mar 1st
        Apple (Steve Jobs)    Steve Ballmer    Mar 3rd
        IBM                   Bill Gates       Mar 7th

    The first row is a people record, the 2nd is an employee, and the 3rd is a company. If I asked for referrer Bill Gates with Phoned event kinds, I'd only see the 3rd row, while asking for Steve and Phoned would return no rows. Right now, I do 3 queries: one for companies, one for people and a last one for employees. I want the event kind columns to be ordered, but I do that in application code and show it properly there. Here's where I'm at so far:

        SELECT companies.id, companies.name,
            (SELECT events.id FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id = companies.id
               AND events.person_id IS NULL
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9,
            (SELECT events.id FROM events
             WHERE events.company_id = companies.id
               AND events.person_id IS NULL
               AND events.event_kind_id = 10
             ORDER BY created_at DESC LIMIT 1) event_kind_10,
            (SELECT events.created_at FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id = companies.id
               AND events.person_id IS NULL
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "companies"

        SELECT people.id, people.name,
            (SELECT events.id FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id IS NULL
               AND events.person_id = people.id
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9,
            (SELECT events.id FROM events
             WHERE events.company_id IS NULL
               AND events.person_id = people.id
               AND events.event_kind_id = 10
             ORDER BY created_at DESC LIMIT 1) event_kind_10,
            (SELECT events.created_at FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id IS NULL
               AND events.person_id = people.id
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "people"

        SELECT employees.id, employees.company_id, employees.person_id,
            (SELECT events.id FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id = employees.company_id
               AND events.person_id = employees.person_id
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9,
            (SELECT events.id FROM events
             WHERE events.company_id = employees.company_id
               AND events.person_id = employees.person_id
               AND events.event_kind_id = 10
             ORDER BY created_at DESC LIMIT 1) event_kind_10,
            (SELECT events.created_at FROM events
             WHERE events.referrer_id = 1470
               AND events.company_id = employees.company_id
               AND events.person_id = employees.person_id
               AND events.event_kind_id = 9
             ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "employees"

    I rather suspect I'm doing this wrong; there should be an "easier" way to do it. One other filter criterion would be to filter on people/company names: WHERE LOWER(companies.name) LIKE '%apple%'. Note that I'm ordering by the dates of event_kind_9 here, and a secondary sort is by person/company name. To summarize: I want to paginate the result set, find the latest event for each cell, order the result set by the date of the latest event and by company/person name, and filter by referrer in some event kinds, but not others. For reference, I'm using PostgreSQL, from Ruby, ActiveRecord/Rails. The solution is pure SQL though.
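
    Since the platform is PostgreSQL, one hedged simplification is to fetch the latest event per (entity, event kind) in a single pass with DISTINCT ON, and let the application pivot the rows into columns. This is a sketch against the people side only, with the referrer filter applied to one event kind as described above; the literal IDs are the ones from the question.

        -- Hedged sketch (PostgreSQL): latest event per person per event kind.
        -- DISTINCT ON keeps the first row of each (person_id, event_kind_id)
        -- group under the ORDER BY, i.e. the most recent event.
        SELECT DISTINCT ON (e.person_id, e.event_kind_id)
               e.person_id, e.event_kind_id, e.created_at, e.id
        FROM events AS e
        WHERE e.company_id IS NULL
          AND (e.event_kind_id <> 9 OR e.referrer_id = 1470)  -- referrer filter on kind 9 only
        ORDER BY e.person_id, e.event_kind_id, e.created_at DESC;

    The application would then pivot these rows into the Phoned/Promised/Donated columns and apply the name sort and pagination over the pivoted set.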

  • Query on MVVM design pattern in WPF.

    - by Ashish Ashu
    I am using the MVVM architecture. I have a usercontrol UC as the view; ModelData is the model class; and a view model (UCViewModel) is bound to the usercontrol. I have three more usercontrols inside the usercontrol UC (discussed above); let's say uc1, uc2 and uc3. The visibility of uc1, uc2 and uc3 inside UC depends on the type selected (whichever radio button is selected). Since UC is bound to UCViewModel, I have to do all the stuff related to uc1, uc2 and uc3 inside UCViewModel. Can I have a separate view model for each of uc1, uc2 and uc3? If yes, how can I do that? Please help!

  • Merge aspnetdb database with another one?

    - by Xaisoft
    I have an aspnetdb and I have created another aspnetdb for another website, but instead of starting from scratch, I would like to import all the data from the one that has users and other data into the new aspnetdb. Is there a way to do this? Are any tools available?
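
    For what it's worth, the membership schema can be copied table-by-table with INSERT ... SELECT across the two databases. A hedged sketch below: OldDb and NewDb are placeholder names, it assumes both databases are on the same server and share the same application name (so ApplicationId values can be mapped), and parent tables must be copied before child tables to satisfy the foreign keys. Back up both databases first.

        -- Hedged sketch: copy users between two aspnetdb databases.
        -- OldDb/NewDb are placeholders; copy aspnet_Users before the
        -- dependent tables (aspnet_Membership, role mappings, profiles).
        INSERT INTO NewDb.dbo.aspnet_Users
            (ApplicationId, UserId, UserName, LoweredUserName,
             MobileAlias, IsAnonymous, LastActivityDate)
        SELECT na.ApplicationId, u.UserId, u.UserName, u.LoweredUserName,
               u.MobileAlias, u.IsAnonymous, u.LastActivityDate
        FROM OldDb.dbo.aspnet_Users AS u
        JOIN OldDb.dbo.aspnet_Applications AS oa
          ON oa.ApplicationId = u.ApplicationId
        JOIN NewDb.dbo.aspnet_Applications AS na
          ON na.LoweredApplicationName = oa.LoweredApplicationName
        WHERE u.UserId NOT IN (SELECT UserId FROM NewDb.dbo.aspnet_Users);

    aspnet_Membership (which holds the password hashes and salts) and the role tables would be copied the same way; SQL Server's import/export wizard or third-party data-compare tools can also do this.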

  • ODBC Connection String Problem

    - by Brett
    Hi there, I am having major trouble connecting to my database via ODBC. The db is local (but I have a mirror on a virtual machine), so I am trying to use the connectionstring: Dsn=MonetDB;host=TARBELL where TARBELL is the name of my computer. However, it doesn't connect. BUT, this string does: Dsn=MonetDB;host=localhost as does Dsn=MonetDB Can anyone explain this? I am at a complete loss. I have taken down my firewalls (at least until I get this figured out), so that can't be the problem. I eventually want to change the TARBELL to the mirrored virtual machine running another instance of the database. Many thanks, Brett

  • SAAS architecture and Salesforce database architecture

    - by Farax
    Hello all, I am architecting a software project and I want to make it a SAAS (Software as a Service) offering. I want to model my application along the lines of Salesforce. I really like their customization features, but I am not sure how they actually implement them. I read that they create an ID for every field that is required and then store the corresponding data too. Can anyone guide me as to how this is possible? For example, if I want to store an employee record: 2 fields (firstname, lastname) are already given, and the user adds a third field (say DOB); how is the data going to be stored? I would also appreciate it if someone could point me to some resources with practical examples of implementing a SAAS architecture. Thanks
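
    What the question describes is usually implemented as an entity-attribute-value (EAV) or metadata-driven schema: each custom field is a row in a metadata table, and its values are rows keyed by that field's ID. A hedged sketch of the general pattern follows; the table and column names are invented for illustration, and Salesforce's actual internal schema is not public.

        -- Hedged EAV sketch: tenant-defined fields live in metadata, values in
        -- a generic value table keyed by field ID. Names are illustrative only.
        CREATE TABLE custom_fields (
            field_id   INT          NOT NULL PRIMARY KEY,
            tenant_id  INT          NOT NULL,
            entity     VARCHAR(50)  NOT NULL,   -- e.g. 'employee'
            field_name VARCHAR(50)  NOT NULL,   -- e.g. 'DOB'
            data_type  VARCHAR(20)  NOT NULL    -- e.g. 'date'
        );

        CREATE TABLE custom_field_values (
            record_id INT NOT NULL,             -- the employee row this belongs to
            field_id  INT NOT NULL REFERENCES custom_fields(field_id),
            value     VARCHAR(255),             -- stored as text, cast on read
            PRIMARY KEY (record_id, field_id)
        );

        -- Adding the user-defined DOB field and a value for employee 42:
        INSERT INTO custom_fields VALUES (1, 7, 'employee', 'DOB', 'date');
        INSERT INTO custom_field_values VALUES (42, 1, '1980-05-17');

    Reads then join the base employee table to custom_field_values and pivot. The trade-off is flexibility versus harder querying and weaker typing, which is why such systems lean heavily on an application-level metadata layer.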
