Search Results

Search found 31022 results on 1241 pages for 'compact database'.


  • In Visio 2010, how can I create a mandatory, non-identifying relationship between two database tables?

    - by Cam Jackson
    I'm working in MS Visio 2010. This is the relevant part of my ERD: the relationship between Event and Adventure is correct: there's a foreign key from Event to Adventure, and that FK is part of Event's primary key. What I can't figure out is how to make the relationship line from Adventure to AccomodationType work the same way, without making that relationship part of the PK of Adventure. When I look at the 'Miscellaneous' properties of that relationship line, I want it to be: Cardinality: Zero or more; Relationship type: Non-identifying; Child has parent: Not optional (mandatory). But the checkbox for the third property is greyed out, and it toggles between True/False as I make the relationship Non-identifying/Identifying. The only workaround I could find was to disconnect the two columns from the 'Definition' tab, which re-enables the 'Optional' checkbox, but then I lose the foreign key property on the accomType column, and while the relationship symbols are correct, the line remains dotted. Any ideas, anyone?

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS, and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron. Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?
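    (A hedged note for readers hitting the same symptom: ADS has traditionally communicated and run its discovery process over UDP, not only TCP, so a TCP-only firewall rule would explain both errors above. A sketch of the corresponding Windows firewall rules — port 1234 is the question's example; ADS defaults to 6262 unless configured otherwise:)

        REM Open both protocols on the ADS port; the UDP rule is the one a TCP-only setup misses.
        netsh advfirewall firewall add rule name="ADS TCP 1234" dir=in action=allow protocol=TCP localport=1234
        netsh advfirewall firewall add rule name="ADS UDP 1234" dir=in action=allow protocol=UDP localport=1234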

    Read the article

  • Recommendation for tuning 100's of SQL Databases

    - by wayne
    Hi, I'm running several SQL servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index according to each database's usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
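    (A hedged sketch of one low-effort starting point: SQL Server 2005's missing-index DMVs surface index candidates per database without a manual profiling pass. The TOP clause and seek threshold below are illustrative assumptions, and the output still needs human review before creating any index:)

        -- Run in each database; these are the standard missing-index DMVs in SQL Server 2005+.
        SELECT TOP 20
            mid.statement AS table_name,
            migs.user_seeks,
            migs.avg_user_impact,
            mid.equality_columns,
            mid.inequality_columns,
            mid.included_columns
        FROM sys.dm_db_missing_index_group_stats AS migs
        JOIN sys.dm_db_missing_index_groups AS mig
            ON migs.group_handle = mig.index_group_handle
        JOIN sys.dm_db_missing_index_details AS mid
            ON mig.index_handle = mid.index_handle
        WHERE mid.database_id = DB_ID()
          AND migs.user_seeks > 100
        ORDER BY migs.user_seeks * migs.avg_user_impact DESC;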

    Read the article

  • Simple script to back up PostgreSQL databases

    - by Mick
    Hello, I wrote a simple batch script to back up PostgreSQL databases, but I've run into one strange problem: can the pg_dump command specify a password? Here is the batch script:

        REM script to back up PostgreSQL databases
        @ECHO OFF
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
            SET dow=%%i
            SET month=%%j
            SET day=%%k
            SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%
        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FILENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FILENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FILENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FILENAME4=%db4%_%datestr%.sql
        ECHO Backup file names are %FILENAME1%, %FILENAME2%, %FILENAME3%, %FILENAME4%
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE!

    Please give me advice. Regards, Mick
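    (A hedged aside on the password question, since pg_dump deliberately has no password option on the command line: it reads the PGPASSWORD environment variable, or a pgpass file — %APPDATA%\postgresql\pgpass.conf on Windows. A minimal sketch, with a placeholder password:)

        REM Supply the password via the environment for a non-interactive dump;
        REM storing it in pgpass.conf is the more secure alternative.
        SET PGPASSWORD=secret
        pg_dump -U postgres -h localhost -p 5432 opennms > D:\opennms_backup.sql
        SET PGPASSWORD=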

    Read the article

  • How does MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of part or whole tables? Or does it rely completely on the OS page cache? (I'm guessing it does, since Facebook's SSD cache built for MySQL was done at the OS level: https://github.com/facebook/flashcache/.) Does Linux by default use all of the available RAM for the page cache? So if RAM size exceeds table size plus memory used by processes, then when the MySQL server starts and reads the whole table for the first time it will be from disk, and from that point on the whole table is in RAM? So using Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same size RAM and database?
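    (A hedged note: InnoDB does its own caching — its buffer pool holds both data and index pages — rather than relying only on the OS page cache, and it is commonly configured to bypass the OS cache to avoid double buffering. A my.cnf sketch with illustrative values:)

        # InnoDB caches data AND index pages itself in the buffer pool.
        [mysqld]
        innodb_buffer_pool_size = 24G    # typically sized to most of RAM on a dedicated server
        innodb_flush_method = O_DIRECT   # skip the OS page cache so pages aren't cached twice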

    Read the article

  • SQL Server database on an external hard disk drive

    - by Achilles
    Due to some security concerns, my boss has asked me to store all sensitive data on external/removable storage like a USB stick or an external HDD, and this specifically includes the MDF/NDF/LDF files of the SQL Server 2008 instance we're running. I've been reading for the last three days with no luck finding a solution. Is there any solution at all? Has anybody ever done such a thing?
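    (A hedged sketch: SQL Server will create or attach database files on any volume with a stable drive letter, including an external disk; the practical caveats — the volume must be present whenever the service starts, and encrypting a removable disk is where the real security benefit lies — are observations, not product guarantees. Names and paths below are illustrative:)

        -- Create the database directly on the external drive (assumed mounted as E:).
        CREATE DATABASE SensitiveData
        ON PRIMARY (NAME = SensitiveData_data, FILENAME = 'E:\SqlData\SensitiveData.mdf')
        LOG ON     (NAME = SensitiveData_log,  FILENAME = 'E:\SqlData\SensitiveData_log.ldf');

        -- Take the database offline before unplugging the drive, and bring it back after:
        ALTER DATABASE SensitiveData SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE SensitiveData SET ONLINE;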

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192 GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATS), I've seen the server transporting 450 MB/s at 25,000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of user mode and wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10 MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock-list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
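    (A hedged starting kit for that kind of investigation — the command names are standard DB2 9.7 and Linux tools, but flags should be checked against the local version; MYDB is a placeholder:)

        # Correlate DB2's view with the OS view while the slow query runs:
        db2pd -db MYDB -wlocks                        # any lock waits, despite LOCKSIZE=TABLE?
        db2pd -db MYDB -tcbstats                      # table/index access counters
        db2 "GET SNAPSHOT FOR APPLICATIONS ON MYDB"   # per-application rows read, sorts, waits
        iostat -x 5                                   # per-device utilization/await at the OS level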

    Read the article

  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    Hi, my company has two Exchange 2010 SP1 servers in a DAG configuration, running on Windows Server 2008 R2 in a test environment. Today I wanted to test my backups, so I restored the backup data to a location other than the original one. I dismounted the database, deleted all files under the database location, and then copied the files back from the backup location to the database location. When I try to mount the database, the error below appears:

        --------------------------------------------------------
        Microsoft Exchange Error
        --------------------------------------------------------
        Failed to mount database 'mail2'.

        mail2
        Failed
        Error: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].

        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]

        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]

        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestions? Thanks!
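    (A hedged pointer for this symptom: a database restored by straight file copy usually fails to mount until it is in a clean-shutdown state and its log stream is consistent. The checks below use eseutil, which ships with Exchange; the paths and the E00 log prefix are assumptions for illustration:)

        REM Check the database header; look for "State: Clean Shutdown" vs "Dirty Shutdown".
        eseutil /mh "D:\ExchangeDB\mail2.edb"

        REM If dirty, replay the transaction logs (soft recovery) before mounting.
        eseutil /R E00 /l "D:\ExchangeLogs" /d "D:\ExchangeDB"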

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 double-sided platters, and an average seek time of 10 ms. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where NO record can span 2 blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store 100,000 records, but I am not sure how to find the number of surfaces required. I have only calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB. But I don't know how to calculate the number of surfaces... Could anyone help me get the answer? Thanks a lot!
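    (A worked sketch of the missing step, using the asker's own numbers and 1 KB = 1024 bytes: each 1024-byte block holds floor(1024/100) = 10 records, so 100,000 records need 10,000 blocks, as calculated. A track holds 50 x 512 B = 25 KB = 25 blocks, so a surface holds 2000 x 25 = 50,000 blocks. Since 10,000 / 50,000 = 0.2, the file fits on a single surface; stored sequentially it occupies 10,000 / 25 = 400 tracks of that one surface.)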

    Read the article

  • Limiting database security

    - by Torbal
    A number of texts state that the most important aspects offered by a DBMS are availability, integrity, and secrecy. As part of a homework assignment I have been tasked with naming attacks that would affect each aspect. This is what I have come up with - are they any good?

        Availability - DDoS attack
        Integrity/Secrecy - SQL injection attack
        Integrity - Use of trojans to gain access to objects with higher security roles

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData.

    Introduction to OData

    The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by SharePoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name "Dallas." Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org and http://msdn.microsoft.com/en-us/data/bb931106.aspx. When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table.

    Create the Database and Data Model

    The MoviesDB database is a simple database that contains a Movies table. You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry.

    Create a WCF Data Service

    You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (Figure 1 - Adding a WCF Data Service). Name the new WCF Data Service MovieService.svc. Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify.

    Listing 1 - New WCF Data Service File

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;
        using System.ServiceModel.Web;
        using System.Web;

        namespace WebApplication1
        {
            public class MovieService : DataService< /* TODO: put your data source class name here */ >
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                    // Examples:
                    // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                    // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. The updated MovieService.svc is contained in Listing 2.

    Listing 2 - MovieService.svc

        using System.Data.Services;
        using System.Data.Services.Common;

        namespace WebApplication1
        {
            public class MovieService : DataService<MoviesDBEntities>
            {
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    That's all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies. The request must be a POST request, and the Movie must be represented as JSON.

    Using jQuery with OData

    The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol.

    Listing 3 - Default.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>jQuery OData Insert</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/json2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label>
                <input id="title" />
                <br />
                <label>Director:</label>
                <input id="director" />
            </form>
            <button id="btnAdd">Add Movie</button>

            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    // Convert the form into an object
                    var data = {
                        Title: $("#title").val(),
                        Director: $("#director").val()
                    };

                    // JSONify the data
                    data = JSON.stringify(data);

                    // Post it
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "MovieService.svc/Movies",
                        data: data,
                        dataType: "json",
                        success: insertCallback
                    });
                });

                function insertCallback(result) {
                    // unwrap result
                    var newMovie = result["d"];
                    // Show primary key
                    alert("Movie added with primary key " + newMovie.Id);
                }
            </script>
        </body>
        </html>

    jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method. You can download the JSON2 library from the following website: http://www.json.org/js.html. The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called, the new Movie is passed to this method, and the method simply displays the primary key of the new Movie.

    Summary

    The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article

  • SQL SERVER - 2008 - Introduction to Snapshot Database - Restore From Snapshot

    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the [...]
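    (For readers who want the shape of the feature, a hedged T-SQL sketch — database and file names are illustrative; the CREATE ... AS SNAPSHOT and RESTORE ... FROM DATABASE_SNAPSHOT forms are the standard syntax:)

        -- Create a read-only, static snapshot of MyDB.
        CREATE DATABASE MyDB_Snap
        ON (NAME = MyDB_Data, FILENAME = 'D:\Data\MyDB_Snap.ss')
        AS SNAPSHOT OF MyDB;

        -- Revert the source database to the snapshot's point in time.
        RESTORE DATABASE MyDB FROM DATABASE_SNAPSHOT = 'MyDB_Snap';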

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well-known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information.

    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.

    To update a database located on an Oracle Database server manually:

    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in Repository.WS in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.

    To update a database located on a Microsoft SQL server manually:

    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed.
    4. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    5. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script. Note: The database name is the initial catalog in the connection string in the web.config file.

    Editor's note: The database update fixes a problem with permissions where the permissions for a user will be incorrectly updated when a group that the user was removed from has their permissions changed.

    Read the article

  • Oracle OpenWorld 2012: Focus On Database Security

    - by Troy Kitch
    Oracle OpenWorld 2012 is going to be the place to learn about Oracle Database Security solutions including Oracle Advanced Security with transparent data encryption, Database Vault, Audit Vault and Database Firewall, Label Security, and more. We've put together this Focus On Database Security document so you'll know when and where to attend the key database security sessions, and not miss a thing. 

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

    Why refactor a database

    To know why to refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point for having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows, or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database

    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects

    Adding, removing or changing table columns in any way, adding constraints, keys, etc... All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate-table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures

    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code

    We all have some bad code in our systems. We usually refer to such code as code smells, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc... Each of those is a huge code smell and can result in major code changes. Take SELECT * for example: if we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail; go read about it, it's informative. Refactoring this includes replacing the * with column names, and most likely changes to the application using the database (a hedged illustration follows after this post's last section).

    Breaking apart huge stored procedures

    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior and then starting to refactor. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
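    (A hedged illustration of the SELECT * view refresh problem mentioned above — the object names are made up, but the stale-metadata behavior and sp_refreshview are standard SQL Server:)

        -- A view defined with SELECT * captures the column list at creation time:
        CREATE VIEW dbo.vOrders AS SELECT * FROM dbo.Orders;

        -- After the table changes, the view still returns the OLD column list...
        ALTER TABLE dbo.Orders ADD Discount DECIMAL(9,2) NULL;

        -- ...until its metadata is refreshed:
        EXEC sp_refreshview 'dbo.vOrders';

        -- The refactored form names its columns, making changes visible at compile time:
        ALTER VIEW dbo.vOrders AS
        SELECT OrderId, CustomerId, OrderDate, Discount FROM dbo.Orders;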

    Read the article

  • New White Paper about Upgrade to Oracle Database 12c

    - by Mike Dietrich
    With the release of Oracle Database 12c, a lot of new collateral is becoming available, including our new white paper: Upgrading to Oracle Database 12c. This white paper outlines the methods available for you to upgrade and migrate your database to Oracle Database 12c. Learn about different use cases and key factors to consider when choosing the method that best fits your requirements. And if you'd like to have a look at the new Oracle 12c documentation, please find it here: Oracle Database 12c Documentation. -Mike

    Read the article

  • New database security product: Audit Vault and Database Firewall, at a significantly lower price

    - by user645740
    Oracle has merged the Audit Vault and Database Firewall products, making a higher level of database security available to an even wider range of users. The new product, Oracle Audit Vault and Database Firewall (AVDF), is now available at a more favorable price. For viewing reports, it includes a restricted-use license for Business Intelligence Publisher. The data collector and management server components are strongly protected; the following are usable under restricted use on the Audit Vault Server and Database Firewall servers: Oracle Database Enterprise Edition, Database Vault, Partitioning, Advanced Compression, and Advanced Security.

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news.

    Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT - that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they - it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there - but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now.

    All that being said, delivery of database unit testing for SSDT is now with us, and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above. Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. See here the result of such an operation:

        SELECT *
        FROM bi.ProcessMessageLog pml
        INNER JOIN bi.[LogMessageType] lmt
            ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
        WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
        AND lmt.[LogMessageType] = 'Warning';

    which is obviously not ideal. Thankfully that seems to have been solved with this latest release.

    One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly" - I was being kind when I said that! @Jamiet

    Read the article

  • OpenWorld - Database Security Demonstrations in Moscone South Left

    - by Troy Kitch
    All this week, Oracle security experts will be giving live product demos of Oracle Database Security solutions in Moscone South Left, in the Oracle DEMOgrounds for "database." Demonstrations include Oracle Database Defense-in-Depth Security, Database Application Data Redaction, Transparent Data Encryption, Oracle Audit Vault and Database Firewall, Data Masking and Data Subsetting. Don't miss it!

    Read the article

  • Simple Steps to Prepare Mirror Database for Mirroring in SQL Server

    To prepare a database for mirroring, you need to perform the following steps: Script the restore of the latest full database backup, script the restore of every transaction log backup that has been made after that full database backup, copy the full database backup and transaction log backups to the mirror server, and run the restore scripts on the mirror server. In this tip I will walk through these steps and provide sample scripts to prepare a database for mirroring.
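    (A hedged sketch of the restore sequence the tip describes — file names and paths are illustrative; the key detail is that every restore on the mirror uses WITH NORECOVERY so the database stays in a restoring state until mirroring is established:)

        -- On the mirror server:
        RESTORE DATABASE MyDB
            FROM DISK = 'D:\Backups\MyDB_full.bak'
            WITH NORECOVERY;

        RESTORE LOG MyDB
            FROM DISK = 'D:\Backups\MyDB_log_1.trn'
            WITH NORECOVERY;
        -- ...repeat for every subsequent log backup, all WITH NORECOVERY.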

    Read the article

  • Different Ways to Restore a SQL Server Database

    This article describes the SQL Server database restore principles for full backups, differential backups and transaction log backups, and how to perform the restores to get to a particular point in time. This tip describes SQL Server database restore principles on a database that is using the FULL recovery model.
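    (A hedged point-in-time restore sketch consistent with those principles — the FULL recovery model is required; file names and the target timestamp are illustrative assumptions:)

        RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak' WITH NORECOVERY;
        RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_diff.bak' WITH NORECOVERY;
        RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log.trn'
            WITH STOPAT = '2012-12-01T10:30:00', RECOVERY;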

    Read the article

  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in an MS Word document. The basic idea is to create a Python algorithm to iterate over the information, retrieving the name of each PROCESS and queuing it for storage in a database. Example metadata:

        Process: Process Walker (1965)
        Exact reference: Walker Process Equipment., Inc. v. Food Machinery Corp.
        Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
        Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
        Parties: Walker Process Equipment, Inc.
        Sector: Systems is ...
        Start Date: October 12-13 Arguedas, 1965
        Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
        Report of the evolution process: petitioner, in answer to respond...
        Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing information like the above. I have in mind implementing an algorithm in Python that can break up this information sequence and store it in a web database (an open-source application that I'm looking for) in order to allow free consultation.
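    (A hedged sketch of the parsing step, assuming the Word document is first saved as plain text — python-docx could read the .docx directly — and that every record starts with a 'Process:' field as in the example; the field names are taken from the sample above:)

        import re

        FIELDS = ('Process', 'Exact reference', 'Link', 'Type of procedure', 'Parties',
                  'Sector', 'Start Date', 'Summary', 'Report of the evolution process',
                  'Importance')
        FIELD_RE = re.compile(r'^(%s):\s*(.*)$' % '|'.join(FIELDS))

        def parse_records(text):
            """Yield one dict per record; a new 'Process:' line starts a new record."""
            current, last_field = {}, None
            for line in text.splitlines():
                m = FIELD_RE.match(line)
                if m:
                    field, value = m.groups()
                    if field == 'Process' and current:
                        yield current
                        current = {}
                    current[field] = value
                    last_field = field
                elif line.strip() and last_field:
                    # Continuation of a multi-line field such as Summary.
                    current[last_field] += ' ' + line.strip()
            if current:
                yield current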

    Read the article

  • Does anyone have database, programming language/framework suggestions for a GUI point-of-sale system?

    - by Jason Down
    Our company has a point-of-sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense for our customers due to multi-user license fees for database connections, etc. After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product, at least for the time being). We are looking for a couple of things:

    1. Create the system with a nice GUI front end (it is currently CHUI, and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
    2. Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag; the bells and whistles would be available for those that want them.
    3. Use proper design patterns to make the product easy to add to or change at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database, so small changes in the database require lots of code recompiling.

    Our new system will be Linux based, with the possibility of a client application providing functionality from one or more Windows boxes. So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation. To summarize, we are looking to do the following:

    - Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
    - Deliver a solution that is easily maintainable and supportable.
    - Provide a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would be a ton of cash to convert over to PCs).

    Suggestions would be appreciated. Thanks. [UPDATE] I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point-of-sale systems... that would just be a bonus).

    Read the article
