Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • Creating a relative path to a Database in Asp.net for a library

    - by Greener
    In school I am part of a team of four working to create a GUI to translate the paper records of a made-up company, and their functionality, to a digital format. We're using an ASP.NET website for this purpose. Basically we use stored procedures and C# classes to represent the database. The folder we're using for the project contains the site and the libraries in separate folders. If I try to open the site from the folder containing both these elements, the site will not run. I want to know if there is some way I can set up a relative path to the database in the Settings.Settings.cs file of my libraries (or by some other means) so I don't have to change the database location in the connection string every time we move the project. I suppose I should also mention that the database is in an App_Data folder.

    Read the article

  • Database Insider - November 2012 issue

    - by Javier Puerta
    The November issue of the Database Insider newsletter is now available. (Full newsletter here)
    Mark Hurd: Oracle Database Wrap-up from Oracle OpenWorld 2012. Oracle executives kicked off Oracle OpenWorld 2012, discussing the needs of customers, the brand-new Oracle Exadata Database Machine X3, and the latest Oracle Database innovations. (Read More)
    Webcast: Introduction to Oracle Exadata Database Machine X3. Oracle's next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.
    Webcast: SAP Applications Run Better on Oracle Exadata. Find out why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings.

    Read the article

  • Design Advice Needed For Synonyms Database

    - by James J
    I'm planning to put together a database that can be used to query synonyms of words. The database will end up huge, so the idea is to keep things running fast. I've been thinking about how to do this, but my database design skills are not up to scratch these days. My initial idea was to have each word stored in one table, and then another table with a one-to-many relationship where each word can be linked to another word, and that table can be queried. The application I'm developing allows users to highlight a word and then type in, or select, some synonyms from the database for that word. The application learns from the user input, so if someone highlights "car" and types in "motor", the database would be updated to link the relationship if it doesn't exist already. What I don't want to happen is for a user to type in the word "shop" and link it to the word "car". So I'm thinking I will need to add some sort of weight to each relationship. Eventually the synonyms the users enter will be used so they can auto-select common synonyms used with a certain word. The lower-weight words will not be displayed, so "shop" could never be a synonym of "car" unless it had a very high weight, and chances are nobody is going to do that. Does the above sound right? Can you offer any suggestions or improvements?
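
    A minimal sketch of the two-table design described above, with a weight on each relationship; every table and column name here is an illustrative assumption, not taken from the question:

        -- Words are stored once; relationships carry a weight that user
        -- confirmations can increase (illustrative names throughout).
        CREATE TABLE words (
            word_id INT PRIMARY KEY,
            word    VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE synonyms (
            word_id    INT NOT NULL REFERENCES words (word_id),
            synonym_id INT NOT NULL REFERENCES words (word_id),
            weight     INT NOT NULL DEFAULT 1,   -- bumped each time a user confirms the pair
            PRIMARY KEY (word_id, synonym_id)
        );

        -- Suggesting synonyms for a highlighted word, hiding low-weight pairs.
        SELECT w2.word, s.weight
        FROM synonyms s
        JOIN words w1 ON w1.word_id = s.word_id
        JOIN words w2 ON w2.word_id = s.synonym_id
        WHERE w1.word = 'car'
          AND s.weight >= 5            -- illustrative threshold
        ORDER BY s.weight DESC;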

    Read the article

  • Export from a standalone database to an embedded database.

    - by jdana
    I have a two-part application, where there is a central database that is edited, and then at certain times the data is released and distributed as its own application. I would like to use a standalone database for the central database (MySQL, Postgres, Oracle, SQL Server, etc.) and then have a reliable export to an embedded database (probably SQLite) for distribution. What tools/processes are available for such an export, or is it a practice to be avoided? EDIT: A couple of additional pieces of information. The distributed application should be able to run without having to connect to another server (ex: your spellchecker still works even if you don't have internet), and I don't want to install a full DB server for read-only access to the data.

    Read the article

  • Updating an Access 2000 database through code in VB6

    - by Mark
    I have an application that uses an Access 2000 database, currently in distribution. I need to update one of the recordsets with additional fields on my customers' computers. My data controls work fine, as I have them set to connect in Access 2000 format, but when I try to open the database in code, I get an unrecognized data format error. What is the best way to replace or add to the database on their machines? Any help is greatly appreciated.

    Read the article

  • C#, Manage concurrency in database access

    - by Goul
    Hi there, I wrote an application a while ago that is used by multiple users to handle trade creation. I haven't done development for some time now and can't remember how I managed the concurrency between the users, so I would like your advice in terms of design. The application was as follows:
    - One heavy client per user
    - A single database
    - Access to the database for each user to insert/update/delete trades
    - A grid in the application reflecting the trades table, updated each time someone changes a deal
    My questions:
    1. Do you confirm I shouldn't worry about the connection to the database for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
    2. How do I prevent conflicts from concurrent access? I guess I should lock when modifying the data, but I don't remember how to.
    3. How can the grid be automatically updated whenever the database is changed (by another user, for example)?
    Thank you in advance for your help!
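
    For question 2, one common pattern is optimistic concurrency with a version column on the trades table; this is a generic sketch under that assumption, not necessarily how the original application handled it, and all column names are made up:

        -- Each trade row carries a version counter. A client reads the row,
        -- remembers the version, and the UPDATE succeeds only if the version
        -- is still the one it read; zero affected rows means another user
        -- changed the trade first and the client should reload and retry.
        UPDATE trades
        SET    price       = 101.25,
               quantity    = 500,
               row_version = row_version + 1
        WHERE  trade_id    = 42
          AND  row_version = 7;   -- the version the client originally read

        -- If the affected-row count is 0, reload the trade and either retry
        -- or surface a "trade was modified by another user" message in the grid.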

    Read the article

  • Oracle 11g, No.2 - v$database.CURRENT_SCN

    - by Todd Bao
    On 11.2.0.3.0, the following query returns two different SCN values:

        select current_scn from v$database
        union all
        select current_scn from v$database;

    whereas on 11.2.0.1.0 it returns the same SCN twice. Unlike 11.2.0.1.0, the 11.2.0.3.0 plan fetches dicur_scn (the column behind CURRENT_SCN) from X$KCCDI, the fixed table underlying V$DATABASE, separately for each reference, which is why different SCNs come back:

        ----------------------------------------------------
        | Id  | Operation            | Name               |
        ----------------------------------------------------
        |   0 | SELECT STATEMENT     |                    |
        |   1 |  MERGE JOIN CARTESIAN|                    |
        |*  2 |   FIXED TABLE FULL   | X$KCCDI            |
        |   3 |   BUFFER SORT        |                    |
        |   4 |    VIEW              | VW_JF_SET$6E0AEE5B |
        |   5 |     UNION-ALL        |                    |
        |   6 |      FIXED TABLE FULL| X$KCCDI2           |
        |   7 |      FIXED TABLE FULL| X$KCCDI2           |
        ----------------------------------------------------

    a. With more UNION ALL branches (the other V$DATABASE columns come from X$KCCDI2, which goes into the factorized VIEW that is then joined to X$KCCDI), only the first value differs from the rest:

        SYS@fmw//Scripts> run
          1  select current_scn from v$database
          2  union all select current_scn from v$database
          3  union all select current_scn from v$database
          4* union all select current_scn from v$database

        CURRENT_SCN
        -----------
            5074384
            5074385
            5074385
            5074385

        4 rows selected.

    b. The same thing happens when joining to another view:

        SYS@fmw//Scripts> run
          1  select current_scn,status from v$database,v$instance
          2  union all
          3* select current_scn,status from v$database,v$instance

        CURRENT_SCN + STATUS
        ----------- + ------------------------
            5075463 + OPEN
            5075464 + OPEN

        2 rows selected.

    c. And with a self-join of v$database:

        SYS@fmw//Scripts> run
          1* select a.current_scn,b.current_scn from v$database a,v$database b

        CURRENT_SCN + CURRENT_SCN
        ----------- + -----------
            5078328 +     5078329

        1 row selected.

    so the behaviour is not specific to UNION ALL.

    d. Finally, querying X$KCCDI directly (normally you would go through V$DATABASE rather than the X$ fixed table) shows the same behaviour:

        SYS@fmw//Scripts> run
          1  select dicur_scn from x$kccdi
          2* union all select dicur_scn from x$kccdi

        DICUR_SCN
        --------------------------------
        5082183
        5082184

        2 rows selected.

        SYS@fmw//Scripts> run
          1* select a.dicur_scn,b.dicur_scn from x$kccdi a,x$kccdi b

        DICUR_SCN                        + DICUR_SCN
        -------------------------------- + --------------------------------
        5082913                          + 5082914

        1 row selected.

    Todd Bao's conclusion: each additional fetch of X$KCCDI returns a slightly newer SCN, so when V$DATABASE.CURRENT_SCN is referenced more than once in a statement you effectively see the "next SCN". Note: the demos above were run on 11.2.0.3.

    Read the article

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem them with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert, I am not, so I need someone to explain this little phenomenon: when I create 1,000 Cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds, right? Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards, more as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. If you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at creating. Meanwhile it flies through the uniqueness tests. So my question to the stackoverflow community is: why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code and it says 5.5 hours. That may be completely wrong, I'm not sure, but if it stays on the line I curve-fitted, it would be right around there.)
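
    On the "how would you optimize this" part, one database-side option (separate from the Ruby above, and with table and column names that are only guesses from the description) is to batch the new rows into multi-row INSERT statements instead of issuing one INSERT per card, so the per-statement round trip and parse cost is paid once per batch rather than once per row:

        -- Illustrative sketch: insert a batch of generated codes in one statement.
        -- A unique index on cards(code) still enforces code uniqueness.
        INSERT INTO cards (bundle_id, code)
        VALUES (1, 'K7F3QX9D'),
               (1, 'P2MWRT5H'),
               (1, 'Z8NBLC4V');
        -- Repeat in batches of a few hundred to a few thousand rows,
        -- ideally inside a single transaction per batch.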

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One method that can be used in this situation is to restore an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using:
    - An option in SQL Server Management Studio
    - T-SQL, or
    - ApexSQL Log
    The first two solutions have been described in this article. The disadvantage of these methods is that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back, the ones you want to undo and the ones you don't. How to easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about all the other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that let you roll back only specific data changes. First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups going back to the moment when the change occurred. In this example, we will use only the online transaction log. In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by operation type and table name, as well as by transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back. When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.
    For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the selected DELETE statement, the undo script is:

        INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
        VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • PostgreSQL vs Cassandra vs MongoDB vs Voldemort?

    - by ramonrails
    Which database should we decide on? Any comparisons?
    Existing: PostgreSQL. Issues:
    - Not easily scalable horizontally; needs sharding etc.
    - Clustering does not solve the data growth problem
    Looking for: any database that is easily horizontally scalable.
    - Cassandra (Twitter uses that?)
    - MongoDB (rapidly gaining popularity)
    - Voldemort
    - Other?
    Why?
    - Data growing with a snowball effect
    - The existing PostgreSQL locks tables etc. for vacuum tasks periodically
    - Archiving data is currently tedious
    - Human interaction is involved in the existing archive, vacuum, ... process periodically
    - Need a 'set it, forget it, just add another server when data grows more' type of solution

    Read the article

  • What file format/database format does Picasa use?

    - by Raymond
    I am trying to figure out what file format the .db files and .pmp files are. I tried using db_dump (Berkeley DB) for the .db files, but it seems that they are not Berkeley DB, or of an older version. I have no idea what the .pmp files are.

        Directory of C:\Users\me\AppData\Local\Google\Picasa2\db3
        6/09/2010  08:07 PM        303,748  imagedata_uid64.pmp
        1/18/2010  10:34 PM          4,885  imagedata_unification_lhlist.pmp
        6/09/2010  10:55 PM        155,752  imagedata_width.pmp
        6/09/2010  10:55 PM  1,286,346,614  previews_0.db
        6/10/2010  10:06 AM        467,168  previews_index.db

    Any help appreciated.

    Read the article

  • Daemon/Software that takes changes from sql database and applies them to unix config files

    - by Dude Man
    I was wondering if there was a Unix daemon available that would be capable of something like this: admin adds an IP entry to a DB; daemon finds the change after a wait interval and manipulates ifconfig/config files. I was thinking maybe there is a plugin for cfengine that might be able to do this, but I couldn't find any. I mean, this would be a fairly easy thing to script up in Perl, but why re-invent the wheel if there's already something out there that is better than what my limited programming abilities can make. Lastly, if it worked on FreeBSD that'd be great.

    Read the article

  • MySQL Database Replication and Server Load

    - by Willy
    Hi everyone, I have an online service with around 5000 MySQL databases. Now, I am interested in building a development area with exactly the same environment in my office, so I am about to set up MySQL replication between my live MySQL server and the development MySQL server. But my concern is the load that will occur on my live MySQL server once replication is started. Do you have any experience? Will this process cause extra load on my production server? Thanks, have a nice weekend.

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages, etc. I am encountering an issue that I am stuck on. A few notes on this migration: we are using db2move to export and load the data, and db2look to get the details of the database (tablespaces, tables, keys, etc.). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load, and this was unacceptable downtime for when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check pending state and require integrity checks. Integrity checks are done via the following:

        set integrity for <schema>.<table> immediate checked

    This works for most tables; however, some tables give an error:

        DB21034E  The command was processed as an SQL statement because it was not a
        valid Command Line Processor command. During SQL processing it returned:
        SQL3603N  Check data processing through the SET INTEGRITY statement has found
        integrity violation involving a constraint with name
        "blah.SQL120124110232400". SQLSTATE=23514

    The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below):

        db2 create table blah_EXCEPTION like blah
        db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION

    NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. Well that's just super, buuuuuut I cannot lose data in this conversion; it's simply unacceptable. The internets and IBM have only a vague description of sending the violations to the exception tables and then "dealing with the data" that is in the exception table. Unfortunately, I am not clear on what this means, and I was hoping that some wise individual knows and could let me know how I can retrieve this data from these tables and place it in the original/proper table rather than leave it in these exception tables. Let me know if you have any questions. Thanks!
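
    For the "retrieve this data and place it back in the proper table" part, a hedged sketch of the usual workflow: because the exception table was created with LIKE, it has exactly the same columns as the base table, so once the violating values are fixed the rows can be copied straight back. The repair UPDATE and its column name are purely illustrative:

        -- 1. Inspect the rows that SET INTEGRITY moved out of the base table.
        SELECT * FROM blah_EXCEPTION;

        -- 2. Repair whatever violates the constraint (illustrative column and condition).
        UPDATE blah_EXCEPTION SET some_column = 'corrected value' WHERE some_column IS NULL;

        -- 3. Copy the repaired rows back; same column layout, so SELECT * works.
        --    The constraint is active again on the base table, so any rows that
        --    still violate it are rejected at insert time instead of being lost silently.
        INSERT INTO blah SELECT * FROM blah_EXCEPTION;

        -- 4. Clean up once everything has been copied back.
        DELETE FROM blah_EXCEPTION;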

    Read the article

  • In Visio 2010, how can I create a mandatory, non-identifying relationship between two database tables

    - by Cam Jackson
    I'm working in MS Visio 2010. This is the relevant part of my ERD: the relationship between Event and Adventure is correct: there's a foreign key from Event to Adventure, and that FK is part of Event's primary key. However, what I can't figure out is how to make the relationship line from Adventure to AccomodationType be the same, without making that relationship part of the PK of Adventure. When I look at the 'Miscellaneous' properties of that relationship line, I want it to be:
    - Cardinality: Zero or more
    - Relationship type: Non-identifying
    - Child has parent: Not optional (mandatory)
    But the checkbox for the third property is greyed out, and toggles between True/False as I make the relationship Non-identifying/Identifying. The only way around it I could figure out was to disconnect the two columns from the 'Definition' tab, which then un-greys the 'Optional' checkbox, but then I lose the foreign key property on the accomType column, and while the relationship symbols are correct, the line remains dotted. Any ideas, anyone?

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get Error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron. Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?

    Read the article

  • Recommendation for tuning 100's of SQL Databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index / profile / tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each one as its usage patterns require. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
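
    One starting point that scales to hundreds of catalogs on SQL Server 2005/2008 is to query the missing-index DMVs, which the optimizer populates from each database's real workload; the sketch below only surfaces candidates (the threshold is arbitrary and nothing is created automatically):

        -- Missing-index suggestions accumulated by the optimizer since the last
        -- instance restart, across all databases on the instance (SQL Server 2005+).
        SELECT TOP (50)
               DB_NAME(mid.database_id)       AS database_name,
               mid.statement                  AS table_name,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks,
               migs.avg_user_impact
        FROM sys.dm_db_missing_index_details     AS mid
        JOIN sys.dm_db_missing_index_groups      AS mig
          ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS migs
          ON migs.group_handle = mig.index_group_handle
        WHERE migs.user_seeks > 100              -- ignore rarely requested candidates
        ORDER BY migs.user_seeks * migs.avg_user_impact DESC;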

    Read the article

  • simple script to backup PostgreSQL database

    - by Mick
    Hello, I wrote a simple batch script to back up PostgreSQL databases, but I ran into one question: can the pg_dump command be given a password? Here is the batch script:

        REM script to backup PostgreSQL databases
        @ECHO off
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
            SET dow=%%i
            SET month=%%j
            SET day=%%k
            SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%

        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FIlLENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FIlLENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FIlLENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FIlLENAME4=%db4%_%datestr%.sql
        ECHO Backup file name is %FIlLENAME1% , %FIlLENAME2% , %FIlLENAME3% , %FIlLENAME4%

        ECHO off
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE !

    Please give me advice. Regards, Mick

    Read the article

  • How does MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of part or whole tables? Or does it rely completely on the OS page cache (I'm guessing that it does, since Facebook's SSD cache that was built for MySQL was done at the OS level: https://github.com/facebook/flashcache/)? Does Linux by default use all of the available RAM for the page cache? So if RAM size exceeds table size plus memory used by processes, then when the MySQL server starts and reads the whole table for the first time it will be from disk, and from that point on the whole table is in RAM? So using Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same size RAM and database?
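
    For what it's worth, InnoDB caches data and index pages in its own buffer pool rather than relying only on the OS page cache; a quick way to see how it is sized and how often reads are served from memory (standard MySQL statements, output not shown):

        -- Size of the InnoDB buffer pool in bytes (the main InnoDB memory cache).
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

        -- Logical read requests vs. reads that had to go to disk; a large gap
        -- means most of the working set is being served from the buffer pool.
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';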

    Read the article
