Search Results

Search found 30412 results on 1217 pages for 'spatial database'.


  • C#, Manage concurrency in database access

    - by Goul
    Hi there, I wrote an application a while ago that multiple users use to handle trade creation. I haven't done development for some time and can't remember how I managed the concurrency between users, so I would like your advice on the design. The application was as follows:

    - One heavy client per user
    - A single database
    - Each user inserts/updates/deletes trades in the database
    - A grid in the application reflecting the trades table, updated each time someone changes a deal

    My questions:

    1. Can you confirm that I shouldn't worry about the database connection in each application instance? Since each client holds a singleton, I would expect one connection per client with no issue.
    2. How do I prevent concurrent accesses from conflicting? I assume I should lock when modifying the data, but I don't remember how.
    3. How can the grid be updated automatically whenever the database changes (by another user, for example)?

    Thank you in advance for your help!
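
    For question 2, one common pattern is optimistic concurrency. A minimal sketch in T-SQL, assuming SQL Server and a made-up Trades table (the post names neither): an update succeeds only if the row is unchanged since the client read it.

        -- hypothetical trades table: add a rowversion column the server bumps on every change
        ALTER TABLE Trades ADD RowVer rowversion;

        -- the client reads TradeId, Price and RowVer, lets the user edit, then:
        DECLARE @TradeId int = 42, @NewPrice money = 101.25, @OriginalRowVer binary(8) = 0x00000000000007D1;

        UPDATE Trades
        SET    Price = @NewPrice
        WHERE  TradeId = @TradeId
          AND  RowVer  = @OriginalRowVer;   -- matches only if nobody changed the row in between

        IF @@ROWCOUNT = 0
            RAISERROR('Trade was changed by another user; reload and retry.', 16, 1);

    For question 3, SqlDependency/Query Notifications on the client, or plain polling of a last-modified column, are the usual routes for refreshing the grid.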

    Read the article

  • Oracle 11g quirks No.2 - v$database.CURRENT_SCN

    - by Todd Bao
    On 11.2.0.3.0, the following query returns two different SCNs, whereas earlier releases such as 11.2.0.1.0 return the same SCN twice:

        select current_scn from v$database
        union all
        select current_scn from v$database;

    The reason is in the execution plan. Unlike 11.2.0.1.0, 11.2.0.3.0 rewrites the query so that X$KCCDI (one of the fixed tables underlying V$DATABASE, and the source of the CURRENT_SCN column) is joined from outside to the factorized UNION ALL branches:

        ----------------------------------------------------
        | Id  | Operation            | Name               |
        ----------------------------------------------------
        |   0 | SELECT STATEMENT     |                    |
        |   1 |  MERGE JOIN CARTESIAN|                    |
        |*  2 |   FIXED TABLE FULL   | X$KCCDI            |
        |   3 |   BUFFER SORT        |                    |
        |   4 |    VIEW              | VW_JF_SET$6E0AEE5B |
        |   5 |     UNION-ALL        |                    |
        |   6 |      FIXED TABLE FULL| X$KCCDI2           |
        |   7 |      FIXED TABLE FULL| X$KCCDI2           |
        ----------------------------------------------------

    In other words, on 11.2.0.3.0 each row the query returns picks up its current_scn from a separate evaluation of X$KCCDI's dicur_scn (current_scn) column. Some observations:

    a. However many UNION ALL branches there are, only the X$KCCDI2 branches (the other fixed table behind V$DATABASE) are collapsed into the VIEW; X$KCCDI is joined to it from outside, so the SCN can move from one fetched row to the next:

        SYS@fmw//Scripts> run
          1  select current_scn from v$database
          2  union all select current_scn from v$database
          3  union all select current_scn from v$database
          4* union all select current_scn from v$database

        CURRENT_SCN
        -----------
            5074384
            5074385
            5074385
            5074385

        4 rows selected.

    Note how the first row carries an "older" SCN than the rows that follow: each evaluation of X$KCCDI picks up whatever the SCN is at that moment.

    b. It is not specific to querying v$database by itself; a join shows it too:

        SYS@fmw//Scripts> run
          1  select current_scn,status from v$database,v$instance
          2  union all
          3* select current_scn,status from v$database,v$instance

        CURRENT_SCN + STATUS
        ----------- + ------------------------
            5075463 + OPEN
            5075464 + OPEN

        2 rows selected.

    c. Nor is it specific to UNION ALL; a self cartesian join returns two different SCNs within a single row:

        SYS@fmw//Scripts> run
          1* select a.current_scn,b.current_scn from v$database a,v$database b

        CURRENT_SCN + CURRENT_SCN
        ----------- + -----------
            5078328 +     5078329

        1 row selected.

    d. Finally, the same behavior reproduces against X$KCCDI directly (if you can query the X$ tables rather than just the V$ views), which confirms that V$DATABASE merely inherits it:

        SYS@fmw//Scripts> run
          1  select dicur_scn from x$kccdi
          2* union all select dicur_scn from x$kccdi

        DICUR_SCN
        --------------------------------
        5082183
        5082184

        2 rows selected.

        SYS@fmw//Scripts> run
          1* select a.dicur_scn,b.dicur_scn from x$kccdi a,x$kccdi b

        DICUR_SCN                        + DICUR_SCN
        -------------------------------- + --------------------------------
        5082913                          + 5082914

        1 row selected.

    In short, per Todd Bao: every evaluation of X$KCCDI.dicur_scn fetches the SCN current at that instant, so a single query can return V$DATABASE.CURRENT_SCN alongside what is effectively the "next" SCN. (All demos above were run on 11.2.0.3.)

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur: no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One method is to use an older database backup that contains the records or database object structure you want to revert to; for that you must have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using:

    - An option in SQL Server Management Studio
    - T-SQL, or
    - ApexSQL Log

    The first two solutions have been described in this article. Their disadvantage is that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back - the ones you want to undo and the ones you don't.

    How do you easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria, so you don't need to worry about the other database changes you don't want to roll back.

    ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that let you roll back only specific data changes. First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log shows its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups reaching back to the moment the change occurred. In this example, we will use only the online transaction log.

    In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides the time of the transaction, ApexSQL Log can filter by:

    - Operation type
    - Table name: you can select only the tables affected by the changes you want to roll back. If you're not certain which tables were affected, leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back.
    - Transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description.

    When you have set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions.

    The results contain information about the transaction, as well as who made it and when. For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

        INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
        VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage: it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
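
    For reference, the T-SQL route mentioned above is a point-in-time restore; a minimal sketch (database and backup file names are made up, and note that STOPAT discards everything after that moment, which is exactly the drawback described):

        RESTORE DATABASE SalesDb FROM DISK = 'D:\Backups\SalesDb_full.bak'
            WITH NORECOVERY, REPLACE;
        RESTORE LOG SalesDb FROM DISK = 'D:\Backups\SalesDb_log.trn'
            WITH STOPAT = '2014-05-01 10:15:00', RECOVERY;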

    Read the article

  • PostgreSQL vs Cassandra vs MongoDB vs Voldemort?

    - by ramonrails
    Which database should we decide on? Any comparisons?

    Existing: PostgreSQL. Issues:

    - Not easily scalable horizontally; needs sharding etc.
    - Clustering does not solve the data-growth problem

    Looking for: any database that is easily horizontally scalable.

    - Cassandra (Twitter uses it?)
    - MongoDB (rapidly gaining popularity)
    - Voldemort
    - Other?

    Why?

    - Data is growing with a snowball effect
    - The existing PostgreSQL periodically locks tables etc. for vacuum tasks
    - Archiving data is currently tedious
    - Human interaction is involved in the existing periodic archive/vacuum process
    - We need a "set it, forget it, just add another server when data grows" type of solution

    Read the article

  • What file format/database format does Picasa use?

    - by Raymond
    I am trying to figure out what file format the .db and .pmp files are. I tried using db_dump (Berkeley DB) for the .db files, but it seems that they are not Berkeley DB, or are of an older version. I have no idea what the .pmp files are.

        Directory of C:\Users\me\AppData\Local\Google\Picasa2\db3
        6/09/2010  08:07 PM           303,748 imagedata_uid64.pmp
        1/18/2010  10:34 PM             4,885 imagedata_unification_lhlist.pmp
        6/09/2010  10:55 PM           155,752 imagedata_width.pmp
        6/09/2010  10:55 PM     1,286,346,614 previews_0.db
        6/10/2010  10:06 AM           467,168 previews_index.db

    Any help appreciated.

    Read the article

  • Daemon/software that takes changes from a SQL database and applies them to Unix config files

    - by Dude Man
    I was wondering if there is a Unix daemon available that is capable of something like this: an admin adds an IP entry to a database; after a wait interval the daemon finds the change and manipulates ifconfig/config files. I was thinking maybe there is a plugin for cfengine that might be able to do this, but I couldn't find any. This would be a fairly easy thing to script up in Perl, but why reinvent the wheel if there's already something out there better than what my limited programming abilities can produce? Lastly, if it worked on FreeBSD, that'd be great.
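
    Whatever tool ends up applying the changes, it needs something cheap to poll. A minimal sketch of the database side, with made-up table and column names (:last_applied stands for whatever timestamp the daemon remembered from its previous pass):

        -- table the admin inserts IP entries into
        CREATE TABLE ip_entries (
            id          INTEGER PRIMARY KEY,
            ip_address  VARCHAR(45) NOT NULL,   -- 45 chars also fits IPv6
            updated_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- on each pass, the daemon picks up only rows changed since its last run
        SELECT id, ip_address
        FROM   ip_entries
        WHERE  updated_at > :last_applied;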

    Read the article

  • MySQL Database Replication and Server Load

    - by Willy
    Hi everyone, I have an online service with around 5000 MySQL databases. I am interested in building a development area in my office with exactly the same environment, so I am about to set up MySQL replication between my live MySQL server and the development MySQL server. My concern is the load this will put on the live MySQL server once replication is started. Do you have any experience with this? Will this process cause extra load on my production server? Thanks, have a nice weekend.
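
    For context, with MySQL's built-in asynchronous replication the master's ongoing cost is mostly writing and serving the binary log, which is comparatively light; the heavy one-off step is taking the initial snapshot. A hedged sketch of the usual setup on 5.x (host names, account, and log coordinates are placeholders):

        -- on the production master (requires log-bin enabled, which is where
        -- most of the extra steady-state write load comes from):
        GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dev.example.com' IDENTIFIED BY 'secret';

        -- on the development replica, after loading a dump taken with --master-data:
        CHANGE MASTER TO
            MASTER_HOST     = 'production.example.com',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_LOG_FILE = 'mysql-bin.000042',   -- coordinates recorded in the dump
            MASTER_LOG_POS  = 107;
        START SLAVE;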

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support more languages etc. I have hit an issue that I am stuck on. A few notes on this migration:

    - We are using db2move to export and load the data and db2look to get the details of the database (tablespaces, tables, keys etc.).
    - We found the loading worked nicely with db2move import; however, the data takes 7 hours to load, and that was unacceptable downtime for when we actually complete the conversion on the main database.
    - We are now using db2move load, which is much faster, as it seems to simply throw the data in without integrity checks. That leads to my current issue.

    After completing the db2move load process, several tables are in a check-pending state and require integrity checks, done via:

        set integrity for <schema>.<table> immediate checked

    This works for most tables; however, some tables give an error:

        DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514

    The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table, as below:

        db2 create table blah_EXCEPTION like blah
        db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION

    NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. Well, that's just super, but I cannot lose data in this conversion; that is simply unacceptable. The internets and IBM give only a vague description of sending the violations to the exception table and then "dealing with the data" in it. Unfortunately, I am not clear on what this means, and I was hoping that some wise individual could let me know how I can retrieve this data from the exception tables and place it in the original/proper table rather than leaving it there. Let me know if you have any questions. Thanks!
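
    For what it's worth, because the exception table here was created with LIKE, it has exactly the same columns as the original, so "dealing with the data" can be as simple as the following sketch (blah is the post's placeholder; it assumes you first correct the values that violated the constraint):

        -- 1. inspect what was kicked out and which values the constraint rejected
        SELECT * FROM blah_EXCEPTION;

        -- 2. after correcting the offending values in blah_EXCEPTION,
        --    move the rows back and re-run the check
        INSERT INTO blah SELECT * FROM blah_EXCEPTION;
        DELETE FROM blah_EXCEPTION;
        SET INTEGRITY FOR blah IMMEDIATE CHECKED;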

    Read the article

  • In Visio 2010, how can I create a mandatory, non-identifying relationship between two database tables

    - by Cam Jackson
    I'm working in MS Visio 2010. This is the relevant part of my ERD: the relationship between Event and Adventure is correct: there's a foreign key from Event to Adventure, and that FK is part of Event's primary key. However, what I can't figure out is how to make the relationship line from Adventure to AccomodationType the same, without making that relationship part of the PK of Adventure. When I look at the 'Miscellaneous' properties of that relationship line, I want:

    - Cardinality: Zero or more
    - Relationship type: Non-identifying
    - Child has parent: Not optional (mandatory)

    But the checkbox for the third property is greyed out, and toggles between True/False as I make the relationship Non-identifying/Identifying. The only way I could figure out was to disconnect the two columns from the 'Definition' tab, which then un-greys the 'Optional' checkbox, but then I lose the foreign key property on the accomType column, and while the relationship symbols are correct, the line remains dotted. Any ideas, anyone?

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS, and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097 (bad IP address specified in the connection path). \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.

    Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each one as required by its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
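
    One hedged starting point that works on both 2005 and 2008 is to let the optimizer's own bookkeeping rank candidate indexes across all databases, and only hand-review the top offenders. A sketch (the weighting is arbitrary, and these DMV counters reset at instance restart):

        SELECT TOP 20
               DB_NAME(mid.database_id)                    AS database_name,
               mid.statement                               AS table_name,
               migs.user_seeks * migs.avg_user_impact      AS estimated_benefit,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns
        FROM   sys.dm_db_missing_index_details      AS mid
        JOIN   sys.dm_db_missing_index_groups       AS mig
               ON mig.index_handle = mid.index_handle
        JOIN   sys.dm_db_missing_index_group_stats  AS migs
               ON migs.group_handle = mig.index_group_handle
        ORDER BY estimated_benefit DESC;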

    Read the article

  • Simple script to back up PostgreSQL databases

    - by Mick
    Hello, I wrote a simple batch script to back up PostgreSQL databases, but I ran into one strange problem: can the pg_dump command specify a password? Here is the batch script:

        REM script to backup PostgreSQL databases
        @ECHO off
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
            SET dow=%%i
            SET month=%%j
            SET day=%%k
            SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%

        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FILENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FILENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FILENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FILENAME4=%db4%_%datestr%.sql
        ECHO Backup file name is %FILENAME1% , %FILENAME2% , %FILENAME3% , %FILENAME4%

        ECHO off
        REM pg_dump has no password switch; it reads the PGPASSWORD environment
        REM variable or %APPDATA%\postgresql\pgpass.conf, e.g. SET PGPASSWORD=yourpassword
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE !

    Please give me advice. Regards, Mick

    Read the article

  • How do MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of partial or whole tables? Or does it rely completely on the OS page cache? (I'm guessing it does, since Facebook's SSD cache built for MySQL was done at the OS level: https://github.com/facebook/flashcache/) Does Linux by default use all available RAM for the page cache? So if RAM size exceeds table size plus the memory used by processes, then when the MySQL server starts and reads the whole table for the first time, it reads from disk, and from that point on the whole table is in RAM? And so Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same size of RAM and database?
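
    For what it's worth, InnoDB does its own caching: the buffer pool holds both data and index pages, independently of the OS page cache (and with O_DIRECT it bypasses the page cache for data files entirely). A quick way to see what it is allowed to use and is actually using:

        -- how much memory the buffer pool may use (the default is small;
        -- on a dedicated box it is commonly set to a large share of RAM)
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

        -- whether InnoDB bypasses the OS page cache for data files
        SHOW VARIABLES LIKE 'innodb_flush_method';

        -- current usage: see the "BUFFER POOL AND MEMORY" section of the output
        SHOW ENGINE INNODB STATUS;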

    Read the article

  • SQL Server database on an external hard disk drive

    - by Achilles
    Due to some security concerns, my boss has asked me to store all sensitive data on external/removable storage like a USB stick or an external HDD, and this specifically includes the MDF/NDF/LDF files of the SQL Server 2008 instance we're running. I've been reading for the last three days with no luck in finding a solution. Is there any solution at all? Has anybody ever done such a thing?
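
    Mechanically it is possible, since SQL Server only needs a path it can reach at startup. A hedged sketch of moving an existing database's files to an external drive mounted as E: (database and logical file names are made up; whether the drive's reliability and latency are acceptable is a separate question):

        ALTER DATABASE SensitiveDb SET OFFLINE;

        -- copy the physical .mdf/.ldf files to E:\Data while the database is offline, then:
        ALTER DATABASE SensitiveDb MODIFY FILE (NAME = SensitiveDb_Data, FILENAME = 'E:\Data\SensitiveDb.mdf');
        ALTER DATABASE SensitiveDb MODIFY FILE (NAME = SensitiveDb_Log,  FILENAME = 'E:\Data\SensitiveDb_log.ldf');

        ALTER DATABASE SensitiveDb SET ONLINE;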

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192 GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server moving 450 MB/s at 25,000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of user mode and wait. Disk benchmark tools also produce very satisfying bandwidth and IO/s numbers. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. DB2's "list applications" reports that the query is Executing (not locked). IO: at most 10 MB/s, 500 IO/s; CPU: two cores in a 99.9% wait state, all other cores 100% idle. The tables the query reads from have been altered to LOCKSIZE=TABLE, so I would think lock-list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
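
    As one starting point - a sketch from memory, so verify the view and column names against the SYSIBMADM administrative views in the 9.7 documentation - the snapshot views can show whether that one application is burning its time on physical reads, sorts spilling to temp, or something else:

        -- per-application activity counters on DB2 9.7
        SELECT agent_id,
               rows_read,
               pool_data_p_reads,    -- physical data-page reads
               total_sorts,
               total_sort_time
        FROM   SYSIBMADM.SNAPAPPL;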

    Read the article

  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    Hi, my company has two Exchange 2010 SP1 servers in a DAG configuration, running Windows Server 2008 R2, in a test environment. Today I wanted to test my backups, so I restored the backup data to a different location, not the original one. I dismounted the database, deleted all files under the database location, and then copied the files from the backup location back to the database location. When I try to mount the database, the error below appears:

        --------------------------------------------------------
        Microsoft Exchange Error
        --------------------------------------------------------
        Failed to mount database 'mail2'.

        mail2
        Failed
        Error: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].

        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]

        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]

        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestion? Thanks!

    Read the article


  • Database implementation question?

    - by gundam
    consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 doubled sided platters, average seek time is 10 msec. Assume a block size of 1024-byte is selected. Assume a file that contains 100,000 records of 100-byte each is to be stored on the disk, and NONE of the reocd can be spanned 2 blocks. How many blocks are needed to store the entire file?? If the file is arranged sequentially on disk, how many surfaces are required?? Now, i have calculated that 10,000 blocks are needed to store 100,000 records. But i am not sure how to find out the answer of the surfaces required. I only calculated the capacity of track is 25KB and capacity of surface is 50,000 KB But I don't know how to calculate the number of surfaces... Could anyone help me how to get the answer? Thanks a lot!!

    Read the article


  • Limiting database security

    - by Torbal
    A number of texts state that the most important properties offered by a DBMS are availability, integrity, and secrecy. As part of a homework assignment I have been tasked with naming attacks that would affect each property. This is what I have come up with - are they any good?

    - Availability: DDoS attack
    - Secrecy: SQL injection attack
    - Integrity: use of trojans to gain access to objects with higher security roles

    Read the article
