Search Results

Search found 107092 results on 4284 pages for 'advantage database server'.

Page 42/4284 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • FAQ and Best Tips Regarding Learning Databases?

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases, what would be a good database to learn: Oracle vs. SQL Server vs. MySQL vs. PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL, and frankly I am confused about which to start with. Are these very different, in the sense that if one had to switch, would the exposure to one be counter-productive to learning the other? Is working with databases heavily platform dependent? What exactly do people mean by database programming vs. administration? Do people choose databases based on the programming language used for the application being developed? In general, when working with databases, is it implicit that we work with some server? Does the choice of database differ when it comes to game development? If so, by what factors does it differ? What are the best tips you have found to be useful when learning databases? Edit: Some related questions I had and found on SO: What should every developer know about databases? Which database if learning from scratch in 2010? For a beginner, is there much difference between MySQL and PostgreSQL? What RDBMS should I learn/use? (MySQL/SQL Server/Oracle etc.) To what extent should a developer learn databases? How are database programmers different from other programmers? What kinds of databases are used in games?

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
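    If a per-object bcp export/import really is required, the commands can be generated rather than written by hand. Below is a minimal sketch, not a tested solution: it assumes an ODBC DSN (here called ASE_DSN) pointing at the ASE server, and the database name, login, server name and output directory are hypothetical placeholders. The same loop, run against the newly created database with "in" instead of "out", covers the reload side.

    using System;
    using System.Data.Odbc;
    using System.Diagnostics;

    class GenerateBcpCommands
    {
        static void Main()
        {
            // Hypothetical connection details -- adjust the DSN, credentials and database name.
            const string connStr = "DSN=ASE_DSN;UID=sa;PWD=secret;DATABASE=mydb";
            const string outputDir = @"C:\ase_export";

            using (var conn = new OdbcConnection(connStr))
            {
                conn.Open();

                // User tables in a Sybase ASE database are listed in sysobjects with type 'U'.
                var cmd = new OdbcCommand("select name from sysobjects where type = 'U' order by name", conn);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string table = reader.GetString(0);

                        // bcp the table out of the old database; a matching "bcp ... in" against
                        // the freshly created database is generated the same way.
                        string args = string.Format(
                            "mydb..{0} out {1}\\{0}.dat -n -Usa -Psecret -SMYSERVER",
                            table, outputDir);

                        Console.WriteLine("bcp " + args);           // review the generated commands first
                        // Process.Start("bcp", args).WaitForExit(); // then actually run them
                    }
                }
            }
        }
    }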

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about with Oracle IRM relates to the error message "Unable to Connect to Offline database". This error is a result of how Oracle IRM protects the cached rights on the local machine; if that cache has become invalid in any way, this error is thrown.

    Offline rights and security

    First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution; it demonstrates our methodology of ensuring that solutions address both security and usability, and puts the balance of the two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be anywhere from under an hour to years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash.

    So when a user is online, the Oracle IRM Desktop transparently communicates with the server in the background and updates the user's rights and offline periods. This synchronization period is determined by the server and communicated to all IRM Desktops, and it allows users' rights to be kept up to date without their intervention. This supports some very important scenarios which are key to a successful IRM solution.

    A user doesn't have to make any decision when going offline: they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline.

    If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and happens to make a connection to the internet 3 days into that offline period: the Oracle IRM Desktop detects the online state and automatically updates all rights for the user. This means the business risk of long offline periods is reduced; thanks to the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week-long offline period, you cannot effect change, but you accepted that risk when granting the 7-day offline period in the first place.

    If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important. Consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.

    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.

    So what about problems accessing the offline rights database?

    So onto the core issue: when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the IRM Desktop installation and the Windows user account. Why? What you do not want is for someone to obtain their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines, because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error "Unable to Connect to Offline database". The most common instance of this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, changes the unique information of that virtual machine and as such invalidates the offline database.

    How do you solve the problem?

    Resolution is simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached on more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one machine, because the server still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe, which is part of the IRM Desktop and holds an open lock on the offline database. Either kill this process from Task Manager or use pskill as part of the script.
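    For reference, here is a rough C# equivalent of what such a cleanup script has to do. It is only a sketch: the cache folder path is a placeholder (the real location depends on the IRM Desktop version and is not given here), and killing irmBackground.exe mirrors the pskill step mentioned above.

    using System;
    using System.Diagnostics;
    using System.IO;

    class ResetIrmOfflineCache
    {
        static void Main()
        {
            // irmBackground.exe holds a lock on the offline database, so stop it first
            // (the original script uses pskill for the same purpose).
            foreach (Process p in Process.GetProcessesByName("irmBackground"))
            {
                p.Kill();
                p.WaitForExit();
            }

            // PLACEHOLDER: the actual location of the offline database files depends on
            // the IRM Desktop version and is not specified here -- substitute the real path.
            string offlineDbFolder = @"C:\path\to\oracle\irm\offline\cache";

            if (Directory.Exists(offlineDbFolder))
            {
                foreach (string file in Directory.GetFiles(offlineDbFolder))
                {
                    File.Delete(file);
                }
            }

            // The files are recreated with a fresh, valid encryption key the next time
            // the Oracle IRM Desktop starts.
            Console.WriteLine("Offline rights cache cleared; restart the Oracle IRM Desktop.");
        }
    }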

    Read the article

  • Oracle Database Firewall 5.0 is available for download

    - by Lajos Sárecz
    On May 20, 2010 we announced that we had acquired Secerno, the company developing the database firewall solution. Since then relatively little has been heard about the product; here in Hungary the only presentation on it was given by Stuart Sharp at the autumn ITBN conference, in barely half an hour. Moreover, since the acquisition the product could not even be purchased, as the development work following the merge was not yet complete. Since January 11, however, the Oracle Database Firewall 5.0 installer can be downloaded from the Oracle eDelivery site, within the Oracle Database Product Pack, for the Linux x86 platform. The Database Firewall can be considered the first line of database defense. It monitors database activity on the network in real time. With its SQL language analyzer it can detect, with great precision, external and internal attacks and transactions executed without authorization or with malicious intent. The sophistication of the SQL analyzer allows filtering with close to 100% accuracy and reliability, which is extremely important: it is not enough to filter out every attacking transaction, it is just as important not to block a single transaction belonging to normal business operation, since that too can cause serious business damage. Anyone who registers for and attends our Oracle Security Summit event on January 27 can learn more details about the database firewall; according to plan, Stuart Sharp will again give a presentation there, but this time, in a full hour, he will be able to share many more details with Hungarian customers and partners. Before the Database Firewall presentation I will give an approximately half-hour overview of Oracle Database security solutions.

    Read the article

  • Windows Server 2003 R2 SP2 can't access the internet

    - by Ishmael
    Our 64-bit Windows server can no longer access the internet, although it is a web server hosting web pages that are accessible from the internet. Because of this, the server can no longer receive Windows updates or Symantec virus definitions. We have checked our firewalls and nothing is blocked. Any insight into why this issue happened would be appreciated. The server is also running IE6, which we have been trying to update.

    Read the article

  • MAA with the Database Machine: maximum availability

    - by Fekete Zoltán
    A few days ago an Oracle white paper exploring maximum availability was published :): Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine. This document analyzes the use of Oracle Data Guard in an Exadata environment, and the last pages contain some extremely useful links. What can Data Guard be used for? Handling disaster scenarios, rolling database upgrades, one way to migrate to the Exadata environment, and making use of the standby database. The Sun Oracle Database Machine is available in three configurations: Full Rack, Half Rack and Quarter Rack. It can be upgraded, and even many Full Racks connected together can operate as a single machine. So the sky is the limit! :) After all, the sun is our most important star. The Database Machine already provides high availability on its own, since every component important to its operation is at least duplicated, and of course this applies to the data as well. The Database Machine is an ideal, fast environment for running both OLTP and DW workloads, as well as for database consolidation. For transactional (OLTP) systems it has long been an important requirement to have a disaster site behind the primary site that can take over the database workload if a flood, fire, or other sad disaster were to strike the primary site. Nowadays the MAA, the Maximum Availability Architecture, also plays an important role in operating data warehouses (DW). The PDF can be downloaded here: Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine.

    Read the article

  • Passing data from one database to another database table (Access) (C#)

    - by SAMIR BHOGAYTA
    // Requires: using System; using System.Data; using System.Data.OleDb;
    // Connection string for the target database (Backup.mdb).
    string conString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Backup.mdb;Jet OLEDB:Database Password=12345";
    using (OleDbConnection dbconn = new OleDbConnection(conString))
    {
        try
        {
            if (dbconn.State == ConnectionState.Closed)
                dbconn.Open();
            // Copy every row from [Master] in the source database (\Data.mdb) into
            // [Master] in Backup.mdb, using an in-query Access connection string as the source.
            string selQuery = "INSERT INTO [Master] SELECT * FROM [MS Access;DATABASE=" + "\\Data.mdb" + ";].[Master]";
            OleDbCommand dbcommand = new OleDbCommand(selQuery, dbconn);
            dbcommand.CommandType = CommandType.Text;
            int result = dbcommand.ExecuteNonQuery(); // number of rows copied
        }
        catch (Exception ex)
        {
            // Don't swallow the error silently; at least surface it.
            Console.WriteLine(ex.Message);
        }
    }

    Read the article

  • Minimal set of critical database operations

    - by Juan Carlos Coto
    In designing the data layer code for an application, I'm trying to determine if there is a minimal set of database operations (both single and combined) that are essential for proper application function (i.e. the database is left in an expected state after every data access call). Is there a way to determine the minimal set of database operations (functions, transactions, etc.) that are critical for an application to function correctly? How do I find it? Thanks very much!

    Read the article

  • Improving terminal server performance for a specific app

    - by Matt
    We have a Windows 2003 terminal server running 2X application load balancing, hosting a client's application that is accessed by around 50 users. Each user has their own database. The database is file based, and the application is developed in Delphi, so I think the database may be BDE based. As you can imagine, there is probably quite a lot of disk I/O. Here are some of the perfmon counters: logged-in users (average) 20-25; CPU utilization (average) 80-100%; disk queue length (average) 1.6; % disk time (average) 111; page faults/sec (average) 1400. The application takes on average about a minute to load. As usual, the budget is tight. Are there basic Windows performance tuning tips people can recommend to improve things before we fork out on more RAM, etc.? The server is a 2.8 GHz Xeon with 3 GB of RAM.

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog, "Oracle High Availability". The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • How do I Integrate Production Database Hot Fixes into a Shared Database Development Model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from RedGate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL Server where developers first make changes and additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, pulls from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and we would therefore have to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to have OPS assign a developer the hotfix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • dependency analysis from C# code through to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive and does a lot more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including: determining the effects of database changes throughout the system; seeing hot spots in the database schema; finding dead stored procedures, tables, etc.; and understanding the existing code base. Does anyone know of any such tools?
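    While no single tool bridges C# call sites to columns out of the box, the database half of the picture can be pulled from SQL Server itself. A small sketch, assuming SQL Server 2008 or later (where sys.sql_expression_dependencies exists) and a hypothetical connection string; it lists object-to-object dependencies inside the database, which already helps with spotting dead procedures and schema hot spots.

    using System;
    using System.Data.SqlClient;

    class DbDependencyDump
    {
        static void Main()
        {
            // Hypothetical connection string -- point it at the database you want to analyze.
            const string connStr = "Server=.;Database=MyDb;Integrated Security=true";

            // sys.sql_expression_dependencies (SQL Server 2008+) records the object-to-object
            // dependencies declared inside the database: views, procedures, functions, triggers.
            const string sql = @"
                SELECT
                    ISNULL(OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id), '(unknown)') AS referencing_object,
                    ISNULL(ISNULL(d.referenced_schema_name, 'dbo') + '.' + d.referenced_entity_name, '(unknown)')  AS referenced_object
                FROM sys.sql_expression_dependencies AS d
                ORDER BY referencing_object, referenced_object;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Objects that never appear on the referenced side are candidates for dead code.
                        Console.WriteLine("{0} -> {1}", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }
    }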

    Read the article

  • Replication with SQL Server 2005 Express Edition and SQL Compact Edition 3.5

    - by Andy Gable
    I need some information on SQL Server 2005 Express Edition. What I want to do is have my central database serving the local machine databases, i.e. a back-office central database with the shop floor terminals hanging off it:

        Central database
        |------------------- Shop Floor Terminal 1
        |------------------- Shop Floor Terminal 2
        |------------------- Shop Floor Terminal 3
        |------------------- Shop Floor Terminal 4
        |------------------- Shop Floor Terminal 5
        |------------------- Shop Floor Terminal 6

    I want the shop floor terminals to PULL down any changes to the database as and when they happen (only selected changes are needed; a change would be "add new item" or "edit item info" used by the shop floor terminals, i.e. price, description, sale group). Is this possible with SQL 2005? I have the ability to write my own sync application, but I would need to know what to look for in the database that signals an update. Many thanks for any advice you can give. Andy
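    Merge replication between SQL Server 2005 Express and Compact 3.5 is one option, but if you build your own sync application, a lightweight approach is to add a rowversion column to the tables you care about and have each terminal poll for rows newer than the last version it has applied (deletes need separate handling, e.g. a tombstone table). A minimal sketch of the terminal-side polling, assuming a hypothetical Items table that already has a rowversion column named RowVer:

    using System;
    using System.Data.SqlClient;

    class ItemChangePoller
    {
        // Highest rowversion value this terminal has already applied (persist this locally).
        static byte[] lastVersion = new byte[8];

        static void PollForChanges(string connStr)
        {
            // Assumes a hypothetical Items table with a ROWVERSION column named RowVer;
            // SQL Server bumps RowVer automatically on every INSERT and UPDATE.
            const string sql = @"
                SELECT ItemId, Description, Price, SaleGroup, RowVer
                FROM dbo.Items
                WHERE RowVer > @lastVersion
                ORDER BY RowVer;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@lastVersion", System.Data.SqlDbType.Timestamp).Value = lastVersion;
                conn.Open();

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Apply the new/changed row to the terminal's local database here.
                        Console.WriteLine("Changed item {0}: {1} @ {2}",
                            reader.GetInt32(0), reader.GetString(1), reader.GetDecimal(2));

                        // Remember the highest version we have processed.
                        lastVersion = (byte[])reader["RowVer"];
                    }
                }
            }
        }

        static void Main()
        {
            // Hypothetical connection string to the central back-office database.
            PollForChanges("Server=BACKOFFICE;Database=Central;Integrated Security=true");
        }
    }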

    Read the article

  • Database Documentation - Lands of Trolls: Why and How?

    When database documentation is mentioned in an IT department, everybody nods wisely, yet everyone does their best to avoid doing it. Attention to database documentation can be the best investment of time a development group can make. It is essential, and no system can be properly maintained without it. Feodor gives a sensible explanation of and guidelines for the unloved task of creating database documentation.

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Announcing Oracle Audit Vault and Database Firewall

    - by Troy Kitch
    Today, Oracle announced the new Oracle Audit Vault and Database Firewall product, which unifies database activity monitoring and audit data analysis in one solution. This new product expands protection beyond Oracle and third party databases with support for auditing the operating system, directories and custom sources. Here are some of the key features of Oracle Audit Vault and Database Firewall: Single Administrator Console; Default Reports; Out-of-the-Box Compliance Reporting; Report with Data from Multiple Source Types; Audit Stored Procedure Calls (Not Visible on the Network); Extensive Audit Details; Blocking SQL Injection Attacks; Powerful Alerting Filter Conditions. To learn more about the new features in Oracle Audit Vault and Database Firewall, watch the on-demand webcast.

    Read the article

  • Backing up an online database

    - by Veejay
    I have a 70 MB database for my website, which is hosted with a provider. I am able to access the database remotely using SSMS 2008. With the website running, what is the best way to back up the database locally on my machine? Thanks
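    If the provider grants BACKUP DATABASE permission, one option is to run a standard full backup and then download the resulting file. Note that the TO DISK path is resolved on the database server, not on your local machine, so the provider must expose a writable path (many hosts instead offer a backup button in their control panel). A hedged sketch, with a hypothetical connection string and server-side path:

    using System;
    using System.Data.SqlClient;

    class RemoteDbBackup
    {
        static void Main()
        {
            // Hypothetical connection details for the hosted database.
            const string connStr = "Server=sql.myhost.example;Database=MySiteDb;User Id=me;Password=secret";

            // BACKUP DATABASE writes to a path on the SQL Server machine, not the client.
            // The provider has to grant BACKUP DATABASE permission and a writable folder.
            const string sql = @"BACKUP DATABASE [MySiteDb]
                                 TO DISK = N'D:\Backups\MySiteDb.bak'
                                 WITH INIT, COPY_ONLY, STATS = 10;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.CommandTimeout = 0;      // backups can run longer than the 30-second default
                conn.Open();
                cmd.ExecuteNonQuery();       // the backup runs online; the site keeps working
                Console.WriteLine("Backup finished; download the .bak file via FTP or the host's control panel.");
            }
        }
    }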

    Read the article

  • Windows API BackupRead failing with error 50 on Windows Storage Server 2008 R2

    - by Jason F
    We have a backup application that uses the Windows API BackupRead. It works correctly on Windows Server 2003, 2008, 2008 R2. It does not work on Storage Server 2008 R2. It always fails with error 50 - The request is not supported. The documentation for BackupRead gives no indication that it will not work with Storage Server 2008 R2. Anyone else have any experience using this API on Storage Server 2008 R2?
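    For anyone trying to narrow this down, a minimal repro helps show whether the failure is in BackupRead itself or elsewhere in the backup application. The sketch below is an assumption-laden probe, not production code: it opens an arbitrary test file with backup semantics and reads its backup stream, printing the Win32 error (50 = ERROR_NOT_SUPPORTED) if the call fails.

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    class BackupReadProbe
    {
        const uint GENERIC_READ = 0x80000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint OPEN_EXISTING = 3;
        const uint FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern SafeFileHandle CreateFile(string lpFileName, uint dwDesiredAccess,
            uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
            uint dwFlagsAndAttributes, IntPtr hTemplateFile);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool BackupRead(SafeFileHandle hFile, byte[] lpBuffer,
            uint nNumberOfBytesToRead, out uint lpNumberOfBytesRead,
            bool bAbort, bool bProcessSecurity, ref IntPtr lpContext);

        static void Main(string[] args)
        {
            // Hypothetical test file -- pass a path on the volume being backed up.
            string path = args.Length > 0 ? args[0] : @"C:\temp\test.txt";

            using (SafeFileHandle h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ,
                IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero))
            {
                if (h.IsInvalid)
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                byte[] buffer = new byte[64 * 1024];
                IntPtr context = IntPtr.Zero;
                uint bytesRead;
                bool ok;

                // Read the file's backup stream; error 50 (ERROR_NOT_SUPPORTED) would surface here.
                while ((ok = BackupRead(h, buffer, (uint)buffer.Length, out bytesRead,
                                        false, true, ref context)) && bytesRead > 0)
                {
                    Console.WriteLine("Read {0} bytes of backup stream data", bytesRead);
                }

                if (!ok)
                    Console.WriteLine("BackupRead failed with Win32 error {0}", Marshal.GetLastWin32Error());

                // Per the API contract, call once more with bAbort = true to release the context.
                uint discard;
                BackupRead(h, buffer, 0, out discard, true, false, ref context);
            }
        }
    }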

    Read the article

  • SQL Server 2000 restore error

    - by kv
    I'm trying to do a point-in-time restore and the following error occurs, and the database goes into the xxxxx (loading) state: "Backup set cannot be applied because it is on a recovery path inconsistent with the database." I have to run RESTORE DATABASE xxxxx WITH RECOVERY to make it usable again. Why is this happening?
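    The usual cause of this message is that a log backup is being applied from a different recovery fork, typically because an earlier step was restored WITH RECOVERY (or the database was otherwise brought online) before the whole chain had been applied. For a point-in-time restore, every backup except the last must be restored WITH NORECOVERY, and only the final log restore uses STOPAT together with RECOVERY. A sketch of the sequence, executed from C# against master, with hypothetical backup file names and target time:

    using System;
    using System.Data.SqlClient;

    class PointInTimeRestore
    {
        static void Main()
        {
            // Connect to master, not to the database being restored (hypothetical server/credentials).
            const string connStr = "Server=MYSERVER;Database=master;Integrated Security=SSPI";

            // All backups up to the target time stay WITH NORECOVERY; only the final
            // statement uses STOPAT + RECOVERY. Restoring any intermediate step WITH
            // RECOVERY starts a new recovery fork, and later log backups from the old
            // fork then fail with "recovery path inconsistent with database".
            string[] steps =
            {
                @"RESTORE DATABASE xxxxx FROM DISK = N'D:\Backups\xxxxx_full.bak' WITH NORECOVERY, REPLACE",
                @"RESTORE LOG xxxxx FROM DISK = N'D:\Backups\xxxxx_log1.trn' WITH NORECOVERY",
                @"RESTORE LOG xxxxx FROM DISK = N'D:\Backups\xxxxx_log2.trn' WITH STOPAT = '2010-05-01 14:30:00', RECOVERY"
            };

            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                foreach (string sql in steps)
                {
                    using (var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 })
                    {
                        Console.WriteLine(sql);  // log each restore step as it runs
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }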

    Read the article

  • BizTalk Server 2009 highly available configuration

    - by Matthew
    Hi. I am setting up a BizTalk Server 2009 production environment and wonder whether this tutorial, http://msdn.microsoft.com/en-us/library/cc558972(BTS.10).aspx, is still fairly accurate for an installation on the latest platform software (Hyper-V, Windows Server 2008, BizTalk Server 2009, SQL Server 2008). Is there anything like this but more up to date? Any other advice about this process, and links to articles, blogs & tutorials, would be appreciated.

    Read the article

  • SQLAuthority News – SQL Server Wait Stats – eBook to Download on Kindle – Answer to FREE PDF Download Request

    - by pinaldave
    Being a book author is a completely new experience for me. I have yet to come across the issues faced by expert book authors; I assume these interesting issues are routine for them. My SQL Server Wait Stats book [Amazon] | [Flipkart] | [Kindle] is my humble attempt at writing a book, and one of the biggest things I keep receiving about it is requests for free copies. This is our very first experiment, and the book is only the beginning of the subject of SQL Server Wait Stats; we will come up with a new version of the book later next year when we have enough information for the SQL Server 2012 version. The following are the top two requests that I keep receiving in emails, on blogs, Twitter, and Facebook: "Please send us a FREE PDF of your book so we do not have to purchase it." "If you can share the eBook (free and downloadable) format of your book with us, we will share it with everybody we know and you will get additional exposure." Here is my response to the above requests: If you really need my book and cannot purchase it due to financial trouble, then feel free to let me know and I will purchase it myself and ship it to you. If you are in a country where the print book is not available, then you can buy the Kindle book, which is available online in any country, and you can read it on your computer and mobile devices. You DO NOT have to own a Kindle to read a Kindle-format book; you can freely download the Kindle software for your platform and purchase the book online. For the next 5 days, the Kindle book is available at 3.99 in the USA, and in other countries the price is anywhere between 3.99 and 5.99. The price will go up by USD 2 everywhere across the world after 1st November, 2011. Here is the link to download the Kindle software for free for PC, WP7, and various other mobile devices via the marketplace. I thank you for the warm response to the SQL Server Wait Stats book. I am motivated to write the next expanded version of this book. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Join the Webcast on September 18 to Learn Benefits of Upgrading to Oracle Database 11g

    - by Cinzia Mascanzoni
    Attend the Webcast on Tuesday September 18, 2012, at 10 a.m. PT to learn the "Three Compelling Reasons to Upgrade to Oracle Database 11g." During the live Webcast, Oracle experts will explain how customers who are still working with Oracle Database 10g or an even older version can gain the business, operational, and technical benefits provided by Oracle Database 11g. If you cannot participate in the live event, a replay will be available on the same registration page shortly afterward.

    Read the article
