Search Results

Search found 1024 results on 41 pages for 'dba'.

Page 9/41 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Opening DBF files in Oracle 10g

    - by nagaraju
    This is Nagaraju, from Hyderabad, India. I installed the Oracle 10g trial version on my system (E drive) and created a database under my name (database: nagaraju), and in it created the tables, procedures, functions, sequences, etc. for my project. Due to a sudden problem I formatted my machine's C drive, and now I am not able to open my database, but I need all the procedures and tables I created in it. I have now installed Oracle 10g again in another folder. How can I copy my old database into my new installation's database? Or can I copy the scripts of the procedures so that I can run them in the new database? I have all the data in the Oradata folder (DBF files, etc.). Could you please help me with how to do that?
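
    If every datafile under Oradata survived, including SYSTEM, one recovery avenue is to install the same Oracle version, create an instance, and rebuild a control file around the old files. A minimal sketch, assuming that layout; every path, file name, and the database name below is an illustrative assumption, not a detail from the question:

        -- Run in SQL*Plus as SYSDBA on an instance of the same version;
        -- list every surviving .dbf from the old Oradata folder.
        STARTUP NOMOUNT;
        CREATE CONTROLFILE REUSE DATABASE "NAGARAJU" RESETLOGS
          LOGFILE GROUP 1 ('E:\oradata\nagaraju\redo01.log') SIZE 50M,
                  GROUP 2 ('E:\oradata\nagaraju\redo02.log') SIZE 50M
          DATAFILE 'E:\oradata\nagaraju\system01.dbf',
                   'E:\oradata\nagaraju\undotbs01.dbf',
                   'E:\oradata\nagaraju\users01.dbf';
        ALTER DATABASE OPEN RESETLOGS;
        -- Once open, export the schema (exp or expdp) and import it into
        -- the new installation rather than copying files around.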

    Read the article

  • Linux: How to rename old mysqld when upgrading MySQL?

    - by Continuation
    I'm upgrading from MySQL 5.0 to Percona Server 5.1. I'm planning to just use yum remove and yum install to do the upgrade. However, I read in the documentation that it's a good idea to rename the old mysqld to mysqld-5.0, so that if the upgrade doesn't work, I can just revert to the old version. How exactly does this work? If I use yum remove, doesn't that mean the old mysqld is removed? So how do I rename it? Where is mysqld located? How do I find it? Thanks.

    Read the article

  • SQL Server 2008 permissions and encryption

    - by Paranjai
    I have encrypted columns in some of the tables in SQL Server 2008. As the database owner I have access to encrypt and decrypt the data using the symmetric key and certificate, but some other users currently have only datareader and datawriter rights, and when they execute any stored procedure whose logic uses the key and certificate, they get an error saying the user has no rights on the certificate. What rights / exact permissions should I grant them to solve this problem?
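
    For reference, the grants usually involved look like the sketch below; the key, certificate, and user names are placeholders. Opening a symmetric key generally requires VIEW DEFINITION on the key, and decrypting a key protected by a certificate requires CONTROL on that certificate.

        -- Placeholder names; adjust to the actual key, certificate, and users.
        GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO [AppUser];
        -- CONTROL on the certificate lets the user decrypt the key it protects:
        GRANT CONTROL ON CERTIFICATE::MyCert TO [AppUser];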

    Read the article

  • Can't see progress of rolling back SPID with KILL WITH STATUSONLY

    - by BradC
    I have a SPID on SQL 2005 that shows in Activity Monitor as "ROLLBACK" mode (because a transaction log filled up, not because it was manually killed). I tried to see how much time it has left to roll back with a KILL 115 WITH STATUSONLY but it just said "Status report cannot be obtained. Rollback operation for Process ID 115 is not in progress." Can I safely issue a "KILL 115" so that I can see the rollback status? Does this actually do anything on a spid currently in rollback?
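
    One non-invasive check worth noting: on SQL Server 2005 the percent_complete column of sys.dm_exec_requests is populated during a rollback, so progress can often be read without issuing another KILL. A sketch:

        -- Read-only; shows rollback progress without touching the session.
        SELECT session_id, command, status,
               percent_complete, estimated_completion_time
        FROM sys.dm_exec_requests
        WHERE session_id = 115;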

    Read the article

  • How to tell if any MySQL connections have been dropped or timed out?

    - by Continuation
    A client is using PHP to connect to MySQL. The PHP scripts and the MySQL database are located on 2 different Linux servers. He complained that database connections were being dropped or timed out and asked me to take a look. Is there any place in MySQL that can show me what and how many connections have been dropped or timed out? I looked into the slow query log and didn't see anything. Any suggestions on how to diagnose this dropped/timed-out database connection problem? Thanks. EDIT: The slow query log is enabled in my.cnf: log-slow-queries=/var/log/mysql-slow-queries.log And when I do a mysql> show global status; I get: | Slow_queries | 11402347 | So there are a lot of slow queries, but the file /var/log/mysql-slow-queries.log doesn't exist. Why is that?
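
    For anyone diagnosing the same symptom: MySQL keeps counters for aborted connections, and the idle timeout is an ordinary variable. A quick sketch of the standard checks (nothing here is specific to the asker's setup):

        -- Clients that connected successfully but were later dropped:
        SHOW GLOBAL STATUS LIKE 'Aborted_clients';
        -- Connection attempts that failed outright:
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';
        -- Idle connections are closed after this many seconds:
        SHOW GLOBAL VARIABLES LIKE 'wait_timeout';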

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day. The overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query. SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE TextData IS NOT NULL AND EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages: 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725 Am I correct in thinking that the top one (2,504,188 pages) is 20,033,504 KB, which is roughly 20,000 MB, about 20 GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because the cache keeps growing, and timeouts occur once SQL Server can no longer move pages through the buffer pool freely; costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM, or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL Server once a week when RAM usage is high, and suddenly the timeouts disappear; SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I know (not much yet) would be much appreciated. Leo.
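
    The page arithmetic checks out: SQL Server pages are 8 KB, so 2,504,188 pages is 20,033,504 KB, a little over 19 GB, which makes the question's ~20 GB estimate essentially right. As a cross-check on the Profiler numbers, the plan-cache DMVs can rank statements by logical reads directly; a sketch:

        -- Top cached statements by cumulative logical reads (SQL 2005/2008).
        SELECT TOP 10
               qs.total_logical_reads,
               qs.execution_count,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;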

    Read the article

  • MySQL: how to convert many MyISAM tables to InnoDB in a production database?

    - by Continuation
    We have a production database that is made up entirely of MyISAM tables. We are considering converting them to InnoDB to gain better concurrency and reliability. Can I just alter the MyISAM tables to InnoDB without shutting down MySQL? What are the recommended procedures here? How long will such a conversion take? All the tables have a total size of about 700 MB. There are quite a large number of tables. Is there any way to apply ALTER TABLE to all the MyISAM tables at once instead of doing it one by one? Any pitfalls I need to be aware of? Thank you
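
    There is no single-statement "convert everything", but the ALTER statements can be generated from the data dictionary and then run as a script; note that each table is locked while it rebuilds. A sketch with a placeholder schema name:

        -- Emits one ALTER per MyISAM table; save the output and execute it.
        SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
                      '` ENGINE=InnoDB;') AS stmt
        FROM information_schema.TABLES
        WHERE engine = 'MyISAM'
          AND table_schema = 'your_database';  -- placeholder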

    Read the article

  • Copying SharePoint DB to a new SharePoint 2010 server

    - by LJe
    Hi - we would like to know the best and easiest way to configure a new SharePoint 2010 server. We have backed up the existing DB of our SharePoint 2007 installation (up and running). We would like to mirror the settings and content of our current SharePoint setup on the new server using SharePoint 2010.
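
    For context, the SQL half of the usual "database attach" move is plain T-SQL; attaching the restored content database to SharePoint 2010 is then done with SharePoint's own tooling. A hedged sketch in which every name and path is a placeholder:

        -- Restore the SharePoint 2007 content DB onto the new SQL instance.
        RESTORE DATABASE [WSS_Content]
        FROM DISK = N'D:\Backups\WSS_Content.bak'
        WITH MOVE N'WSS_Content'     TO N'D:\Data\WSS_Content.mdf',
             MOVE N'WSS_Content_log' TO N'D:\Logs\WSS_Content_log.ldf',
             STATS = 10;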

    Read the article

  • Oracle DBA & Developer Day 2012: registration open (OTN Japan)

    - by OTN-J Master
    [The original Japanese text of this announcement was mis-encoded beyond recovery. What survives: an OTN Japan announcement for Oracle DBA & Developer Day 2012, mentioning Oracle Database Cloud Service and Pluggable Database topics presented by Michael Hichwa following Oracle OpenWorld 2012, a program of six session tracks including Oracle Database, Maximum Availability Architecture (MAA), BIG DATA, Oracle Fusion Middleware, and Oracle Solaris, an OTN corner, and a SPARC T4-1 server demo.]

    Read the article

  • One Oracle schema supporting a large request volume per day - is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects on a tight schedule, all of them Java APIs with no web pages. I designed a general flow engine to support all of the projects. The idea uses one Oracle schema with a general transaction table plus control/routing tables, and it is all nearly complete. But the DBA team is concerned about maintaining a very large request volume against one schema. One reason: if there is a problem with some table, the tablespace must be taken offline to fix it, which would affect every project. I am trying to convince them by splitting each table's data into partitions by project_code and a "month number to delete". Example partitions: PROJ1_05, PROJ1_06, PROJ1_07, PROJ2_05, PROJ2_06, PROJ2_07, with every transaction row stored in its partition. That way, if there is a problem in one part of a tablespace, only some partitions need to go offline, and another project using the same table can keep providing service. Volume should be around 10 million records per day. Is this a good idea? If I must use one schema, what strategy should I follow? Do you have any comments?
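
    To make the proposal concrete, composite partitioning along those two keys could look like the sketch below (Oracle 10g range-list syntax; every name is illustrative, not from the question). Dropping or taking offline one partition then touches only that month/project slice.

        CREATE TABLE txn_log (
            project_code VARCHAR2(10) NOT NULL,
            txn_month    NUMBER(2)    NOT NULL,
            payload      VARCHAR2(4000)
        )
        PARTITION BY RANGE (txn_month)
        SUBPARTITION BY LIST (project_code)
        SUBPARTITION TEMPLATE (
            SUBPARTITION proj1 VALUES ('PROJ1'),
            SUBPARTITION proj2 VALUES ('PROJ2'),
            SUBPARTITION other VALUES (DEFAULT)
        )
        (
            PARTITION m05 VALUES LESS THAN (6),   -- May
            PARTITION m06 VALUES LESS THAN (7),   -- June
            PARTITION m07 VALUES LESS THAN (8)    -- July
        );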

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up from this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got when trying to connect was ORA-01033: ORACLE initialization or shutdown in progress An attempt to alter database open ended up in ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' I tried to shut down / restart the database, but got this message: Total System Global Area 566231040 bytes Fixed Size 1220604 bytes Variable Size 117440516 bytes Database Buffers 444596224 bytes Redo Buffers 2973696 bytes Database mounted. ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' When everything continued the same, I erased the DBF files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespace using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got Database not open As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I imagine a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
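
    For readers hitting the same ORA-01157: once a datafile's contents are already gone (they were deleted here), the usual way to get the rest of the database open is to drop the file while mounted and then rebuild the tablespace. This is destructive and abandons whatever was in that tablespace; a last-resort sketch:

        -- Only when the file's data is expendable or restorable from a backup.
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE
            '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- Then remove the broken tablespace and recreate it cleanly:
        DROP TABLESPACE data INCLUDING CONTENTS AND DATAFILES;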

    Read the article

  • SSH from Ubuntu server to Windows 2008 repeatedly asks for password

    - by jrizos
    I am trying to set up Git using SSH mode. The central Git repository is on a NAS device running Windows 2008 Server, and the user Git repository is on Ubuntu 12.04. However, when I try to SSH to the Windows machine I am not able to get in. SSH keys are not set up, but I think the problem occurs even before that, since I can't get in just by providing the correct password. The output from the SSH command is below. Any help would be appreciated. dba@clpserv01:~$ ssh -v -l administrator clpnas OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to clpnas [***.***.***.***] port 22. debug1: Connection established. debug1: identity file /home/dba/.ssh/id_rsa type -1 debug1: identity file /home/dba/.ssh/id_rsa-cert type -1 debug1: identity file /home/dba/.ssh/id_dsa type -1 debug1: identity file /home/dba/.ssh/id_dsa-cert type -1 debug1: identity file /home/dba/.ssh/id_ecdsa type -1 debug1: identity file /home/dba/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze2 debug1: match: OpenSSH_5.5p1 Debian-6+squeeze2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA bd:37:d1:98:51:2a:d6:b5:f5:c7:98:d8:74:2c:4e:cd debug1: Host 'clpnas' is known and matches the RSA host key. debug1: Found key in /home/dba/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Trying private key: /home/dba/.ssh/id_rsa debug1: Trying private key: /home/dba/.ssh/id_dsa debug1: Trying private key: /home/dba/.ssh/id_ecdsa debug1: Next authentication method: keyboard-interactive Password: debug1: Authentications that can continue: publickey,password,keyboard-interactive Password:

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server, in order to reduce costs. I'm wondering if that's advisable.

    There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess of decreasing importance: Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!) MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers. MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs getting the configuration right. Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables). If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance. We can provide better service to global clients, for the same reason. If MyCorp wants to take their database in-house, then we can easily export their instance to give MyCorp their data. Load-balancing is easier because instances can be placed on different servers (this is with a web farm). When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data. Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles. Q1: What are other advantages of separate instances?

    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus: Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change. With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL. It will be easier for developers to debug and enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) What version is this customer running? and (2) Let's struggle to recreate the corresponding development environment, code and database. (We need a virtual machine that includes the code AND database instance for each patch and release!) Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something -- I don't know anything about the subject). The database becomes a viable persistent store for web session data, because there is just one instance. Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) work for: all the names are in one table. Reporting across customers is straightforward. Q2: What are other advantages of having multiple clients in one instance?

    Q3: Which approach do you think is better (why)? Instance per customer, or all customers in one instance? I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc. so customers could go from one multi-client instance to the next without anything breaking. THANKS SO MUCH for your collective advice! This issue is beyond me -- but not beyond the collective you. :) Hoytster

    Read the article

  • MySQL locale session variable?

    - by Maxim Veksler
    Hunting internationalization bugs here. Does MySQL have a variable that can be set per session, meaning that each connection will know the time zone of its client and act upon it? If such a variable exists, I would expect SQL statements such as the following to return different values based on the connection's session locale. select date('2010-04-14') + 0; Thank you, Maxim.
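
    MySQL does have per-session settings in this area: time_zone and (in recent versions) lc_time_names can both be set per connection; named zones require the time-zone tables to be loaded. A sketch:

        -- Both settings affect only the current connection.
        SET time_zone = '+05:30';      -- numeric offsets always work
        SET lc_time_names = 'es_ES';   -- locale used for month/day names
        SELECT @@session.time_zone, @@session.lc_time_names, NOW();

    Note that the example in the question, date('2010-04-14') + 0, is a constant expression and would not vary; functions such as NOW() or CONVERT_TZ() are where the session time zone shows up.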

    Read the article

  • Changing default logical filename in SQL 2005

    - by Andrew
    I have an issue with creating databases in SQL 2005. I want to be able to change the default logical filename for the mdf file. At the moment the log's logical filename ends in _log by default; I want the data logical filename to automatically end with _data for consistency. Is there a way I can set this? Andrew
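
    There is no server-wide default to change, but logical names can be set explicitly at creation time or renamed afterwards. A sketch with placeholder names:

        -- Rename an existing logical file name:
        ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb', NEWNAME = 'MyDb_data');
        -- Or name the files explicitly up front:
        CREATE DATABASE MyDb2
        ON (NAME = 'MyDb2_data', FILENAME = 'C:\Data\MyDb2.mdf')
        LOG ON (NAME = 'MyDb2_log', FILENAME = 'C:\Data\MyDb2_log.ldf');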

    Read the article

  • Oracle XE as local recovery database and Oracle Standard as main db

    - by jbendahan
    Hi, I just wanted to know what you guys think about this. I have an app written in Visual Basic .NET as my front-end and an Oracle 11g Standard database as the back-end, with about 20 PCs running the app locally. They're all inserting, updating, and deleting data on this single database. I want to develop a solution for the case where the server database crashes or cannot stay online, so I'm thinking of putting Oracle 10g XE on each PC and storing all the data in the local DB, then running a process once in a while (e.g. every 15 minutes) to send/get the data to/from the main server. What do you think about this strategy?

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has 4 SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production. The Problem: The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add or remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc. The Question: People have been doing this kind of work for decades, so I imagine there must be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • SQL Server Query Editors - any that warn of number of rows to be changed?

    - by ciaranarcher
    We're using SQL Server 2000 Query Analyzer, and one issue we have is that very occasionally, when a user is updating our live database, they supply an incorrect (or no!) WHERE clause. I know, not good, but it happens. Are there any editors that will warn of the number of rows that might be changed (if that is even possible), or even a way to configure an editor, if it is connected to a certain database, to prompt for confirmation before the query is run? We have a way to recover our data in the cases where we run an incorrect query, but it takes time, and I'm just seeing if there are any ways to catch the error, or at least give the user a second chance. Thanks in advance.
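
    Short of switching tools, a manual convention gives the same second chance: run the statement inside an explicit transaction, inspect @@ROWCOUNT, and commit only if the count looks right. A sketch (the UPDATE is a placeholder):

        BEGIN TRANSACTION;
        UPDATE dbo.Orders SET status = 'X' WHERE order_id = 42;  -- placeholder
        SELECT @@ROWCOUNT AS rows_affected;  -- sanity-check before committing
        -- COMMIT TRANSACTION;   -- if the count is what you expected
        -- ROLLBACK TRANSACTION; -- if it is not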

    Read the article

  • Script SQL Server login with Windows authentication without machine name

    - by Evgeny
    I want to write a SQL 2005 script to create a new login that uses Windows authentication. The Windows user is a local account (not a domain one). A local account with the same name exists on many SQL Server machines and I want to run the same script on all of them. It seemed simple enough: CREATE LOGIN [MyUser] FROM WINDOWS However, that doesn't work! SQL returns an error saying: Give the complete name: <domain\username>. Of course, I can do that for one machine and it works, but the same script will not work on other machines.
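
    A common workaround is to build the machine-qualified name at run time so the identical script works on every server. A sketch (the user name is a placeholder):

        -- SERVERPROPERTY('MachineName') returns the local computer name.
        DECLARE @sql NVARCHAR(300);
        SET @sql = N'CREATE LOGIN ['
                 + CAST(SERVERPROPERTY('MachineName') AS NVARCHAR(128))
                 + N'\MyUser] FROM WINDOWS';
        EXEC (@sql);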

    Read the article

  • Control SQL Server CLR Reserved Memory

    - by Ryu
    I've recently enabled CLR on my 64-bit SQL Server 2005 machine for about 3 procs. When I run the following query to gather some info on memory usage... select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb as TotalMemoryUsage, virtual_memory_reserved_kb from sys.dm_os_memory_clerks where type = 'MEMORYCLERK_SQLCLR' ...I get 129 MB of memory usage and 6.3 GB of virtual memory reserved. The total memory of the machine is 21 GB. What does reserved virtual memory mean exactly, and how can I control the size that is allocated? 6 GB is overkill for what we're doing, and the memory would be much better utilized by the sproc cache. I'm concerned this reserved memory will cause swapping to the page file. Please help me take back control of the memory! Thanks

    Read the article

  • Reclaim unused space in SQL 2008

    - by opensas
    I have a table with more than 300,000 records that weighs approximately 1.5 GB. In that table I have three varchar(5000) fields; the rest are small fields. I issued an update setting those three fields to ''. And after a shrink (database and files) the database uses almost the same space as before... DBCC SHRINKDATABASE(N'DataBase') DBCC SHRINKFILE (N'DataBase', 1757) DBCC SHRINKFILE (N'DataBase_log', 344) Any ideas?
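
    One step the sequence above omits: after a mass UPDATE the emptied bytes still sit inside each row's page until the rows are rewritten, so rebuilding the clustered index before shrinking is usually what actually releases the space. A sketch with a placeholder table name:

        -- Rewrite the rows compactly, then shrink to a target size in MB.
        ALTER INDEX ALL ON dbo.BigTable REBUILD;
        DBCC SHRINKFILE (N'DataBase', 500);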

    Read the article

  • Ajax phpMyAdmin alternative?

    - by timonweb
    I must say, I'm bored of phpMyAdmin. It's 2009, and we still have to work with this useful tool and wait for every page to reload after every action. Are there any Ajax alternatives out there? Or maybe phpMyAdmin itself is going to be Ajaxified?

    Read the article

  • Exception handling in SQL?

    - by Vineet
    Is there any way to handle exceptions in SQL (Oracle 9i)? I am trying to divide by the values of a column that contains both numbers and literals, so I need to fetch only the numbers from it: if a value is numeric, the division works, but if it contains literals, the division fails and generates an error. How do I handle those errors? Please suggest!
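
    In Oracle 9i the standard tool for this is a PL/SQL block with an EXCEPTION handler, trapping the failed conversion row by row. A sketch in which the table and column names are made up:

        -- Skips non-numeric values instead of failing the whole operation.
        DECLARE
            v_num NUMBER;
        BEGIN
            FOR r IN (SELECT mixed_col FROM my_table) LOOP
                BEGIN
                    v_num := TO_NUMBER(r.mixed_col);
                    DBMS_OUTPUT.PUT_LINE(100 / v_num);
                EXCEPTION
                    WHEN VALUE_ERROR OR INVALID_NUMBER THEN
                        NULL;  -- not a number: skip this row
                    WHEN ZERO_DIVIDE THEN
                        NULL;  -- zero: skip this row
                END;
            END LOOP;
        END;
        /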

    Read the article
