Search Results

Search found 1027 results on 42 pages for 'msps dba'.

Page 37/42 | < Previous Page | 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005). Typical output is 3K rows with a 5.5 minute execution time:

      SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID,
             dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID,
             dbo.Pointtype.TrendInterval, coalesce(dbo.Point.trendpts, 5) AS TrendPts,
             LastTimeStamp = PointDTTM, LastValue = PointValue, Timezone
      FROM dbo.Point
      LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID
      LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID
           AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData
                            WHERE PointData.PointID = Point.PointID)
      LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID
      LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID
      WHERE onlinetrended = 1 and WantTrend = 1

    PointData is the big one, but I thought its definition should allow me to pick up what I want easily enough:

      CREATE TABLE [dbo].[PointData](
          [PointID] [int] NOT NULL,
          [PointDTTM] [datetime] NOT NULL,
          [PointValue] [real] NULL,
          [DataQuality] [tinyint] NULL,
          CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED ([PointID] ASC, [PointDTTM] ASC)
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]
      GO

      CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData]
          ([PointID] ASC, [PointDTTM] DESC)
      WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
            IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      GO

    PointData is 550M rows, and Point (the source of PointID) is only 28K rows. I tried making an indexed view, but I can't figure out how to get the last timestamp/value out of it in a compatible way (no MAX, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointIDs that I selected. I thought about adding LastTime/LastValue columns directly to Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
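    A possible rewrite to try (an untested sketch; it assumes the existing IX_PointDataDesc index and replaces the correlated MAX() subquery with OUTER APPLY ... TOP 1, which SQL Server 2005 can usually satisfy with one descending seek per point). Which tables own Timezone, onlinetrended and WantTrend is a guess here, since the original query leaves them unqualified:

      SELECT p.PointDriverID, p.AssetID, p.PointID, p.PointTypeID, p.PointName,
             p.ForeignID, pt.TrendInterval, COALESCE(p.trendpts, 5) AS TrendPts,
             pd.PointDTTM AS LastTimeStamp, pd.PointValue AS LastValue, s.Timezone
      FROM dbo.Point AS p
      LEFT JOIN dbo.PointType AS pt ON pt.PointTypeID = p.PointTypeID
      OUTER APPLY (SELECT TOP (1) d.PointDTTM, d.PointValue
                   FROM dbo.PointData AS d
                   WHERE d.PointID = p.PointID
                   ORDER BY d.PointDTTM DESC) AS pd        -- one seek per Point row
      LEFT JOIN dbo.SiteAsset AS sa ON sa.AssetID = p.AssetID
      LEFT JOIN dbo.Site AS s ON s.SiteID = sa.SiteID
      WHERE p.onlinetrended = 1 AND pt.WantTrend = 1;      -- adjust column owners to the real schema

    Comparing the actual execution plan of this version against the original should show whether the optimizer was scanning PointData instead of seeking the (PointID, PointDTTM DESC) index.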

    Read the article

  • Partitioning with Hibernate

    - by Alex
    Hello, We have a requirement to delete about 200K rows from the database every day. Our application is Java/JEE based, using an Oracle DB and the Hibernate ORM tool. We explored various options: Hibernate batch processing, stored procedures, and database partitioning. Our DBA suggests database partitioning is the best way to go, so we can easily recreate and drop the partitioned table every day. Now the issue is that we have 2 kinds of data: one kind we want to delete every day, and the other we want to keep. Suppose this data is stored in table "Trade". With partitioning, we would have 2 "Trade" tables. We already have an existing Hibernate-based DAO layer to fetch/store trades from/to the DB. When we decide to partition the database, how can we control, through Hibernate, which of the two tables each trade goes into? Basically I want the trades that need to be deleted by end of day to go into the partitioned table, and the trades I want to keep to go into the main table. Please suggest how this can be done with Hibernate. We may add an additional column to identify the trades to be deleted, but how can we ensure these trades go to the partitioned trade table using Hibernate? I would appreciate it if someone could suggest a better approach in case we are on the wrong path.
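    One common way to make the DBA's suggestion work with a single logical table (so the Hibernate mapping stays one TRADE entity) is list partitioning on a flag column. This is an illustrative sketch only; the table and column names are hypothetical:

      -- Oracle list partitioning on a "purge" flag
      CREATE TABLE trade (
          trade_id    NUMBER        PRIMARY KEY,
          trade_date  DATE          NOT NULL,
          payload     VARCHAR2(400),
          purge_flag  CHAR(1)       NOT NULL   -- 'Y' = delete nightly, 'N' = keep
      )
      PARTITION BY LIST (purge_flag) (
          PARTITION p_purge VALUES ('Y'),
          PARTITION p_keep  VALUES ('N')
      );

      -- nightly cleanup: wipe only the daily partition, keep the rest
      ALTER TABLE trade TRUNCATE PARTITION p_purge;

    With this layout Hibernate just persists the entity with the flag set appropriately (the extra column you mentioned) and Oracle routes the row to the right partition; the nightly job becomes a single TRUNCATE PARTITION instead of a bulk DELETE.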

    Read the article

  • How can one convince a team to use a new technology (LINQ, MVC, etc.)?

    - by Atomiton
    Obviously, it's easier with some developers, but I'm sure many of us are on teams that prefer the status quo. You know the type: you see some benefit in a piece of new technology, and they prefer the tried and true methods. Try, for example, explaining to a DBA/C# programmer the advantages of using LINQ (not necessarily LINQ to SQL, just LINQ in general). Or, when a project requirement is to be cross-platform, instead of thinking about how one can run Windows on a Mac through a VM, introducing the idea of using the relatively new Silverlight or creating it in Java (as an option to look into). I know most people don't like to be out of their comfort zone, so it takes a bit of convincing, and not ALL new technology makes business sense... but how have you convinced your team to look at a new technology? What technologies have you successfully introduced to your workplace? What technologies do you think are hardest to introduce? (I'm thinking paradigm-shifting ones, like MVC from WebForms... or new languages.) What strategies do you employ to make these new technologies appealing?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. 2) So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, but I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
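    If the main goal is batching while still knowing each row's key, one option (an illustrative sketch; the table and column names are made up) is to generate the keys on the client and batch the inserts, rather than relying on AUTO_INCREMENT for identity:

      -- keep the auto-increment PK for insert locality, add a client-generated unique key
      CREATE TABLE event (
          id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          guid     BINARY(16)      NOT NULL,            -- generated by the application
          payload  VARCHAR(255)    NOT NULL,
          created  DATETIME        NOT NULL,
          UNIQUE KEY uk_event_guid (guid)
      ) ENGINE=InnoDB;

      -- one round trip inserts many rows; the application already knows every guid
      INSERT INTO event (guid, payload, created) VALUES
          (UNHEX(REPLACE('6ccd780c-baba-1026-9564-5b8c656024db', '-', '')), 'a', NOW()),
          (UNHEX(REPLACE('6ccd780c-baba-1026-9564-5b8c656024dc', '-', '')), 'b', NOW());

    Keeping the auto-increment column as the clustered primary key and putting the GUID in a secondary unique index avoids the random-insert page splits you were worried about, while batching cuts the number of round trips and index maintenance passes.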

    Read the article

  • ASP can't connect to SQL Server database

    - by birdus
    I'm trying to get a classic ASP application to connect to a local SQL Server 2008 database. The app and database were built by someone else; I'm just trying to get them installed and running on my machine (Windows 7). I'm getting the following error when the ASP app tries to connect to the database: Could not connect to database: Error Number: -2147467259 Error Message: [ConnectionOpen (Connect()).] does not exist or access denied. I don't see any messages in the Windows Event Viewer (I'm looking at Event Viewer - Windows Logs - Application). It's a fresh database install using a simple restore. The SQL Server install uses the default instance. SQL Server and Windows authentication are both allowed. I left the existing connection string (in the ASP code) intact and just tried adding that user to my SQL Server installation. Here's the connection string: strConn = "PROVIDER=SQLOLEDB;SERVER=localhost;UID=TheUser;PWD=ThePassword;DATABASE=TheDatabase;" To add that user to SQL Server, I went to Security/Logins in SSMS and added the user and the password, and selected the database in question as the default database. I thought that might do the trick, but it didn't. Then, I went into TheDatabase, then into Security there, and added a new user referencing the login I had already added under server Security. Under Owned Schemas I clicked db_owner, and under Role Members I checked db_accessadmin and db_owner. None of this gave the ASP application access to the database. The sid values match in sys.database_principals and sys.server_principals for the login in question, and I am able to log in to SSMS using this login. The app needs to execute selects against the database like this: oConn.Execute('select * from someTable') I'm not a DBA and am sort of grasping at straws here. How do I get this thing connected? Thanks, Jay
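    For reference, the SSMS steps can be expressed in T-SQL, which makes it easier to confirm exactly what exists (a sketch only; the names come from the placeholder connection string):

      USE [master];
      CREATE LOGIN [TheUser] WITH PASSWORD = N'ThePassword',
             DEFAULT_DATABASE = [TheDatabase], CHECK_POLICY = OFF;

      USE [TheDatabase];
      CREATE USER [TheUser] FOR LOGIN [TheUser];
      EXEC sp_addrolemember N'db_owner', N'TheUser';

      -- confirm the login/user mapping the question describes
      SELECT sp.name AS login_name, dp.name AS user_name
      FROM sys.server_principals AS sp
      JOIN sys.database_principals AS dp ON dp.sid = sp.sid
      WHERE sp.name = N'TheUser';

    Since the login already works from SSMS, the "does not exist or access denied" message from SQLOLEDB often points at connectivity rather than permissions: it is worth checking that TCP/IP (or Named Pipes) is enabled in SQL Server Configuration Manager and that "localhost" resolves to the default instance the database was restored into.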

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

      USE dbname
      GO
      CHECKPOINT
      GO
      BACKUP LOG dbname TO DISK='NULL'
      WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
      GO
      DBCC SHRINKFILE('dbname_Log', 2048)
      GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?
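    A mirrored database must run in the FULL recovery model, so the log is only marked reusable by genuine log backups taken on a schedule; DBCC SHRINKFILE can also only cut the file back to the last active portion of the log, so it sometimes takes a couple of backup/shrink cycles. A hedged sketch of what regular log maintenance could look like with the mirror left in place (the backup path is an assumption):

      -- back up the log to a real file; this marks the inactive log as reusable
      BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn'
      WITH INIT, NAME = N'dbname log backup';

      -- now the shrink has free space to give back
      USE dbname;
      DBCC SHRINKFILE (N'dbname_Log', 2048);

    Scheduling the log backup every 15-30 minutes (and keeping the files alongside the full backups for point-in-time restores) usually stops the growth entirely, so the one-off shrink is rarely needed again.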

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi Friends, I need your suggestion on tracing this issue. We run data load jobs in the early morning, loading data from an Excel file into a SQL Server 2005 database. When the job runs on the production server, it often takes 2 to 3 hours to complete. We could drill down to one job step which takes 99% of the total time. Running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, but the same step takes hours on the production server when it runs in the early morning as part of the job. The production server always gets stuck at that same job step. I would like to run a trace on just that job step (around 10 stored procs run for each user in a WHILE loop within the job step) and collect the info to figure out the issue. What are the ways available in SQL Server 2005 to achieve this? I want to trace only these SPs, not a whole time period on the production server, as a full trace gives lots of information and it becomes very difficult for me (not being a DBA) to analyze that much trace output and figure out the issue. So I want to collect info about these specific SPs only. Let me know what you suggest. Appreciate your time and help. Thanks.
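    One lightweight alternative to a full Profiler trace is to let the job run and then read the plan cache statistics for just those procedures. A sketch (SQL Server 2005 SP2 or later for the two-argument OBJECT_NAME; the LIKE pattern is a placeholder for the real proc names):

      SELECT OBJECT_NAME(st.objectid, st.dbid)       AS proc_name,
             qs.execution_count,
             qs.total_worker_time  / 1000            AS total_cpu_ms,
             qs.total_elapsed_time / 1000            AS total_elapsed_ms,
             qs.total_logical_reads,
             qs.total_physical_reads
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      WHERE OBJECT_NAME(st.objectid, st.dbid) LIKE N'usp_LoadUser%'   -- placeholder pattern
      ORDER BY qs.total_elapsed_time DESC;

    If an actual trace is still wanted, a server-side trace created with sp_trace_create and filtered with sp_trace_setfilter (for example on ObjectName, or on the SPID of the job's session) keeps the output limited to those procedures instead of the whole server.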

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi All, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that: the data can be created and edited on demand; it is always backed up and simple to restore in case the master copy becomes unusable; and all sensitive personal information residing in this database is guaranteed to be available only to authorized users. This database won't be online for the first 6 months; it will go online only after a website is built on top of it. I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS with its automatic backup facility was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fee this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?

    Read the article

  • Please recommend the one SQL book for a developer without a lot of SQL experience.

    - by Hamish Grubijan
    I have too many hobbies outside of my profession, so I am hoping to read just one good book and get a tad better at SQL. My background: I took one boring, theoretical class in databases and was exposed to SQL professionally (in addition to several other languages and technologies) for a year and a half. I've done about 5 years of C#/Java work professionally. By "professionally" I mean doing it full-time while someone paid me more than $25/hr for it - not necessarily that I created masterpieces along the way :) I want to become better at SQL (the coding aspect; DBA work is not of particular importance to me right now). I am looking for one book to give me a solid foundation in it. When I needed to learn some C almost from scratch, I used (and loved) this book: http://www.amazon.com/Programming-Language-2nd-Brian-Kernighan/dp/0131103628 I am hoping to find one just like this for SQL. I am not doing web development now or in the near future, and I am looking for something that is hopefully not specific to any one sub-industry. Thanks in advance.

    Read the article

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008.

    - by Hamish Grubijan
    Ok, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# over an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx Msg 6104, Level 16, State 1, Line 1 Cannot use KILL to kill your own process. However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as using COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
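    A common pattern is to run the restore from a separate connection to master (for example, a dedicated "maintenance" connection string in the test fixture), kicking everyone else off first. This is a sketch with assumed database and backup names:

      USE [master];
      ALTER DATABASE [TestDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- drops the other test connections
      RESTORE DATABASE [TestDb] FROM DISK = N'C:\Backups\TestDb.bak' WITH REPLACE;
      ALTER DATABASE [TestDb] SET MULTI_USER;

    Because the killing is done implicitly by ROLLBACK IMMEDIATE from a session connected to master rather than to the database being restored, the "cannot kill your own process" problem does not come up; the test clients do need to reopen their connections (or rely on pool validation) after each restore.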

    Read the article

  • Unable to insert DateTime format into database

    - by melvg
    I'm unable to insert the DateTime into my database. Am I writing the statement wrongly? Without the DateTime, I am able to insert into the database.

      string dateAndTime = date + " " + time;
      CultureInfo provider = CultureInfo.InvariantCulture;
      DateTime theDateTime = DateTime.ParseExact(dateAndTime, "d MMMM yyyy hh:mm tt", provider);

      // Create a connection, replace the data source name with the name of the
      // SQL Anywhere Demo Database that you installed
      SAConnection myConnection = new SAConnection("UserID=dba;Password=sql;DatabaseName=emaDB;ServerName=emaDB");
      // open the connection
      myConnection.Open();
      // Create a command object.
      SACommand insertAccount = myConnection.CreateCommand();
      // Specify a query.
      insertAccount.CommandText = ("INSERT INTO [meetingMinutes] (title,location,perioddate,periodtime,attenders,agenda,accountID,facilitator,datetime) VALUES ('" + title + "','" + location + "', '" + date + "','" + time + "', '" + attender + "','" + agenda + "', '" + accountID + "','" + facilitator + "','" + theDateTime + "')");
      try
      {
          insertAccount.ExecuteNonQuery();
          if (title == "" || agenda == "")
          {
              btnSubmit.Attributes.Add("onclick", "displayIfSuccessfulInsert();");
              //ScriptManager.RegisterStartupScript(this, GetType(), "error", "alert('Please ensure to have a title or agenda!');", true);
          }
          else
          {
              btnSubmit.Attributes.Add("onclick", "displayIfSuccessfulInsert();");
              Response.Redirect("HomePage.aspx");
              //ScriptManager.RegisterStartupScript(this, this.GetType(), "Redit", "alert('Minutes Created!'); window.location='" + Request.ApplicationPath + "/HomePage.aspx';", true);
          }
      }
      catch (Exception exception)
      {
          Console.WriteLine(exception);
      }
      finally
      {
          myConnection.Close();
      }

    The row never gets inserted into the database.
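    Two things worth checking: the exception is being swallowed by Console.WriteLine (so the real error from SQL Anywhere is never seen), and theDateTime is concatenated into the string with its culture-dependent ToString(), which the server may not parse. Independent of the C# side, the INSERT that reaches the server should look roughly like this sketch (all values are made up; note the ISO-style timestamp literal and the quoted "datetime" column, since DATETIME is also a type keyword):

      INSERT INTO meetingMinutes
          (title, location, perioddate, periodtime, attenders, agenda,
           accountID, facilitator, "datetime")
      VALUES
          ('Weekly sync', 'Room 4', '12 May 2010', '10:30 AM', 'Alice;Bob',
           'Budget review', '42', 'Carol', '2010-05-12 10:30:00');

    Using a parameterized SACommand (an SAParameter carrying the DateTime value) instead of string concatenation sidesteps both the formatting problem and the SQL injection risk.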

    Read the article

  • Is there a way to create subdatabases as a kind of subfolder in SQL Server?

    - by user193655
    I am creating an application where there is a main DB and where other data is stored in secondary databases. The secondary databases follow a "plugin" approach. I use SQL Server. A simple installation of the application will have just the main DB, while as an option one can activate more "plugins", and for every plugin there will be a new database. Now, the reason I made this choice is that I have to work with an existing legacy system, and this is the smartest way I could figure out to implement the plugin system. The main DB and the plugin DBs have exactly the same schema (basically the plugin DBs have some "special content", some important data that one can use as a kind of template - think of a letter template, for example - in the application). Plugin DBs are therefore used in read-only mode; they are "repositories of content". The "smart" thing is that the main application can also be used by plugin writers: they just write a DB, inserting content, and by making a backup of that database they have created a potential plugin (this is why all DBs have the same schema). These plugin DBs are downloaded from the internet whenever a content upgrade is available; each time, the full plugin DB is destroyed and a new one with the same name is created. This is for simplicity, and because the size of these DBs is generally small. Now, this works, but I would prefer to organize the DBs in a kind of tree structure, so that I can force the plugin DBs to be "sub-DBs" of the main application DB. As a workaround I am thinking of using naming rules, like: ApplicationDB (for the main application DB) and ApplicationDB_PlugIn_N (for the N-th plugin DB). When I search for plugin 1 I try to connect to ApplicationDB_PlugIn_1, and if I don't find the DB I raise an error. This situation can happen, for example, if some DBA renamed ApplicationDB_Plugin_1. Since those plugin DBs really depend only on ApplicationDB, I was trying to "do the subfolder trick". Can anyone suggest a way to do this? Can you comment on this self-made plugin approach I described above?
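    SQL Server has no notion of nested databases, so the usual workaround is exactly a naming convention plus a registry owned by the main DB, optionally with synonyms so the application code never hard-codes the plugin database names. A sketch (all object names here are invented for illustration):

      -- the main DB records which plugin databases it "owns"
      CREATE TABLE dbo.PluginRegistry (
          PluginId      int      NOT NULL PRIMARY KEY,
          DatabaseName  sysname  NOT NULL,
          IsInstalled   bit      NOT NULL DEFAULT (1)
      );

      INSERT INTO dbo.PluginRegistry (PluginId, DatabaseName)
      VALUES (1, N'ApplicationDB_PlugIn_1');

      -- optional: expose plugin content through a synonym in the main DB,
      -- recreated whenever the plugin DB is replaced
      CREATE SYNONYM dbo.Plugin1_Templates
          FOR ApplicationDB_PlugIn_1.dbo.Templates;   -- table name is hypothetical

    The registry lets the application detect a missing or renamed plugin DB gracefully (instead of failing on connect), and dropping/recreating the synonyms is cheap when a plugin database is rebuilt after a content upgrade.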

    Read the article

  • Database (and ORM) choice for a small-to-medium size .NET application

    - by jim
    I have a requirement to develop a .NET-based application whose data requirements are likely to exceed the 4 GB limit of SQL Server 2005 Express Edition. There may be other customers of the same application (in the future) with a requirement to use a specific DB platform (such as Oracle or SQL Server) due to in-house DBA expertise. Questions: What RDBMS would you recommend? From the looks of it the major choices are PostgreSQL, MySQL or Firebird; of these, I've only got experience with MySQL. Which ORM tool (if any) would you recommend using - ideally one that can be swapped out between DB platforms with minimal effort? I like the look of the Entity Framework but am unsure of the degree to which platforms other than SQL Server are supported. If it helps, we'll be using the 3.5 version of the Framework. I'm open to the idea of using a tool such as NHibernate. On the other hand, if it's going to be easier, I'm happy to write my own stored procedures / DAL code - there won't be that many tables (perhaps 30-35).

    Read the article

  • SQL Server 2005: Internal Query Processor Error:

    - by Geetha
    I am trying to execute the following procedure in SQL Server 2005. I was able to execute it on my development server, but when I tried to use it on the live server I get the error "Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services". I am using the same database and the same format. When we searched the web, it showed some fixes to be applied to SQL Server 2005 to avoid this error, but my DBA has confirmed that all the patches are up to date on our server. Can anyone give me some clue on this? Query:

      create Procedure [dbo].[sample_Select]
          @ID as varchar(40)
      as
      Declare @Execstring as varchar(1000)
      set @Execstring = ' Declare @MID as varchar(40) Set @MID = '''+@ID+'''
          select * from (
              select t1.field1, t1.field2 AS field2, t1.field3 AS field3, L.field1 AS field1, L.field2 AS field2
              from table1 AS t1
              INNER JOIN MasterTable AS L ON L.field1 = t1.field2
              where t1.field2 LIKE @MID
          ) as DataTable
          PIVOT ( Count(field2) FOR field3 IN ('
      Select @Execstring=@Execstring+ L.field2 +','
      FROM MasterTable AS L
      inner join table1 AS t1 ON t1.field1= L.field2
      Where t1.field2 LIKE @ID
      set @Execstring = stuff(@Execstring, len(@Execstring), 1, '')
      set @Execstring =@Execstring +')) as pivotTable'
      exec (@Execstring)
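    The "could not produce a query plan" error on one server but not the other sometimes comes from the optimizer choking on the dynamically built PIVOT list (for example when the column list ends up empty or contains unbracketed values). A more defensive sketch of the same idea, building and validating the column list first (the column and table names simply mirror the question; QUOTENAME guards odd column values):

      DECLARE @cols varchar(1000), @sql varchar(2000);

      SELECT @cols = COALESCE(@cols + ',', '') + QUOTENAME(L.field2)
      FROM MasterTable AS L
      INNER JOIN table1 AS t1 ON t1.field1 = L.field2
      WHERE t1.field2 LIKE @ID;

      IF @cols IS NULL
          RETURN;   -- nothing to pivot on; avoids handing the optimizer an empty IN () list

      SET @sql = 'select * from (
                      select t1.field1, t1.field3, L.field2
                      from table1 AS t1
                      INNER JOIN MasterTable AS L ON L.field1 = t1.field2
                      where t1.field2 LIKE ''' + @ID + '''
                  ) as DataTable
                  PIVOT ( Count(field2) FOR field3 IN (' + @cols + ') ) as pivotTable';
      EXEC (@sql);

    Printing @sql on the live server before executing it also makes it easy to see exactly which generated statement the error comes from.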

    Read the article

  • Which one gives you more advantages: Microsoft or SUN/Oracle certification?

    - by edwin.nathaniel
    Which one gives you more advantages: Microsoft or Sun/Oracle certification? If you don't mind, please relate your answer to your career (dev, sys-admin, DBA), industry (finance, general consulting, business apps, software house), and location (where you live - does it influence your decision?) as well. One reason I ask is that I heard having an MCP (in C#, MCSAD or whatever the cert is called these days) can earn you extra points from an employer's perspective if they're a Microsoft Partner. I don't quite understand why, and I'm also wondering if Sun/Oracle certificate holders have a similar edge/benefit. I'm also interested to know whether certification plays a role depending on where you live. For example: certification could be a big negative sign in Silicon Valley, but might be helpful if you're in other US/Canada cities or outside the USA/Canada. If you are an independent software consultant, do these certifications help you win bids? I totally understand that this question might be subjective and could be "closed", but it doesn't hurt to ask, I suppose. I also understand that there are people who dislike or disagree with (or have a negative perspective on) technology certification. While there are people whose opinion is that "these are not the kind of employers you would want to work with", I respect you guys, but sometimes a pure software product company might not exist in certain cities of the world. And if certification can help one's career somewhere else, I guess that means more opportunities, right? I've had experience with both platforms: (short) .NET and (not too long) Java (minus the Oracle DB stuff), but I think it's about time I pick one of them and get good at it. Where I live right now might also dictate my career, albeit insignificantly. Anyhow, this last paragraph is just a tiny blurb. Thanks!

    Read the article

  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table: it has a large number of rows (1 to 2 million); all the indexes used on this table have only "use row lock" ticked in their options; rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour); the table does not use partitions. Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to the TABLE level, which, combined with the row-lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements? Or is it a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
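    For reference, the escalation threshold applies per statement per object (roughly 5,000 locks held by one statement on a single table or index, or lock-memory pressure), not to the running total of a session. Changing and checking the setting is a one-liner each (the table name here is a stand-in for the real user table):

      -- stop escalation to the table level (partition-level escalation only applies to partitioned tables)
      ALTER TABLE dbo.UserActivity SET (LOCK_ESCALATION = DISABLE);

      -- confirm the current setting
      SELECT name, lock_escalation_desc
      FROM sys.tables
      WHERE name = N'UserActivity';

    Disabling escalation trades fewer blocking surprises for higher lock-memory usage, so it is worth watching sys.dm_tran_locks (or the Lock Memory counter) after the change.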

    Read the article

  • Compile Apache 2.4.3 on Centos 6.2 (64bit)

    - by RiseCakoPlusplus
    I attempt to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on CentOS 6.2 (64bit).

      ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem

      ./configure --with-included-apr --with-included-apr-util

    and when I issue make this happens:

      /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory
      libtool: link: cannot determine absolute directory name of `yes/lib'
      make[3]: *** [libaprutil-1.la] Error 1
      make[3]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
      make[2]: *** [all-recursive] Error 1
      make[2]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util'
      make[1]: *** [all-recursive] Error 1
      make[1]: Leaving directory `/root/httpd-2.4.3/srclib'
      make: *** [all-recursive] Error 1

    Anything I missed?

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding indexes. I made use of the TSQL script from MSDN that alters indexes based on the fragmentation percent returned by dm_db_index_physical_stats: if the fragmentation percent is more than 30, it does a REBUILD, otherwise a REORGANIZE. What I found was that in the first iteration there were 87 records which needed defragmenting. I ran the script and all 87 indexes (clustered & nonclustered) were rebuilt or reorganized. When I got the stats from dm_db_index_physical_stats again, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes. All the clustered indexes were fixed. No matter how many times I run the script to defrag these records, I still have the same indexes to be defragged, most of them with the same fragmentation %. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations; still, the rebuild/reorganize did not result in any change. More information: using SQL 2008; the script is as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx. Could you please explain why these 27 nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod
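    The usual explanation is index size: very small indexes (a few dozen pages or less) live on mixed extents, and rebuilding them cannot, and does not need to, reduce the reported fragmentation. A sketch of the fragmentation query with a page-count filter added, which is how most maintenance scripts decide what to skip:

      SELECT OBJECT_NAME(ips.object_id)             AS table_name,
             i.name                                 AS index_name,
             ips.index_type_desc,
             ips.avg_fragmentation_in_percent,
             ips.page_count
      FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN sys.indexes AS i
           ON i.object_id = ips.object_id AND i.index_id = ips.index_id
      WHERE ips.avg_fragmentation_in_percent > 30
        AND ips.page_count > 1000          -- ignore tiny indexes; their fragmentation % is meaningless
      ORDER BY ips.avg_fragmentation_in_percent DESC;

    If the 27 stubborn indexes all show a small page_count, they can safely be left alone; rebuilding them again will keep reporting the same percentage.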

    Read the article

  • Oracle Application Server Performance Monitoring and Tuning (CPU load high)

    - by Berkay
    I have just been hired by a company, and my boss has given me a performance issue to solve as soon as possible. I don't have any prior server-side Java EE experience. Let me describe what I have learned about the system so far; I still couldn't find the solution. We have an Oracle Application Server (10.1) and an Oracle Database server (9.2). The software guys wrote a fairly big J2EE project ("project X") using JSF 1.2 with Ajax, which is only used in this project, and they actively use PL/SQL in their code. So, we started the application server (a Solaris machine) and everything seemed OK. Users started using the app on Monday from different locations (about 200 have user accounts; I just checked and the connection pool is set up correctly, and sessions are active for only 15 minutes). After some time (2 days) CPU utilization gets high, around 60%, and at night it stays the same even though only 1 or 2 users are online; it even starts using the CPU allocated to other applications on the same server because they have freed it. If we don't restart the server, utilization reaches 90% over the following 2 days and the application is so slow that end users start calling. The main problem is that the software engineers say the code is clean, and the system and DBA managers say we have the correct configuration; the other applications on the same server seem OK, so why does this happen only to application X? I copied the DB to a test platform and upgraded it to the latest version, and did the same with the application server (WebLogic), to rule out a bug. I only tested by myself as a single user; from the WebLogic admin panel I can track the threads and dump them, and I noticed that some threads show up as hogging. When I check the manuals and follow the trace, it points me to the line number in a .java file where PL/SQL code is called. The software engineers say: "Yes, we have really complex PL/SQL code, but what's the relation with the application server? This is a problem of the DB server" - I guess they're right... I know the question has many holes; I'd like to give more detail, and I appreciate any guidance. Thanks in advance... Edit: the server has enough CPU and memory to run more complex applications.
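    Since the suspicion is PL/SQL called from the app tier, one low-risk first step is to see which SQL is doing the work on the 9.2 database while the load is high. A sketch using standard Oracle dynamic performance views (requires SELECT privileges on the V$ views):

      -- top statements by logical I/O (a reasonable CPU proxy) since instance start;
      -- Oracle 9i has no sql_id, so hash_value identifies the statement
      SELECT *
      FROM (
          SELECT hash_value,
                 executions,
                 buffer_gets,
                 disk_reads,
                 SUBSTR(sql_text, 1, 80) AS sql_text
          FROM v$sqlarea
          ORDER BY buffer_gets DESC
      )
      WHERE ROWNUM <= 10;

      -- which sessions are actively working right now
      SELECT s.sid, s.serial#, s.username, s.program, s.status
      FROM v$session s
      WHERE s.status = 'ACTIVE' AND s.username IS NOT NULL;

    If the top statements belong to application X's PL/SQL, the problem really is on the database side; if the database is mostly idle while the app server CPU climbs, the thread dumps (the "hogging" threads) point back at the JSF/JDBC layer, for example connections or result sets that are never closed.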

    Read the article

  • Oracle 9i Session Disconnections

    - by mlaverd
    I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that our JDBC connections disconnect after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :( Our server is set up with 3 plain databases (one is the main dev DB, another is the 'experimental' dev DB). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in version 9 of the driver). We're using Hibernate to talk to the DB. The only thing I can see that changed is that we now have more users connecting to the server: instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Anyone have any idea what's going on? Here's the stack trace on the client:

      Caused by: org.hibernate.exception.GenericJDBCException: could not execute query
          at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na]
          at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na]
          at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na]
          at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na]
          at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na]
          at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na]
          at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na]
          at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na]
          ...
      Caused by: java.sql.SQLException: Io exception: Connection reset
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
          at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na]
          at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
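    "Io exception: Connection reset" on an otherwise healthy query usually means the session was killed or timed out on the server or network side while it sat idle in the pool. Two things that are easy to check on the 9i box (a sketch; the pool should also be configured to test connections before handing them out):

      -- is an idle/connect time resource limit attached to the profile the dev accounts use?
      SELECT profile, resource_name, limit
      FROM dba_profiles
      WHERE resource_name IN ('IDLE_TIME', 'CONNECT_TIME');

      -- sessions Oracle has already marked as timed out show up with status SNIPED
      SELECT username, status, COUNT(*) AS sessions
      FROM v$session
      WHERE username IS NOT NULL
      GROUP BY username, status;

    If nothing turns up there, a firewall between the developers and the server silently dropping idle TCP connections is the other usual suspect; setting SQLNET.EXPIRE_TIME on the server, or giving the Hibernate pool a validation query such as "select 1 from dual", works around it.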

    Read the article

  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group and even set the permissions to 777. I have confirmed with phpinfo() that the error_log is correctly set, however The logging still only happens in my vhost's apache error log. The following is my php.ini for 5.3.3-7 on Debian Squeeze Apache 2: The top is populated with comments on what I have been interested, or have changed. I have deleted all comments to save space. Full versions here: http://pastebin.com/AhWLiQBR [PHP] ;short_open_tag = On ;allow_call_time_pass_reference = On ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ;display_errors = On ;display_startup_errors = Off ;log_errors = On ;html_errors = On error_log = "/var/log/apache2/php_errors.log" engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = On safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED display_errors = On display_startup_errors = Off log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On variables_order = "GPCS" request_order = "GPC" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 100M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_tmp_dir = /tmp upload_max_filesize = 100M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 
[Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba]

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
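    One point worth noting: PostgreSQL never "merges" WAL into the data files as a distinct operation; a checkpoint writes all dirty buffers to the data files, after which old segments are recycled or removed according to checkpoint_segments and wal_keep_segments. A sketch of forcing that by hand on an idle 9.2 instance (pg_switch_xlog requires superuser):

      -- flush all dirty pages, close out the current segment, then checkpoint again
      CHECKPOINT;
      SELECT pg_switch_xlog();
      CHECKPOINT;

      -- how many segments the server is entitled to keep around afterwards
      SHOW checkpoint_segments;
      SHOW wal_keep_segments;

    Even after this, pg_xlog will typically keep on the order of 2-3 times checkpoint_segments worth of recycled files unless those settings are lowered (followed by a reload). pg_resetxlog is best avoided for this purpose: it discards WAL that has not yet been checkpointed rather than applying it, so it can lose the most recent transactions.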

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected more quickly than before.

    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.

    User Propagation - As with other client/server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that the common database userid is still used but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

    Enhanced Security Definitions - Security administrators use the product browser front end to control the access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones in the product (including base package ones).

    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

    JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change, with no direct effect on how security operates externally.

    JMX Based Cache Management - In the last bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements, as it is quite a change and an exciting direction for the product line.

    Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution.

    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arose when the contractor left the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
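    As an aside for DBAs monitoring a setup that propagates the end user via CLIENT_IDENTIFIER, the propagated value is visible from standard Oracle views; this is a generic illustration, not specific to the Framework:

      -- the current session's propagated identifier
      SELECT SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') FROM dual;

      -- which end users are behind the pooled connections right now
      SELECT username, client_identifier, COUNT(*) AS sessions
      FROM v$session
      WHERE client_identifier IS NOT NULL
      GROUP BY username, client_identifier;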

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory the result sets of queries and stored procedures while keeping that cache correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. Connecting the application is done via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading the vendor's webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1: In my previous review I covered the first 4 things I noticed in the new SafePeak (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"): cache setup and fine-tuning (a critical part for getting good caching results), database templates, choosing which database to cache, and the monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found.

    5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and after it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is now shown for all arriving queries: read queries (which can be cached) but also all types of updates such as DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak's cache data: If you have an application with many parts, many servers and many possible locations that can actually update the database, or the SQL Server is accessible to many DBAs or software engineers, each can access the database directly and make changes without going through SafePeak - this can create a potential corruption of the data stored in the SafePeak cache. To make sure the SafePeak cache is correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes some changes, SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak brings a new feature called "Logon Trigger" to solve this challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that allows you to control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features: 7. User management: the ability to grant permissions to someone without changing the configuration, so they can use SafePeak purely as a performance analysis tool. 8. Better reports for analysis of performance, using 15 minute resolution charts. 9. Caching of client cursors. 10. Support for IPv6.

    Summary: SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or 3rd party, since no code changes are needed. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article
