Search Results

Search found 14719 results on 589 pages for 'optimization level'.


  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question before I continue writing about logging: what is the first thing that comes to your mind when you hear the word “Logging”? Now ask the same question to the person standing next to you. I am pretty confident that you will get different answers from different people. I decided to try this activity and asked five SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends and go over them one by one.

    Output Clause. The very first person replied “output clause.” A pretty interesting answer to start with, and I see exactly what he was thinking. SQL Server 2005 introduced the OUTPUT clause. The OUTPUT clause has access to the inserted and deleted tables (virtual tables), just like triggers. It can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. Here are some references for the OUTPUT clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE; Reasons for Using Output Clause – Quiz; Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples.

    Error Logs. I was expecting someone to mention error logs when the topic is logging. The error log is the first place to look when there is an error, whether with the application or with the operating system. I keep a policy of checking my server’s error log every day. The reason is simple: often enough in my career, when looking at error logs I have found something I was not expecting. There have been cases where I noticed errors in the error log and fixed them before end users noticed. Another common practice I always recommend to my DBA friends: when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and will be able to fix it based on the knowledge base they have created. There are many different kinds of error log files in SQL Server as well: 1) SQL Server Error Logs, 2) Windows Event Log, 3) SQL Server Agent Log, 4) SQL Server Profiler Log, 5) SQL Server Setup Log, etc. Here are some references for error logs: Recycle Error Log – Create New Log file without Server Restart; SQL Error Messages.

    Change Data Capture. This answer surprised me, and more than the answer itself, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
    Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008; Tuning the Performance of Change Data Capture in SQL Server 2008; Download Script of Change Data Capture (CDC); CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture.

    Dynamic Management Views (DMV). I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs store and record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs: high availability status, query execution details, SQL Server resource status, etc. Here are some references for Dynamic Management Views (DMV): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns; DMV – sys.dm_os_windows_info – Information about Operating System; DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28; DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module; Transaction Log Impact Detection Using DMV – dm_tran_database_transactions.

    Log Files. I almost flipped at this final answer from my friend. This should probably have been the first answer. Yes, indeed, the log file logs SQL Server activities. One could write endlessly about log files. SQL Server uses a log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and that the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models: 1) Simple, 2) Full, and 3) Bulk Logged. Each of these models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day! Here are some references for log files: How to Stop Growing Log File Too Big; Reduce the Virtual Log Files (VLFs) from LDF file; Log File Growing for Model Database – model Database Log File Grew Too Big; master Database Log File Grew Too Big; SHRINKFILE and TRUNCATE Log File in SQL Server 2008.

    Can I just say I loved this month’s T-SQL Tuesday question? It really provoked very interesting conversation around me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
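    Since the OUTPUT clause was the first answer, here is a minimal T-SQL sketch of the pattern described above (the table and column names are hypothetical, not from the post):

        -- Capture the rows an UPDATE actually touched, without a separate SELECT
        UPDATE dbo.Orders
        SET Status = 'Shipped'
        OUTPUT deleted.OrderID,
               deleted.Status  AS OldStatus,
               inserted.Status AS NewStatus
        WHERE ShipDate = CAST(GETDATE() AS DATE);

    The OUTPUT rows can also be routed INTO a table variable, which is what makes the clause useful as a lightweight logging mechanism.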

    Read the article

  • How do I get information about the level to the player object?

    - by pangaea
    I have a design problem with my Player and Level classes in my game. Below is a picture of the game. The problem is I don't want the player to move on the black space, only on the white space. I know how to do this, as all I need to do is check for sf::Color::Black, and I have methods to do this in the Level class. The problem is this piece of code:

        void Game::input()  { player.input(); }
        void Game::update() { (*level).update(); player.update(); }
        void Game::render() { (*level).render(); player.render(); }

    So as you can see, there is a problem: how do I get the map information from the Level class to the Player class? I was thinking that if I made the Player position static and passed it into the Level as a parameter in update, I could do it. The problem is interaction; I don't know what to do. I could maybe move the player into the Level class. However, what if I want multiple levels? So I have big design problems that I'm trying to solve.
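    One common way out of this dependency knot, sketched minimally below (all names apart from Player, Level, and the Game methods are assumptions, not the asker's code): pass the Level to the Player by const reference each update, so the Player can query walkability without owning the level.

        #include <vector>

        // Hypothetical sketch: the Player queries the Level instead of owning it.
        class Level {
        public:
            explicit Level(std::vector<std::vector<bool>> walkable)
                : walkable_(std::move(walkable)) {}
            // true if the tile at (x, y) is white space
            bool isWalkable(int x, int y) const {
                return y >= 0 && y < static_cast<int>(walkable_.size())
                    && x >= 0 && x < static_cast<int>(walkable_[y].size())
                    && walkable_[y][x];
            }
        private:
            std::vector<std::vector<bool>> walkable_;  // built from the black/white map
        };

        class Player {
        public:
            void update(const Level& level) {
                int nx = x_ + dx_, ny = y_ + dy_;
                if (level.isWalkable(nx, ny)) { x_ = nx; y_ = ny; }  // never step onto black
            }
        private:
            int x_ = 0, y_ = 0;
            int dx_ = 0, dy_ = 0;  // set by input handling
        };

    Game::update() would then call player.update(*level); because the Player only sees a reference, swapping in a different Level instance for multiple levels needs no change to the Player.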

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction

    Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases.

    It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are in place. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy to verify these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.

    Database settings to enforce on each user database:
    - Auto Close and Auto Shrink properties of the database set to False
    - Auto Create Statistics and Auto Update Statistics set to True
    - Compatibility Level of all user databases set to 100
    - Page Verify set to CHECKSUM
    - Recovery Model of all user databases set to Full
    - Restrict Access set to MULTI_USER

    Configure a Policy to Verify Database Settings

    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In the Object Explorer, click Management > Policy Management and you will see Policies, Conditions and Facets as child nodes.
    3. Right-click Policies and select New Policy… from the drop-down list, as shown in the snippet below, to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as “Implementing and Verify Database Settings for Production Databases” and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as “Verify and Change Database Settings”. In the Facet drop-down list, choose the Facet as Database Options, as shown in the snippet below. Under Expression, select the Field value @AutoClose, choose the Operator ‘=’, and finally choose the Value False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all the fields of the Database Options facet shown above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Database window, where you will see that the newly created condition “Verify and Change Database Settings” is selected by default. Continues…
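    For reference, the settings this policy checks can also be applied directly with T-SQL; a hedged sketch, using a hypothetical database name, for bringing a single out-of-compliance database back in line:

        -- Run once per non-compliant user database
        ALTER DATABASE [ProdDB] SET AUTO_CLOSE OFF;
        ALTER DATABASE [ProdDB] SET AUTO_SHRINK OFF;
        ALTER DATABASE [ProdDB] SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE [ProdDB] SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE [ProdDB] SET COMPATIBILITY_LEVEL = 100;   -- SQL Server 2008
        ALTER DATABASE [ProdDB] SET PAGE_VERIFY CHECKSUM;
        ALTER DATABASE [ProdDB] SET RECOVERY FULL;
        ALTER DATABASE [ProdDB] SET MULTI_USER;

    The advantage of the policy over the raw script is that the policy can evaluate every database on an instance at once and report which ones drift out of compliance.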

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm switched to Google Apps Premier edition two weeks ago, and aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server, we had numerous shared contact lists set up in the shared folders; we had a separate list for vendors, sales agents, etc. Is there no way to set up lists or groups like this at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers, unless you purchase some third-party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and this almost does it, but it falls short when trying to group contacts at the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking at creating a Google Docs spreadsheet to share with the domain, just to have a separate, defined list of contacts that is searchable by various fields... Anyone who could shed some light on domain-level contact sharing relating to the points above, I would be most grateful...

    Read the article

  • Does armcc optimize non-volatile variables with -O0?

    - by Dor
        int* Register = 0x00FF0000; // Address of micro-seconds timer
        while(*Register != 0);

    Should I declare *Register as volatile while using the armcc compiler and -O0 optimization? In other words: does -O0 optimization require qualifying that sort of variable as volatile (as is probably required at -O2 optimization)?
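    For comparison, the conventionally safe form of this busy-wait qualifies the pointed-to object as volatile, so the compiler must re-read the register on every iteration regardless of optimization level; a minimal sketch (the variable name is from the question, the type width is an assumption):

        /* volatile: the timer register can change outside normal program flow,
           so the load inside the loop must not be hoisted or removed */
        volatile unsigned int *Register = (volatile unsigned int *)0x00FF0000;

        while (*Register != 0) {
            /* spin until the micro-seconds timer reads zero */
        }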

    Read the article

  • Timer A usage in MSP430 in high compiler optimization mode

    - by Vishal
    Hi, I have used Timer A in the MSP430 with high compiler optimization, but found that my timer code fails when high compiler optimization is used. When no optimization is used, the code works fine. This code is used to achieve a 1 ms timer tick; timeOutCNT is incremented in the interrupt. Following is the code:

        // Disable interrupt and clear CCR0
        TIMER_A_TACTL = TIMER_A_TASSEL |      // set the clock source as SMCLK
                        TIMER_A_ID |          // set the divider to 8
                        TACLR |               // clear the timer
                        MC_1;                 // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;       // timer interrupt disabled
        TIMER_A_TACTL &= 0;                   // timer interrupt flag disabled
        CCTL0 = CCIE;                         // CCR0 interrupt enabled
        CCR0 = 500;
        TIMER_A_TACTL &= TIMER_A_TAIE;        // enable timer interrupt
        TIMER_A_TACTL &= TIMER_A_TAIFG;       // enable timer interrupt
        TACTL = TIMER_A_TASSEL + MC_1 + ID_3; // SMCLK, upmode

        timeOutCNT = 0;                       // timeOutCNT is increased in timer interrupt
        while(timeOutCNT <= 1);               // delay of 1 millisecond

        TIMER_A_TACTL = TIMER_A_TASSEL |      // set the clock source as SMCLK
                        TIMER_A_ID |          // set the divider to 8
                        TACLR |               // clear the timer
                        MC_1;                 // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;       // timer interrupt disabled
        TIMER_A_TACTL &= 0x00;                // timer interrupt flag disabled

    Can anybody help me resolve this issue? Is there any other way to use Timer A so that it works fine in optimization modes? Or have I used it wrongly to achieve a 1 ms interrupt? Thanks in advance. Vishal N
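    A hedged observation, since the excerpt never shows how timeOutCNT is declared: a busy-wait on a variable written only inside an ISR is exactly the pattern an optimizer breaks by caching the value in a register, and the usual cure is a volatile declaration:

        /* Assumption: timeOutCNT is currently a plain global. volatile forces the
           compiler to re-read it on every pass of the while loop, so the ISR's
           increments stay visible even at high optimization. */
        volatile unsigned int timeOutCNT;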

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512 MB minimum guaranteed memory, 510 MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.). After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7", and it seems to me that the results aren't any good; here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time increases very quickly with the number of simultaneous users: with 10 simultaneous users the time is about 1000 ms, and with 100 simultaneous users it's about 15000 ms (15 s!). The question is, do you think this is normal behavior for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions on how to speed this up a little?

    Read the article

  • Link between low level drivers and tty drivers

    - by agent.smith
    I was writing a console driver for Linux and I came across the tty interface that I need to set up for this driver. I got confused as to how tty drivers are bound to low-level drivers. Often the root file system already contains a lot of tty devices, and I am wondering how low-level devices can bind to one of the existing tty nodes on the root file system. For example, /dev/tty7 is a node on the root file system; how does a low-level device driver connect with this node? Or should that low-level device define a completely new tty device?

    Read the article

  • choosing the right RAID level

    - by student
    Recently we bought an "HP DL380 G6" server with six 146 GB SCSI HDDs for our college course management application and website, which has 10,000 daily visitors. We want to choose the best RAID level. How can we choose the right RAID level? What is the best RAID level for our application?

    Read the article

  • Row level user permissions, help with design

    - by bambam
    Hi, say I am creating a forums application. I understand how to design a forum-level permission system with groups; i.e., you create a forum-to-group mapping and assign users to a group to give them access to a particular forum. How can I refine the permissions to allow for row-level permissions (or, in forum terms, post-level)?
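    A minimal relational sketch of one way to push the group model down to rows (table and column names are illustrative, not from the question):

        -- Forum-level access stays as a forum-to-group mapping...
        CREATE TABLE forum_group_permission (
            forum_id  INT NOT NULL,
            group_id  INT NOT NULL,
            can_read  BIT NOT NULL DEFAULT 1,
            can_write BIT NOT NULL DEFAULT 0,
            PRIMARY KEY (forum_id, group_id)
        );

        -- ...and row (post) level access adds a parallel mapping keyed by post,
        -- consulted first, falling back to the forum rule when no row exists.
        CREATE TABLE post_group_permission (
            post_id   INT NOT NULL,
            group_id  INT NOT NULL,
            can_read  BIT NOT NULL DEFAULT 1,
            can_write BIT NOT NULL DEFAULT 0,
            PRIMARY KEY (post_id, group_id)
        );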

    Read the article

  • Redirect anything within /attachments/ directory up one level but also removing anything after /attachments/

    - by Rich Staats
    WordPress creates an attachment page for any image uploaded through the post editor and "attached" to a post. After migrating a site, those attachment pages no longer exist, and we now have around 1000 links pointing at 404s. So I was looking for a way to redirect any URL that has /attachment/ in its string back up one level (which happens to be the post page). So, for instance, http://mysite.com/2012/news/blog-post-title/attachment/image-page/ (which doesn't exist) will go to http://mysite.com/2012/news/blog-post-title/ (which does exist). Along with redirecting up one level, I also need to remove anything after /attachment/ (in this case, "image-page"). Any suggestions? Thanks in advance
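    Assuming the site runs on Apache with mod_alias enabled (the question does not name the server, so this is a sketch, not a confirmed fix), a single regex redirect can do both jobs at once:

        # /anything/attachment/whatever -> /anything/
        # $1 captures everything up to and including the slash before "attachment",
        # and the trailing .* discards whatever followed it (e.g. "image-page/")
        RedirectMatch 301 ^(/.*/)attachment/.* $1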

    Read the article

  • What does "cpuid level" mean?

    - by ogzylz
    For an example, I put just output from 1 core of a 16 core machine. What does the output mean by "cpuid level" of "6"? Also, what do "bogomips" of "5992.10" and "clflush size" of "64" mean?

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

    Read the article

  • Grails / GORM, Disable First-level Cache

    - by Stephen Swensen
    Suppose I have the following domain class, mapping to a legacy table, utilizing a read-only second-level cache, and having a transient field:

        class DomainObject {
            static def transients = ['userId']

            Long id
            Long userId

            static mapping = {
                cache usage: 'read-only'
                table 'SOME_TABLE'
            }
        }

    I have a problem: references to DomainObject are being shared due to first-level caching, and thus transient fields write over each other. For example:

        def r1 = DomainObject.get(1)
        r1.userId = 22
        def r2 = DomainObject.get(1)
        r2.userId = 34
        assert r1.userId == 34

    That is, r1 and r2 are references to the same instance. This is undesirable; I would like to cache the table data without sharing references. Any ideas?

    [Edit] Understanding the situation better now, I believe my question boils down to the following: is there any way to disable the first-level cache for a specific domain class while still using the second-level cache?

    Read the article

  • Three-level hierarchical data – LINQ

    - by user326010
    Hi, I have three-level hierarchical data. Using the statement below, I managed to display two levels of data; I need to extend it to one more level. The current hierarchy is Modules -> Documents. I need to extend it to Packages -> Modules -> Documents.

        var data = (from m in DataContext.SysModules
                    join d in DataContext.SysDocuments on m.ModuleID equals d.ModuleID into tempDocs
                    from SysDocument in tempDocs.DefaultIfEmpty()
                    group SysDocument by m).ToList();

    Regards, Tassadaque

    Read the article

  • choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I got a disk array appliance with eight 1 TB disks (UltraStor RS8IP4). It will be used solely by a PostgreSQL database, and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans; with the old disks that we have now, the slowdowns mostly happen on SELECTs. Fault tolerance is less important; it can be one or two disks. Space is the least important factor, and even 1 TB will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but perhaps other options would be even better.

    Read the article

  • Why does it matter that in Javascript, scope is function-level, not block-level?

    - by Jian Lin
    In the question http://stackoverflow.com/questions/1451009/javascript-infamous-loop-problem the accepted answer from Christoph says that JavaScript's scopes are function-level, not block-level. If JavaScript's scopes were block-level, would the infamous loop problem still occur? And would there be a different (or easier) way to fix it? Is this as opposed to other languages, where an opening { starts a new scope?
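    For context, a minimal sketch of the infamous loop problem the question refers to; with function-level var scope, every callback closes over the same single i:

        // Function-level scope: all three callbacks share one `i`.
        for (var i = 0; i < 3; i++) {
            setTimeout(function () { console.log(i); }, 0);  // logs 3, 3, 3
        }

        // With block-level scope each iteration would get its own binding
        // (shown with `let`, which arrived in the language well after this
        // question, purely to illustrate the difference):
        for (let j = 0; j < 3; j++) {
            setTimeout(function () { console.log(j); }, 0);  // logs 0, 1, 2
        }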

    Read the article

  • unstable / varying sound level (volume) within the same song / clip

    - by Bastien
    Hello, I have a pretty standard Acer desktop (I guess a predecessor of the X5900 model). When playing MP3s / radio / YouTube clips, the sound level changes WITHIN the clip, which is highly annoying. In the case of an MP3, the same file plays at a constant sound level on my iPod, so this is clearly an issue with the computer. The problem existed under Vista; I upgraded to Windows 7 and have the same issue. I guessed it might be soundcard-related, so I updated the drivers, but no luck there either. Any ideas? Thanks

    Read the article

  • SQL Server: serializable level not working

    - by Zé Carlos
    I have the following SP:

        CREATE PROCEDURE [dbo].[sp_LockReader]
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRY
                SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
                BEGIN TRAN
                    SELECT * FROM teste
                COMMIT TRAN
            END TRY
            BEGIN CATCH
                ROLLBACK TRAN
                SET TRANSACTION ISOLATION LEVEL READ COMMITTED
            END CATCH
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        END

    The table "teste" has many values, so "SELECT * FROM teste" takes several seconds. I run sp_LockReader at the same time in two different query windows, and the second one starts showing the contents of teste before the first one terminates. Shouldn't the serializable level force the second query to wait? How do I get the described behaviour? Thanks

    Read the article

  • Proper two-level iPad UITableView

    - by Knodel
    I have an iPad app with a split view. In the root view I need to make a two-level UITableView, so that the UIWebView in the detail view shows the corresponding content. What I need is a two-level UITableView without editing, moving, etc., so it can send the name of the row selected in the second level of the table to the DetailViewController. What is the best way to do this? Thanks in advance!

    Read the article

  • C++ Thread level constants

    - by Gokul
    Is there a way to simulate thread-level constants in C++? For example, if I have to call template functions, I need to supply the constants as template parameters. I can use static const variables for template metaprogramming, but those are process-level constants. I know I am asking a question with a high probability of 'No' for an answer; just thought of asking to capitalize on the very rare probability :)) Thanks, Gokul.
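    As an aside, later C++ standards added a direct spelling for per-thread values; a sketch using C++11 thread_local, which postdates this question, and which still cannot be used as a template argument:

        #include <iostream>
        #include <thread>

        // One copy per thread; the initializer runs once in each thread,
        // so every thread can hold its own thread-specific constant.
        thread_local const std::thread::id kMyThread = std::this_thread::get_id();

        int main() {
            std::thread t([] { std::cout << "worker: " << kMyThread << '\n'; });
            t.join();
            std::cout << "main:   " << kMyThread << '\n';
        }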

    Read the article

  • Could a DNSSEC at level n manipulate a zone at level n+2?

    - by jroith
    With some people wondering how DNSSEC could affect global censorship, I'd like to know whether DNSSEC could protect a zone from being partially modified by a grandparent zone. (The point of this question is not to suggest that ICANN or its members are not to be trusted, but to figure out how DNSSEC affects the power they could exercise in theory.) For example:
    - ICANN owns the root zone.
    - The .de zone is delegated to DENIC, Germany.
    - Assume example.de is delegated to some third party.
    Now, assuming DENIC does not remove example.de from their zone, would it be possible for them to redirect the subdomain abc.example.de elsewhere by returning signed records for abc.example.de from the .de servers? Similarly, would it be possible for the DNS root to easily return signed fake records for a third-level domain xy.z while the second-level zone z is not participating in this and is not otherwise affected?

    Read the article

  • How to set logging level for JDBCDriverLogging

    - by Scott
    I am trying to change the logging level to stop showing millions of lines like this:

        <May 26, 2010 10:26:02 AM EDT> <Debug> <JDBCDriverLogging> <000000> <2336: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 | |>

    I have tried adding this to my java command line:

        -Djava.util.logging.config.file=/foo/bar/logging.properties

    with this as my logging.properties file:

        handlers = java.util.logging.ConsoleHandler
        .level = OFF
        java.util.logging.ConsoleHandler.level = INFO

    No luck. I have tried this:

        Logger logger = Logger.getLogger("com.microsoft.sqlserver.jdbc");
        Handler handler = new ConsoleHandler();
        handler.setLevel(Level.INFO);
        logger.addHandler(handler);
        logger.setLevel(Level.INFO);
        logger.setUseParentHandlers(false);

    No luck. I have searched around, and all ideas center on one of these two options, so I must be doing something else wrong. I am using jtds-1.2.2.jar. Thanks for any suggestions.

    Read the article

  • MySQL Cluster transaction isolation level - READ_COMMITTED

    - by Doori Bar
    I'm learning mostly by reading the documentation. Unfortunately, http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed doesn't say anything, while it says everything. Confused? Me too. The ndb engine supports only the "READ_COMMITTED" transaction isolation level. A. It starts by saying the transaction "sets and reads its own fresh snapshot", which I translate to: the transaction has a separate 'zone', and whatever it stores there is what it reads back. B. Meanwhile, outside of the transaction, the old values stay unlocked. C. It continues with a "for locking reads" sentence; I have no idea what that means. Question: they claim only the READ_COMMITTED transaction isolation level is supported, but while handling a BLOB or a TEXT, they say the data is now "locked for reading" too. So is that a contradiction? Can a transaction lock for reading just as well while handling something other than BLOB/TEXT (such as integers)?

    Read the article
