Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 94/594

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression / caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).
    2. On the Oracle iPlanet Web Server machine, open the administrator's configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf.)
    3. Add the following lines to obj.conf inside <Object name="default"> ... </Object> and restart the Oracle iPlanet Web Server:

        #HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>

        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>

        #HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"
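
    Once the server is restarted, the new headers are easy to spot from any HTTP client. The snippet below is a quick, generic check (not part of the original steps; the URL is a placeholder for one of your static resources served through the iPlanet front end) that the Content-Encoding, Cache-Control and Expires headers are actually being returned:

        import urllib.request

        # hypothetical static resource behind the iPlanet web server
        url = "http://analytics.example.com/analyticsRes/common.js"
        req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})

        with urllib.request.urlopen(req) as resp:
            print("Content-Encoding:", resp.headers.get("Content-Encoding"))  # expect "gzip"
            print("Cache-Control:", resp.headers.get("Cache-Control"))        # expect "public,max-age=864000"
            print("Expires:", resp.headers.get("Expires"))                    # expect a date roughly 10 days ahead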

    Read the article

  • Upgrading/Installing Demantra 7.3.1.1? Check this out!

    - by user702295
    Here is a summary of install/upgrade topics and features for release 7.3.1.1:
    - Data Preservation Setting for General Levels
    - Deploying Demantra Application Server 10g
    - Important upgrade information
    - Known upgrade issues
    - Mozilla Firefox browser
    - Installer issues
    - Reviewing / simulating General Level data such as CTO Base Model Demand
    - Failure Rate Calculation
    - Demantra SSL client authentication and Java 6
    - CTO functionality does not work in release 7.3.1.1 after upgrading from 7.3.0 using the 'Platform Upgrade Only' option
    - User privileges and Export Worksheet to Excel
    - Cookie attribute causes logging issue in worksheet
    - List of bugs fixed in 7.3.1.1
    See the following for details: Demantra 7.3.1.1 Install / Upgrade Known Issues, Notes, Guidance, Defects, Workarounds (Doc ID 1370518.1); Related Documents For Demantra Version 7.3.1.1 And If Demantra Supports The Required Stacks (Doc ID 1367141.1).

    Read the article

  • Why does my system use so much cache?

    - by Dave M G
    Previously, on my desktop computer running Ubuntu 14.04, I had 4GB of RAM, which I thought should be plenty. However, after being on for a while, my computer would seem to get slow. I have a system resource monitor app in my Gnome panel, which I assume represents the available RAM (?). It shows a dark green area as "Memory" and a light green area as "Cache". The "Cache" would slowly grow until it filled the whole graph, and then programs would get slow to load, or it would take a while to switch programs. I could alleviate the problem somewhat with this command, but eventually the cache fills up again, so it's only a bandaid:

        sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"

    So I figured I'd get more RAM, replaced one 2GB stick with an 8GB stick, and now I have 10GB of RAM. My "cache" still slowly maxes out and my computer slows as a result. Also, sometimes the computer starts out with the "cache" maxed when I first boot and log in. Not always, though; I don't know if there's a pattern that determines why it happens. Why is Ubuntu using up so much cache? Is 10GB not enough for Ubuntu? In my Gnome panel's system monitor, the middle square shows RAM usage and the light green area is the "cache". My memory and swap history doesn't seem to include any information about "cache". I realize at this point I'm not totally clear on the difference between "cache" and "swap".
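
    For what it's worth, on Linux the page cache is normally allowed to grow into otherwise-idle RAM and is reclaimed automatically when programs need memory, so a full-looking "Cache" bar is not by itself a sign of a shortage. A small sketch (mine, not from the question) for separating the numbers the panel applet lumps together; the field names are the standard /proc/meminfo keys:

        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])  # values are reported in kB
            return info

        m = meminfo()
        print("Total:      %d MB" % (m["MemTotal"] // 1024))
        print("Free:       %d MB" % (m["MemFree"] // 1024))
        print("Page cache: %d MB" % (m["Cached"] // 1024))              # the light green "Cache" area
        print("Available:  %d MB" % (m.get("MemAvailable", 0) // 1024)) # free plus easily reclaimable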

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view. While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
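
    A rough sketch of that event-log idea (my own illustration; the table and column names are hypothetical): each row lists a one-time event a particular user still has to see, and completing or dismissing the event deletes the row, so nothing at all is stored for users who have already dealt with it.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE pending_events (user_id INTEGER, event TEXT, PRIMARY KEY (user_id, event))")

        def enqueue_event(user_id, event):
            db.execute("INSERT OR IGNORE INTO pending_events VALUES (?, ?)", (user_id, event))

        def pending_for(user_id):
            rows = db.execute("SELECT event FROM pending_events WHERE user_id = ?", (user_id,))
            return [r[0] for r in rows]

        def complete_event(user_id, event):
            db.execute("DELETE FROM pending_events WHERE user_id = ? AND event = ?", (user_id, event))

        enqueue_event(21312315, "new_tutorial")
        print(pending_for(21312315))   # ['new_tutorial'] -> show the tutorial on this page view
        complete_event(21312315, "new_tutorial")
        print(pending_for(21312315))   # [] -> nothing to show, and no per-user flag kept forever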

    Read the article

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and the form returned a specific object, you wouldn't have to re-find it later on, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
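
    A small sketch of the usual compromise (my own, not from the question): the controller hands the view only (id, label) pairs, keeps the real objects in a dictionary keyed by id, and the "re-find" after submit is a single constant-time lookup rather than a new search.

        class Hamburger:
            def __init__(self, id, name, has_tomato):
                self.id, self.name, self.has_tomato = id, name, has_tomato

        class Controller:
            def __init__(self, burgers):
                self._by_id = {b.id: b for b in burgers}

            def choices_for_view(self):
                # the view only ever sees plain data, never the domain objects
                return [(b.id, b.name) for b in self._by_id.values() if b.has_tomato]

            def on_user_picked(self, chosen_id):
                return self._by_id[chosen_id]   # O(1), no expensive re-search

        controller = Controller([Hamburger(1, "Classic", True), Hamburger(2, "BBQ", False)])
        print(controller.choices_for_view())        # [(1, 'Classic')]
        print(controller.on_user_picked(1).name)    # Classic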

    Read the article

  • sys.dm_exec_query_profiles – FAQ

    - by Michael Zilberstein
    As you probably know, this DMV is new in SQL Server 2014. It had been first announced in CTP1 but only in BOL. Now in CTP2 everyone can “play” with it. Since BOL is a little bit unclear (understatement detected), I’ve prepared this small FAQ as a result of discussion with Adam Machanic (blog | twitter) and Matan Yungman (blog | twitter). Q: What did you expect from sys.dm_exec_query_profiles? A: Expectations were very high – it promised, for the first time, ability to see _actual_ execution...(read more)

    Read the article

  • How can I strip down Ubuntu?

    - by Thomas Owens
    I'm trying to fix what I consider a bloated install of Ubuntu. When I install Ubuntu on a machine, I get things that I don't want: web browsers, office applications, media players, accessibility utilities, Ubuntu One, and so on. My goal is to create a way that I can have an install of Ubuntu that contains only the most minimal packages: the administrative tools and package manager, a GUI (my preference would be GNOME), a text editor, core drivers (video cards, network cards - wired and wireless, input devices), and anything else that I have to have to run a stable distribution. From there, I would like to pick and choose which packages I install to create my own customized system. After playing around with other distros like Arch and Slackware, I like how they provide a barebones install by default. However, I get trapped in a "configuration hell": right now, I tried moving away from Ubuntu to Arch, but after spending 6 hours with it, I still don't have a usable system. It's half configured and I don't have any usable software packages to enable me to work. Is there anything available that can help me? Either something like the OpenSUSE builder that lets you choose applications and packages for the CD, an advanced installation mode where I can choose the packages to install and which to ignore, or a guide on how to strip Ubuntu down to its bare bones? And I suppose a natural follow-up to this is: once I have a stripped-down Ubuntu, will this affect updating at all? When Canonical releases the next version of Ubuntu, I don't want any bloatware reinstalled. And yes, most of the applications that come with Ubuntu, I simply don't use. Ever.

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    Interesting little improvement on a SQL 2005 system I encountered recently. Some background: this system had a fairly 'traditional OLTP' workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours of the day until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used. As it was 2005, I decided to look at which tables were in cache before and after the overnight batch processing ran (using the DMV equivalent of dbcc memusage that I posted earlier). Here's what I saw: the contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run - a fairly standard batch, I'd suggest. Cache contents then looked quite different: most of the cache was now being used by a table I'd described as 'unimportant'. Why? Well, that table was the last to be reindexed, purely due to luck, as the reindexing stored procedure performed a loop in alphabetical order through all application tables. When the application starts to be used again, all this 'unimportant' data has to be replaced in cache by data that is heavily used. So we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.
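
    A rough illustration of the fix (my own sketch with made-up usage numbers, not the original maintenance job): order the reindex loop by how heavily each table is read, coldest first, so the data left in cache at the end of the batch is the data the morning workload actually wants.

        reads_per_table = {
            "UnimportantArchive": 1_000,
            "Orders": 9_000_000,
            "OrderLines": 12_000_000,
            "Customers": 5_000_000,
        }

        # coldest tables first, hottest last
        maintenance_order = sorted(reads_per_table, key=reads_per_table.get)
        for table in maintenance_order:
            print(f"ALTER INDEX ALL ON dbo.{table} REBUILD;")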

    Read the article

  • How to empty swap if there is free RAM?

    - by jfoucher
    When I open a RAM-intensive app (VirtualBox set at 2 GB RAM), some swap space is generally used, depending on what else I have open at the time. However, when I quit that application, the 2 GB of RAM are freed, but the same swap space usage remains. How can I tell Ubuntu to stop using that swap and to revert to using the RAM? Thank you. Edit: Right now, about 2 hours after having closed VirtualBox, I have 1.6 GB of free RAM and still 770 MB in swap.

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (shooter). The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for matrix conversion: (~0.005ms)

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy rotation matrix
            for ( int row=0; row<3; ++row )
                for ( int column=0; column<3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            for ( int column=0; column<3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid)
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
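
    One common way around the per-bone map lookup, sketched below in Python rather than the project's C++ purely as an illustration of the idea (this is not the poster's engine API): resolve the rigid body once when the bone is registered and keep that reference on the bone, so the per-frame update never has to call FindActorBody at all.

        class RigidBodyStub:
            # stand-in for the physics-engine body object
            def set_world_transform(self, world_transform):
                print("physics body moved to", world_transform)

        class Bone:
            def __init__(self, name, body):
                self.name = name
                self.body = body      # resolved once, when the bone is registered

            def on_animation_update(self, world_transform):
                # direct reference every frame, no actor-id -> body map lookup
                self.body.set_world_transform(world_transform)

        head = Bone("head", RigidBodyStub())
        head.on_animation_update((1.0, 2.0, 3.0))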

    Read the article

  • How many vertices are needed to draw reasonably good-looking terrain?

    - by bobbaluba
    I have some pretty expensive code in my terrain vertex shader, and I am trying to figure out if it will still be fast enough. I haven't yet developed a level-of-detail system for my terrain rendering, but I can easily benchmark my code by just drawing mock triangles. My problem is, how do I know how many vertices to test with? Are there for example rendering engines that will tell me how many terrain vertices are currently on-screen? Or maybe it is possible to create a formula that will give me an estimate based on screen resolution?
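
    There is no magic number, but a rough upper bound can be worked out from the resolution: if a level-of-detail system keeps on-screen triangles at roughly one per NxN-pixel block, the screen size caps how many vertices matter. A small back-of-the-envelope sketch (the pixels-per-triangle figure is purely an assumption to benchmark against, not an engine statistic):

        def estimated_vertices(width, height, pixels_per_triangle=8 * 8):
            triangles = (width * height) / pixels_per_triangle
            # on a regular grid there is roughly one unique vertex per two triangles
            return int(triangles * 0.5)

        for res in [(1280, 720), (1920, 1080), (2560, 1440)]:
            print(res, "->", estimated_vertices(*res), "vertices")
        # (1920, 1080) -> about 16,000 vertices with these assumptions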

    Read the article

  • Read Committed isolation level, indexed views and locking behavior

    - by Michael Zilberstein
    From BOL, " Key-Range Locking " article: Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level . The serializable isolation level requires that any query executed during a transaction must obtain the same set of rows every time it is executed during the transaction. A key range lock protects this requirement by preventing other transactions from inserting new rows whose...(read more)

    Read the article

  • Can an expert examine my .NET MVC 4 application? [on hold]

    - by Till Death Developer
    Problem Definition: I need an expert to examine my application, not for errors, but to have a look at how my implementation goes and tell me whether I am doing a good job or just creating a huge mess, and to give me suggestions on how I should improve my work.
    Points of Concern:
    - Neat solution (can find the thing you are looking for easily).
    - Low redundancy.
    - Efficiency (load time, speed, etc...).
    - Data access implementation.
    - Authentication system implementation.
    - Data services implementation.
    Note: The application is just a playground for testing new implementation approaches, so it may seem meaningless (because it is); however, that is not the subject anyway. I just need to know if I am doing things in a good way (nothing is the right way, but there is good and bad).
    Solution Link: http://www.mediafire.com/?8s70y44w16n1uyx

    Read the article

  • Improve the business logic

    - by Victor
    In my application, I have a feature like this: the user wants to add a new address to the database. Before adding the address, he needs to perform a search (using input parameters like country, city, street, etc.) and, when the list comes up, he will manually check whether the address he wants to add is present or not. If present, he will not add the address. Is there a way to make this process better? Maybe somehow eliminate a step, avoid the need for manual verification, etc.
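
    One possible refinement (my own sketch, not the existing feature): normalise the entered address and look for an exact match before inserting, so the search-and-eyeball step is only needed when a likely duplicate is actually found.

        def normalise(addr):
            return " ".join(addr.lower().replace(",", " ").split())

        existing = {normalise(a) for a in ["12 Main Street, Springfield", "7 Oak Ave, Shelbyville"]}

        def add_address(addr):
            key = normalise(addr)
            if key in existing:
                return "possible duplicate - ask the user to confirm"
            existing.add(key)
            return "added"

        print(add_address("12 main street Springfield"))   # possible duplicate - ask the user to confirm
        print(add_address("3 Elm Road, Springfield"))       # added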

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes as the objects in question are tanks) within the explosion radius. So this boils down to circle to rectangle collision and how far away the circle's radius is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the distance calculated. Note: All tank objects have an anchor point of (0,0) so position is according to bottom left corner of bounding box. Explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;
        // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                //Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                //Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Tank is either north or south and is inbetween left and right corner of rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: Smaller distance larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explsoion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if
        } // end if

    Questions: 1) Can this circle to rect collision algorithm be done better? Do I have too many checks? 2) How to calculate the percentage based damage?
My current method generates negative numbers occasionally and I don't understand why (Maybe I need more sleep!). But, in my if statement, I ask if distance < explosion radius. When control goes through, distance/radius must be < 1 right? So 1 - that intermediate calculation should not be negative. Appreciate any help/advice!
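
    For question 2, the linear falloff the TODO comment seems to be aiming for is damage = (1 - distance/radius) * maxDamage; as written in the snippet above, only the ratio is multiplied by maxDamage before the subtraction, so the expression drops below zero as soon as (distance/radius) * maxDamage exceeds 1. A tiny standalone check of the intended formula (my own sketch, not the poster's code):

        def splash_damage(distance, radius, max_damage):
            # the whole fraction (1 - d/r) is scaled by max_damage, then truncated to int
            assert distance < radius
            return int((1.0 - distance / radius) * max_damage)

        print(splash_damage(0.0, 100.0, 80))    # 80 -> right at the centre
        print(splash_damage(50.0, 100.0, 80))   # 40 -> half way out
        print(splash_damage(99.0, 100.0, 80))   # 0  -> grazing hit, never negative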

    Read the article

  • Beware of SQL Server and PerfMon differences in disk latency calculation

    - by Michael Zilberstein
    Recently the sp_blitz procedure on one of my OLTP servers returned an alarming notification about high latency on one of the disks (more than 100ms per IO). Our chief storage guy didn’t understand what I was talking about – according to his measurements, average latency is only about 15ms. In order to investigate the issue, I recorded 2 snapshots of sys.dm_io_virtual_file_stats and calculated latency per read and write separately. Results appeared to be even more alarming: while for read average latency...(read more)
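
    The per-interval calculation described above boils down to dividing the growth in stall time by the growth in I/O count between the two snapshots, separately for reads and writes. A small illustration with made-up snapshot values (the keys are the actual column names from sys.dm_io_virtual_file_stats):

        snapshot_1 = {"num_of_reads": 1_000_000, "io_stall_read_ms": 14_000_000,
                      "num_of_writes": 400_000, "io_stall_write_ms": 2_000_000}
        snapshot_2 = {"num_of_reads": 1_050_000, "io_stall_read_ms": 21_500_000,
                      "num_of_writes": 410_000, "io_stall_write_ms": 2_150_000}

        read_latency = ((snapshot_2["io_stall_read_ms"] - snapshot_1["io_stall_read_ms"])
                        / (snapshot_2["num_of_reads"] - snapshot_1["num_of_reads"]))
        write_latency = ((snapshot_2["io_stall_write_ms"] - snapshot_1["io_stall_write_ms"])
                         / (snapshot_2["num_of_writes"] - snapshot_1["num_of_writes"]))

        print(f"avg read latency in interval:  {read_latency:.0f} ms")   # 150 ms with these made-up numbers
        print(f"avg write latency in interval: {write_latency:.0f} ms")  # 15 ms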

    Read the article

  • SQL Server 2014 – delayed transaction durability

    - by Michael Zilberstein
    As I’m downloading SQL Server 2014 CTP2 at this very moment, I’ve noticed new fascinating feature that hadn’t been announced in CTP1 : delayed transaction durability . It means that if your system is heavy on writes and on another hand you can tolerate data loss on some rare occasions – you can consider declaring transaction as DELAYED_DURABILITY = ON . In this case transaction would be committed when log is written to some buffer in memory – not to disk as usual. This way transactions can become...(read more)

    Read the article

  • EPM Planning 11.1.2 - MassGridStatistics

    - by Keith Rosenthal
    A utility is available for Oracle Hyperion Planning that determines web form load times. This utility, MassGridStatistics, opens all web forms within the Planning application. After the forms are opened, an html page will appear showing the form options, suppression, number of row, column and page members, and load times. Any form having a load time longer than one second could potentially have scalability issues in a multi-user environment and should be considered for re-design. Adding suppression (especially block suppression) and reducing the number of rows and columns are potential fixes that will reduce load times. The MassGridStatistics utility is located in a .7z file called MassGridStatistics.7z. Extract the file using 7-Zip. A readme file is provided listing the installation instructions and the steps to run the utility. MassGridStatistics is included with the 11.1.2.1.101 patch set and will also be in all future releases starting with 11.1.2.2. For earlier Planning releases, an SR will be necessary to have Support provide the utility.

    Read the article

  • Timing Calculations for Opengl ES 2.0 draw calls

    - by Arun AC
    I am drawing a cube in OpenGL ES 2.0 on Linux. I am calculating the time taken for each frame using the function below:

        #define NANO 1000000000
        #define NANO_TO_MICRO(x) ((x)/1000)

        uint64_t getTick()
        {
            struct timespec stCT;
            clock_gettime(CLOCK_MONOTONIC, &stCT);
            uint64_t iCurrTimeNano = (1000000000 * stCT.tv_sec + stCT.tv_nsec); // in nanoseconds
            uint64_t iCurrTimeMicro = NANO_TO_MICRO(iCurrTimeNano); // in microseconds
            return iCurrTimeMicro;
        }

    I am running my code for 100 frames with a simple x-axis rotation. I am getting around 200 to 220 microseconds per frame. Does that mean I am getting around (1 / 220 microseconds = 4545) FPS? Is my GPU that fast? I strongly doubt this result. What went wrong in the code? Regards, Arun AC

    Read the article

  • When to start thinking about scalability?

    - by Rits
    I'm having a funny but also terrible problem. I'm about to launch a new (iPhone) app. It's a turn-based multiplayer game running on my own custom backend. But I'm afraid to launch. For some reason, I think it might become something big and that its popularity will kill my poor lonely single server + MySQL database. On one hand I'm thinking that if it's growing, I'd better be prepared and have a scalable infrastructure already in place. On the other hand I just feel like getting it out into the world and see what happens. I often read stuff like "premature optimization is the root of all evil" or people saying that you should just build your killer game now, with the tools at hand, and worry about other stuff like scalability later. I'd love to hear some opinions on this from experts or people with experience with this. Thanks!

    Read the article

  • Installed Ubuntu In a low Specs PC and it is too slow, even with LXDE

    - by Herudae
    I'm new to Linux and started with Ubuntu 11.10. I installed it on my PC (Core2Duo 2GHz, 512MB DDR2 RAM, video integrated in the motherboard). I know the requirements for Unity are 1GB of RAM, so I decided to download a more lightweight desktop environment and installed LXDE. It loads very fast compared to the 3.5 min from login screen to open desktop in Unity, but it freezes every time I open a single program. I can't even browse the Internet; it freezes, sometimes for a couple of minutes, and the graph at the bottom right is all green as if it were using 100% CPU. It happens with every program. As additional data, it takes 3+ min to get from the boot selection screen to the login screen and 3.5 min more to get into Ubuntu with Unity; with LXDE that drops to approximately 30 seconds. Is Ubuntu + the LXDE desktop environment package = Lubuntu, or should I download Lubuntu directly instead? I installed some other desktop environments, such as GNOME, but it doesn't log in; the screen just turns grey. Should I get an older Ubuntu version? I'm thinking about uninstalling Ubuntu, but I'll try to exhaust my options first. Thanks for your support.

    Read the article

  • High Usage of RAM by wxPython's GUI and need some advice to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs; 4 of them are just richTextCtrl boxes and the other one has controls for uploading files, buttons, textctrls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good amount of sizers. So, now the problem is that the GUI starts off with an initial memory footprint of around 40MB, which is too much for such a simple application (or so I think). Also, the functions handling the data use huge lists, as the program is for debugging large data logs and identifying the problems in them, which means I can't afford to spend memory on the GUI. So, how can I reduce that starting memory size? Is it a general issue in wxPython? I'm currently trying to use profilers but am not sure if that's going to help.
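
    As a starting point that is independent of wxPython itself, the standard library's tracemalloc can show which allocation sites are holding the most memory, which helps separate the GUI's fixed overhead from the log-parsing lists that actually grow. A minimal sketch (the list below just stands in for the real log data):

        import tracemalloc

        tracemalloc.start()

        logs = [("line %d" % i) * 10 for i in range(100_000)]   # stand-in for the huge debug lists

        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics("lineno")[:5]:
            print(stat)   # top 5 allocation sites by size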

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved for each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"
        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr
        try{
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch{
            # Show error
            $error[0] | format-list -Force
        }
        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me. I have added a few parameters to control the output and give me more specific details, but then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to see which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ. Each line below gives the sp_who2 column, the matching EnumProcesses column, and a description:
    - (none) / Urn: what looks like an XML or JSON representation of the server name and the process ID
    - SPID / Spid: the process ID
    - Status / Status: the status of the process
    - Login / Login: the login name of the user executing the command
    - HostName / Host: the name of the computer where the process originated
    - BlkBy / BlockingSpid: the SPID of a process that is blocking this one
    - DBName / Database: the database that this process is connected to
    - Command / Command: the type of command that is executing
    - CPUTime / Cpu: the CPU activity related to this process
    - DiskIO / (none): the Disk IO activity related to this process
    - LastBatch / (none): the time the last batch was executed from this process
    - ProgramName / Program: the application that is facilitating the process connection to the SQL Server
    - SPID1 / (none): in my experience this is always the same value as SPID
    - REQUESTID / (none): in my experience this is always 0
    - (none) / Name: in my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
    - (none) / MemUsage: an indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb…)
    - (none) / IsSystem: True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
    - (none) / ExecutionContextID: in my experience this is always 0, so could be analogous to REQUESTID from sp_who2
    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common then I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns Disk IO and LastBatch information which is really useful, but the SMO processes method gives you IsSystem and MemUsage which have their place in fault diagnosis methods too. So which is better?
On reflection I think I prefer to use the sp_who2 method primarily but knowing that the SMO Enumprocesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.

    Read the article

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest that you are drawing frames faster than the monitor can refresh? So then you don't actually get a higher frame rate. So what is the benefit of triple buffering then?

    Read the article

  • Java Alphabetize Algorithm Insertion sort vs Bubble Sort

    - by Chris Okyen
    I am supposed to "Develop a program that alphabetizes three strings. The program should allow the user to enter the three strings, and then display the strings in alphabetical order." It's instructed that I need to use the String library compareTo()/charAt()/toLowerCase() to make all the characters lowercase so the Lexicon comparison is also a alphabetical comparison. Input Pseudo Code: String input[3]; Scanner keyboard = new Scanner(System.in); System.out.println("Enter three strings: "); for(byte i = 0; i < 3; i++) input[i] = keyboard.next() The sorting would be Insertion Sort: 321 2 3 1 2 31 231 1 23 1 2 3 1 23 1 23 123 Bubble Sort 321 231 213 123 Which would be more efficient in this case? The bubble sort seems to be more efficient though they seem to have equal stats for worst best and avg case, but I read the Insertion Sort is quicker for small amounts of data like my case.

    Read the article
