Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 73 of 527

  • Reminder: Totally Awesome and Totally Free SQL Server Training

    - by KKline
    One of the things that I enjoy about working for Quest Software is that we give back copiously to the community: from activities and offerings like SQLServerPedia to our free posters mailed anywhere in North America (and free hi-res PDFs for the rest of the world). Don't forget that free DVDs of our virtual conferences featuring me, along with Buck Woody ( blog | twitter ) and Brent Ozar ( blog | twitter ), will be mailed anywhere in North America free of charge, now available...(read more)

    Read the article

  • Writing low latency Java

    - by user997112
    Are there any Java-specific techniques (things which wouldn't apply to C++) for writing low-latency code in Java? I often see low-latency Java roles that ask for experience writing low-latency Java, which sometimes seems a little bit of an oxymoron. The only thing I could think of is experience with JNI, outsourcing I/O calls to native code. Possibly also using the Disruptor pattern, but that's not an actual technology. Are there any Java-specific tips for writing low-latency code? I am aware there is a Real-Time Java spec, but I have been warned that real-time is not the same as low latency....

    Read the article

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory, but there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big, and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress, and Far Cry.
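
    A minimal sketch of the usual approach (an illustration only, not taken from any of the games named above): split the world into fixed-size chunks and keep resident only the chunks within some radius of the player. When the player crosses a chunk boundary, chunks that fell out of range are unloaded and newly needed ones are streamed in, usually on a background thread so the frame rate stays smooth. All names and sizes below are invented for the example.

        #include <cmath>
        #include <cstdlib>
        #include <map>
        #include <utility>

        struct Chunk { /* terrain, objects, ... loaded from disk */ };

        class ChunkStreamer
        {
        public:
            ChunkStreamer(float chunkSize, int radiusInChunks)
                : m_chunkSize(chunkSize), m_radius(radiusInChunks) {}

            // Call whenever the player moves; keeps only nearby chunks resident.
            void Update(float playerX, float playerZ)
            {
                int cx = (int)std::floor(playerX / m_chunkSize);
                int cz = (int)std::floor(playerZ / m_chunkSize);

                // Load any missing chunk inside the radius (real engines do this on a worker thread).
                for (int x = cx - m_radius; x <= cx + m_radius; ++x)
                    for (int z = cz - m_radius; z <= cz + m_radius; ++z)
                        if (m_loaded.find({x, z}) == m_loaded.end())
                            m_loaded[{x, z}] = LoadChunkFromDisk(x, z);

                // Unload chunks that fell outside the radius.
                for (auto it = m_loaded.begin(); it != m_loaded.end(); )
                {
                    if (std::abs(it->first.first - cx) > m_radius ||
                        std::abs(it->first.second - cz) > m_radius)
                        it = m_loaded.erase(it);
                    else
                        ++it;
                }
            }

        private:
            static Chunk LoadChunkFromDisk(int, int) { return Chunk{}; } // placeholder for real I/O

            float m_chunkSize;
            int m_radius;
            std::map<std::pair<int, int>, Chunk> m_loaded;
        };

    Dynamic changes (destroyed objects, dropped items) are typically persisted per chunk when it is unloaded and re-applied when the chunk is streamed back in.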

    Read the article

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression / caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to document http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).
    2. On the Oracle iPlanet Web Server machine, open the Administrator's Configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf.)
    3. Add the following lines to the obj.conf file inside <Object name="default"> . </Object> and restart the Oracle iPlanet Web Server machine:

        #HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>

        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>

        #HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"

    Read the article

  • Upgrading/Installing Demantra 7.3.1.1? Check this out!

    - by user702295
    Here is a summary for release 7.3.1.1 install/upgrade/features:
    - Data Preservation Setting for General Levels
    - Deploying Demantra Application Server 10g
    - Important Upgrade Information
    - Known Upgrade Issues
    - Mozilla Firefox Browser
    - Installer Issues
    - Reviewing / Simulating General Level Data Such as CTO Base Model Demand
    - Failure Rate Calculation
    - Demantra SSL Client Authentication and Java 6
    - CTO functionality does not work in release 7.3.1.1 after upgrading from 7.3.0 using the 'Platform Upgrade Only' option
    - User Privileges and Export Worksheet to Excel
    - Cookie Attribute Causes Logging Issue in Worksheet
    - List of bugs fixed in 7.3.1.1
    See the following for details:
    Demantra 7.3.1.1 Install / Upgrade Known Issues, Notes, Guidance, Defects, Workarounds (Doc ID 1370518.1)
    Related Documents For Demantra Version 7.3.1.1 And If Demantra Supports The Required Stacks (Doc ID 1367141.1)

    Read the article

  • Why does my system use so much cache?

    - by Dave M G
    Previously, on my desktop computer running Ubuntu 14.04, I had 4GB of RAM, which I thought should be plenty. However, after being on for a while, my computer would seem to get slow. I have a system resource monitor app in my Gnome panel, which I assume represents the available RAM (?). It shows a dark green area as "Memory" and a light green area as "Cache". The "Cache" would slowly grow until it filled the whole graph, and then programs would get slow to load, or it would take a while to switch programs. I could alleviate the problem somewhat with this command, but eventually the cache fills up again, so it's only a band-aid: sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches" So I figured I'd get more RAM, and replaced one 2GB stick with an 8GB stick; now I have 10GB of RAM. My "cache" still slowly maxes out and my computer slows as a result. Also, sometimes the computer starts out with the "cache" maxed when I first boot and log in. Not always, though; I don't know if there's a pattern that determines why it happens. Why is Ubuntu using up so much cache? Is 10GB not enough for Ubuntu? Here's what my system monitor looks like in my Gnome panel: the middle square shows RAM usage, and the light green area is the "cache". This is my memory and swap history, which doesn't seem to include any information about "cache". I realize at this point I'm not totally clear on the difference between "cache" and "swap".

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view. While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
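
    A minimal in-memory stand-in for the pending-events idea described above (illustrative only; in a real system this would be one narrow table keyed by (user_id, event_name) with rows deleted on completion, so it only ever holds work that is still outstanding):

        #include <string>
        #include <unordered_map>
        #include <unordered_set>
        #include <vector>

        class PendingUserEvents
        {
        public:
            // Called when a new one-time tutorial/feature is rolled out to a user.
            void Push(long userId, const std::string& eventName)
            {
                m_pending[userId].insert(eventName);
            }

            // Cheap per-page-view check: which tutorials (if any) should this user still see?
            std::vector<std::string> PendingFor(long userId) const
            {
                auto it = m_pending.find(userId);
                if (it == m_pending.end()) return {};
                return std::vector<std::string>(it->second.begin(), it->second.end());
            }

            // Called when the user finishes or dismisses the tutorial; the record disappears.
            void Complete(long userId, const std::string& eventName)
            {
                auto it = m_pending.find(userId);
                if (it == m_pending.end()) return;
                it->second.erase(eventName);
                if (it->second.empty()) m_pending.erase(it);
            }

        private:
            std::unordered_map<long, std::unordered_set<std::string>> m_pending;
        };

    Because completed entries are removed rather than flagged, the common case ("no pending events") is a single miss on an indexed lookup, and old tutorials never accumulate per-user state.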

    Read the article

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that presents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and it returned a specific object, you wouldn't have to re-find it later on, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
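
    A small sketch of the ID-plus-label approach hinted at in the question (the types are invented and not tied to any particular UI framework): the controller hands the view lightweight id/label pairs, the view gives back only the chosen id, and the controller resolves that id with a hash lookup instead of a linear search, so nothing is "re-found" in any expensive sense and the view never touches domain fields.

        #include <cstdio>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Hamburger { int id; std::string name; bool hasTomato; };   // domain object

        struct ComboItem { int id; std::string label; };                  // all the view sees

        std::vector<ComboItem> BuildComboItems(const std::vector<Hamburger>& burgers)
        {
            std::vector<ComboItem> items;
            for (const auto& b : burgers)
                items.push_back({ b.id, b.name });        // controller flattens for the UI
            return items;
        }

        int main()
        {
            std::vector<Hamburger> burgers = { { 1, "Classic", true }, { 2, "BBQ", true } };

            // Controller keeps an index, so the selection comes back in O(1).
            std::unordered_map<int, const Hamburger*> byId;
            for (const auto& b : burgers) byId[b.id] = &b;

            int selectedId = BuildComboItems(burgers)[1].id;   // pretend the user picked this one
            std::printf("picked: %s\n", byId.at(selectedId)->name.c_str());
            return 0;
        }

    Passing whole domain objects to the view also works, but couples the view to the domain type; the id lookup keeps the layers separate at the cost of one hash probe, which is negligible.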

    Read the article

  • sys.dm_exec_query_profiles – FAQ

    - by Michael Zilberstein
    As you probably know, this DMV is new in SQL Server 2014. It had been first announced in CTP1 but only in BOL . Now in CTP2 everyone can “play” with it. Since BOL is a little bit unclear (understatement detected), I’ve prepared this small FAQ as a result of discussion with Adam Machanic ( blog | twitter ) and Matan Yungman ( blog | twitter ). Q: What did you expect from sys.dm_exec_query_profiles? A: Expectations were very high – it promised, for the first time, ability to see _actual_ execution...(read more)

    Read the article

  • How can I strip down Ubuntu?

    - by Thomas Owens
    I'm trying to fix what I consider a bloated install of Ubuntu. When I install Ubuntu on a machine, I get things that I don't want - web browsers, office applications, media players, accessibility utilities, Ubuntu One, and so on. My goal is to have an install of Ubuntu that contains only the most minimal packages: the administrative tools and package manager, a GUI (my preference would be GNOME), a text editor, core drivers (video cards, network cards - wired and wireless - and input devices), and anything else I have to have to run a stable distribution. From there, I would like to pick and choose which packages I install to create my own customized system. Having played around with other distros like Arch and Slackware, I like how they provide a barebones install by default. However, I get trapped in a "configuration hell" - right now, I tried moving away from Ubuntu to Arch, but after spending 6 hours with it, I still don't have a usable system. It's half configured and I don't have any usable software packages to enable me to work. Is there anything available that can help me? Either something like the openSUSE builder that lets you choose applications and packages for the CD, an advanced installation mode where I can choose the packages to install and which to ignore, or a guide on how to strip Ubuntu down to its bare bones? And I suppose a natural follow-up to this is: once I have a stripped-down Ubuntu, will this affect updating at all? When Canonical releases the next version of Ubuntu, I don't want any bloatware reinstalled. And yes, most of the applications that come with Ubuntu I simply don't use. Ever.

    Read the article

  • How to empty swap if there is free RAM?

    - by jfoucher
    When I open a RAM-intensive app (VirtualBox set at 2 GB RAM), some swap space is generally used, depending on what else I have open at the time. However, when I quit that application, the 2 GB of RAM are freed, but the same swap space usage remains. How can I tell Ubuntu to stop using that swap and to revert to using the RAM? Thank you. Edit: Right now, about 2 hours after having closed VirtualBox, I have 1.6 GB of free RAM and still 770 MB in swap.

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    An interesting little improvement on a SQL 2005 system I encountered recently. Some background: this system had a fairly 'traditional OLTP' workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours, until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used. As it was 2005, I decided to look at which tables were in cache before and after the overnight batch processing ran (using the DMV equivalent of dbcc memusage that I posted earlier). Here's what I saw: the contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run - a fairly standard batch, I'd suggest. Cache contents then looked quite different: most of the cache was now being used by a table I've described as 'unimportant'. Why? Well, that table was the last to be reindexed, purely due to luck, as the reindexing stored procedure performed a loop in alphabetical order through all application tables. When the application starts to be used again, all this 'unimportant' data has to be replaced in cache by data that is heavily used. So we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (shooter). The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones, 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for matrix conversion: (~0.005ms)

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy rotation matrix
            for ( int row=0; row<3; ++row )
                for ( int column=0; column<3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            for ( int column=0; column<3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid)
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes so long. But I have no idea how I could avoid this. Friendly greetings, Mathias
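
    One commonly suggested direction (a sketch under stated assumptions, not the question's real classes or Bullet's API): resolve the ActorId-to-body mapping once, when the animated model is registered, and store the resulting pointer with the bone. The per-tick hot path then dereferences a cached pointer instead of doing a map lookup for every bone of every model (~30 x 30 = 900 lookups per tick in the scenario above). All types below are invented for illustration.

        #include <unordered_map>

        struct Body { float transform[16]; };        // stand-in for btRigidBody

        struct BoneBinding
        {
            Body* body = nullptr;                    // resolved once, reused every tick
        };

        class PhysicsWorld
        {
        public:
            void Register(int boneActorId, Body* b) { m_bodies[boneActorId] = b; }

            // One-time resolution when the animated model is created.
            BoneBinding Bind(int boneActorId) const
            {
                BoneBinding binding;
                auto it = m_bodies.find(boneActorId);
                if (it != m_bodies.end())
                    binding.body = it->second;
                return binding;
            }

            // Hot path: no hashing, no find(), just a pointer dereference.
            static void KinematicMove(const BoneBinding& binding, const float (&m)[16])
            {
                if (!binding.body) return;
                for (int i = 0; i < 16; ++i)
                    binding.body->transform[i] = m[i];
            }

        private:
            std::unordered_map<int, Body*> m_bodies;
        };

    This keeps the matrix conversion itself unchanged; only the way the target body is located per tick changes.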

    Read the article

  • Read Committed isolation level, indexed views and locking behavior

    - by Michael Zilberstein
    From BOL, " Key-Range Locking " article: Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level . The serializable isolation level requires that any query executed during a transaction must obtain the same set of rows every time it is executed during the transaction. A key range lock protects this requirement by preventing other transactions from inserting new rows whose...(read more)

    Read the article

  • How many vertices are needed to draw reasonably good-looking terrain?

    - by bobbaluba
    I have some pretty expensive code in my terrain vertex shader, and I am trying to figure out if it will still be fast enough. I haven't yet developed a level-of-detail system for my terrain rendering, but I can easily benchmark my code by just drawing mock triangles. My problem is, how do I know how many vertices to test with? Are there for example rendering engines that will tell me how many terrain vertices are currently on-screen? Or maybe it is possible to create a formula that will give me an estimate based on screen resolution?
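
    There is no universal engine-reported number to rely on here, but a rough back-of-the-envelope estimate from screen resolution is possible; in the sketch below, the target on-screen triangle size and the assumption of a regular, vertex-sharing grid are both invented for illustration.

        #include <cstdio>

        // Rough estimate only: assume the terrain roughly tiles the screen and a LOD scheme
        // keeps triangles near a target on-screen size. In a regular grid each vertex is
        // shared by about six triangles, so vertices come out to roughly triangles / 2.
        int EstimateTerrainVertices(int screenW, int screenH, float trianglePixelArea)
        {
            float triangles = (float)(screenW * screenH) / trianglePixelArea;
            return (int)(triangles / 2.0f);
        }

        int main()
        {
            // Example: 1920x1080 with ~64 px^2 triangles (about 8x8 px each) -> roughly 16,000 vertices.
            std::printf("%d vertices\n", EstimateTerrainVertices(1920, 1080, 64.0f));
            return 0;
        }

    Benchmarking with a few multiples of that figure should cover denser LOD settings and overdraw, and gives an upper bound to test the expensive vertex shader against.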

    Read the article

  • Improve the business logic

    - by Victor
    In my application I have a feature like this: the user wants to add a new address to the database. Before adding the address, he needs to perform a search (using input parameters like country, city, street, etc.), and when the list comes up, he will manually check whether the address he wants to add is already present. If it is present, he will not add the address. Is there a way to make this process better, maybe by somehow eliminating a step or avoiding the need for manual verification?

    Read the article

  • Can an expert examine my .NET MVC 4 application? [on hold]

    - by Till Death Developer
    Problem Definition: I need an expert to examine my application, not for errors, but to have a look at how my implementation goes and tell me whether I am doing a good job or just creating a huge mess, and to please provide suggestions on how I should improve my work.
    Points of Concern:
    - Neat solution (can find the thing you are looking for easily)
    - Low redundancy
    - Efficiency (load time, speed, etc.)
    - Data access implementation
    - Authentication system implementation
    - Data services implementation
    Note: the application is just a playground for testing new implementation approaches, so it may seem meaningless, because it is; however, that's not the subject anyway. I just need to know whether I am doing things in a good way (nothing is the right way, but there is good and bad). Solution link: http://www.mediafire.com/?8s70y44w16n1uyx

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes, as the objects in question are tanks) within the explosion radius. So this boils down to circle-to-rectangle collision and how far the circle's center is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the calculated distance. Note: all tank objects have an anchor point of (0,0), so position refers to the bottom-left corner of the bounding box. The explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;
        // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Tank is either north or south and is inbetween left and right corner of rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: Smaller distance larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explsoion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if
        } // end if

    Questions:
    1) Can this circle-to-rect collision algorithm be done better? Do I have too many checks?
    2) How do I calculate the percentage-based damage?
    My current method generates negative numbers occasionally and I don't understand why (maybe I need more sleep!). But in my if statement I ask if distance < explosion radius. When control goes through, distance/radius must be < 1, right? So 1 minus that intermediate calculation should not be negative. Appreciate any help/advice!
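
    A common alternative (sketched below in plain C++ rather than Cocos2D/Objective-C; the helper names are invented) is to clamp the explosion center to the tank's axis-aligned box, which collapses the eight-region branching into two clamps and one distance calculation. On the negative damage: in the line damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage); the multiplication by explosionMaxDamage happens inside the subtraction, so the whole product is subtracted from 1; the falloff term needs its own parentheses, as in the sketch.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        // Illustrative sketch, not Cocos2D's API: clamp the explosion center to the
        // axis-aligned bounding box to get the closest point, then measure once.
        struct Vec2 { float x, y; };
        struct Box  { float x, y, w, h; };   // bottom-left corner + size, like the tanks

        float DistanceToBox(Vec2 p, Box b)
        {
            float cx = std::max(b.x, std::min(p.x, b.x + b.w));   // closest point on the box
            float cy = std::max(b.y, std::min(p.y, b.y + b.h));
            float dx = p.x - cx, dy = p.y - cy;
            return std::sqrt(dx * dx + dy * dy);
        }

        int FalloffDamage(float distance, float radius, int maxDamage)
        {
            if (distance >= radius) return 0;                     // outside the blast radius
            float falloff = 1.0f - distance / radius;             // 1 at the center, 0 at the edge
            return (int)(falloff * maxDamage);                    // (1 - d/r) * max, parenthesized
        }

        int main()
        {
            Box tank { 100.0f, 100.0f, 40.0f, 20.0f };
            Vec2 blast { 90.0f, 150.0f };
            std::printf("damage = %d\n", FalloffDamage(DistanceToBox(blast, tank), 80.0f, 50));
            return 0;
        }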

    Read the article

  • SQL Server 2014 – delayed transaction durability

    - by Michael Zilberstein
    As I'm downloading SQL Server 2014 CTP2 at this very moment, I've noticed a fascinating new feature that hadn't been announced in CTP1: delayed transaction durability. It means that if your system is heavy on writes and, on the other hand, you can tolerate data loss on some rare occasions, you can consider declaring a transaction with DELAYED_DURABILITY = ON. In this case the transaction would be committed when the log is written to a buffer in memory, not to disk as usual. This way transactions can become...(read more)

    Read the article

  • Beware of SQL Server and PerfMon differences in disk latency calculation

    - by Michael Zilberstein
    Recently the sp_blitz procedure on one of my OLTP servers returned an alarming notification about high latency on one of the disks (more than 100ms per IO). Our chief storage guy didn't understand what I was talking about: according to his measurements, average latency is only about 15ms. In order to investigate the issue, I recorded 2 snapshots of sys.dm_io_virtual_file_stats and calculated latency per read and write separately. The results appeared to be even more alarming: while for read average latency...(read more)

    Read the article

  • EPM Planning 11.1.2 - MassGridStatistics

    - by Keith Rosenthal
    A utility is available for Oracle Hyperion Planning that determines web form load times. This utility, MassGridStatistics, opens all web forms within the Planning application. After the forms are opened, an HTML page appears showing the form options, suppression, number of row, column, and page members, and load times. Any form with a load time longer than one second could potentially have scalability issues in a multi-user environment and should be considered for re-design. Adding suppression (especially block suppression) and reducing the number of rows and columns are potential fixes that will reduce load times. The MassGridStatistics utility is located in a .7z file called MassGridStatistics.7z; extract the file using 7-Zip. A readme file is provided listing the installation instructions and the steps to run the utility. MassGridStatistics is included with the 11.1.2.1.101 patch set and will also be in all future releases starting with 11.1.2.2. For earlier Planning releases, an SR will be necessary to have Support provide the utility.

    Read the article

  • Timing Calculations for Opengl ES 2.0 draw calls

    - by Arun AC
    I am drawing a cube in OpenGL ES 2.0 on Linux. I am calculating the time taken for each frame using the function below:

        #define NANO 1000000000
        #define NANO_TO_MICRO(x) ((x)/1000)

        uint64_t getTick()
        {
            struct timespec stCT;
            clock_gettime(CLOCK_MONOTONIC, &stCT);
            uint64_t iCurrTimeNano = (1000000000 * stCT.tv_sec + stCT.tv_nsec); // in Nano Secs
            uint64_t iCurrTimeMicro = NANO_TO_MICRO(iCurrTimeNano); // in Micro Secs
            return iCurrTimeMicro;
        }

    I am running my code for 100 frames with a simple x-axis rotation. I am getting around 200 to 220 microseconds per frame. Does that mean I am getting around 4,545 FPS (1 / 220 microseconds)? Is my GPU that fast? I strongly doubt this result. What went wrong in the code? Regards, Arun AC
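
    A likely explanation (an assumption about this particular case, but a very common OpenGL pitfall): glDrawArrays/glDrawElements only queue work for the GPU, so a CPU-side clock read right after the draw call measures command submission, not rendering. The sketch below forces the GPU to finish before stopping the clock and averages over many frames; the drawFrame callback is a placeholder, and a current EGL context is assumed.

        #include <GLES2/gl2.h>
        #include <time.h>
        #include <stdint.h>
        #include <stdio.h>

        // Sketch: time whole frames with the GPU pipeline drained, instead of single draw calls.
        static uint64_t nowMicros(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000ull + (uint64_t)ts.tv_nsec / 1000ull;
        }

        void measureFrames(void (*drawFrame)(void), int frames)
        {
            glFinish();                          // drain any previously queued GPU work
            uint64_t start = nowMicros();
            for (int i = 0; i < frames; ++i)
                drawFrame();                     // issue the draw calls for one frame
            glFinish();                          // wait until the GPU has really finished
            uint64_t elapsed = nowMicros() - start;
            printf("average frame time: %llu us\n",
                   (unsigned long long)(elapsed / (uint64_t)frames));
        }

    In a real application you would normally time between eglSwapBuffers calls over a few hundred frames rather than inserting glFinish, since glFinish itself perturbs the pipeline; for a one-off benchmark like this it is fine.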

    Read the article

  • Installed Ubuntu In a low Specs PC and it is too slow, even with LXDE

    - by Herudae
    I'm new to Linux and started with Ubuntu 11.10. I installed it on my PC (Core2Duo 2GHz, 512MB DDR2 RAM, video integrated on the motherboard). I know the requirement for Unity is 1GB of RAM, so I decided to download a more lightweight desktop environment and installed LXDE. It loads very fast compared to the 3.5 minutes from login screen to open desktop in Unity, but it freezes every time I open a single program. I can't even browse the Internet; it freezes, sometimes for a couple of minutes, and the graph at the bottom right is all green as if it were using 100% CPU. This happens with every program. As additional data, it takes 3+ minutes to get from the boot selection screen to the login screen, and 3.5 minutes more to get into Ubuntu with Unity; with LXDE that drops to about 30 seconds. Is Ubuntu + the LXDE desktop environment package = Lubuntu, or should I download Lubuntu directly instead? I installed some other desktop environments, such as Gnome, but it doesn't log in; the screen just turns grey. Should I get an older Ubuntu version? I'm thinking about uninstalling Ubuntu, but I'll try to exhaust the options first. Thanks for your support.

    Read the article
