Search Results

Search found 59272 results on 2371 pages for 'time stamp'.


  • Profiling Startup Of VS2012 – YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ for a single-seat license, 409€ with 1 year of support and upgrades) and feature set. You get a performance and memory profiler in one package, something the other vendors normally charge extra for. As an interesting side note, the profiler UI is written in Java because they also sell Java profilers with the same feature set. To capture all methods of a VS startup you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU time, which lets you optimize CPU-hungry operations, but in this mode you will never see a Thread.Sleep(10000) blocking the UI. Like the other profilers it can profile processes started from within the profiler, but it can also profile the next started process or all started processes. As usual it can profile in sampling and tracing mode. Since it is also a memory profiler, it records all object allocations > 1MB by default. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems. The CPU tab contains the timeline of the application; when you click in the graph you see the call stacks of all threads at that time, which is a really nice feature. When you select a time region you get the CPU usage estimation for that window. I have seen many applications consume 100% CPU only because they were creating garbage like crazy. For this the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab except that the CPU graph (green) is missing; all relevant information except GCs/s is already visible in the CPU tab. Very handy for pinpointing excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetimes, which is useful for seeing thread interactions or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get, below it, a list of all called methods, which you can sort by high own time to find methods worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods that called you. Callees is the flat list of methods called directly or indirectly by your method. This is not a call stack, but it is still very useful for seeing which methods were slow, so you can find the “root” cause quickly without clicking through long call stacks. The last view, Merged Callees, is a call-stacked view of the previous one. It helps a lot in understanding what called each method at run time. You would get the same view with a debugger for a single invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatever) to get a better understanding of which objects were potentially allocating this stuff. YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single-license price makes it very attractive to buy outright. Although the UI is Java, it is very responsive, and the memory consumption is considerably lower compared to dotTrace and ANTS profiler.
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to inject a profiler into newly started processes. For performance and memory profiling you simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparation. Next: Telerik JustTrace
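    The wall-clock versus CPU-time distinction above is easy to demonstrate outside any particular profiler. A minimal Python sketch (not tied to YourKit) showing how a blocking sleep dominates wall-clock time while being invisible to CPU-time measurement:

        import time

        def blocking_operation():
            time.sleep(2)          # blocks the thread but burns no CPU
            sum(range(1_000_000))  # a small amount of actual CPU work

        wall_start = time.perf_counter()  # wall-clock timer
        cpu_start = time.process_time()   # CPU-time timer (sleep is not counted)

        blocking_operation()

        print(f"wall clock: {time.perf_counter() - wall_start:.2f} s")  # ~2 s
        print(f"cpu time:   {time.process_time() - cpu_start:.2f} s")   # only the sum() work

    A profiler that records only CPU time reports something close to the second number, which is why the Thread.Sleep(10000) scenario described above never shows up unless wall-clock recording is switched on.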

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”, but instead of simply removing the time (truncating the decimal part of the number used internally to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
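    The effect is easiest to see with the serial-number representation the post alludes to: a datetime is stored as days since an epoch, with the time of day in the fractional part. A small sketch (plain Python, not PowerPivot/DAX, and using the common Excel-style epoch as an assumption) contrasting truncation with rounding of that serial value:

        from datetime import datetime, timedelta

        EPOCH = datetime(1899, 12, 30)   # serial-date epoch used by Excel-style systems (assumption)

        def to_serial(dt):
            delta = dt - EPOCH
            return delta.days + delta.seconds / 86400.0

        stamp = datetime(2012, 6, 15, 18, 30)    # 6:30 PM
        serial = to_serial(stamp)                # ~41075.77

        truncated = int(serial)                  # 41075 -> still June 15
        rounded = round(serial)                  # 41076 -> slips to June 16

        print(EPOCH + timedelta(days=truncated)) # 2012-06-15: the key you would expect
        print(EPOCH + timedelta(days=rounded))   # 2012-06-16: the "next day" behaviour described above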

    Read the article

  • Upcoming EBS Webcasts for June, July, August 2012

    - by user793553
    See the following upcoming webcasts for June, July and August 2012. Flag Doc ID 740966.1 as a favourite, to keep up to date with latest advisor schedule. Additionally, see Doc ID 740964.1 for access to all archived advisor webcasts.

    Oracle E-Business Suite: none at this time.
    EBS Agile: none at this time.
    EBS Applications Technologies Group (ATG): EBS – OAM Tuning and Monitoring EMEA (July 10, 2012); EBS – OAM Tuning and Monitoring US (July 11, 2012); Workflow Analyzer Followup EMEA (July 24, 2012); Workflow Analyzer Followup US (July 25, 2012).
    EBS CRM & Industries: none at this time.
    EBS Financials: EBS Fixed Assets: Achieve Success Using Proactive Tools For Fixed Assets Support (July 10, 2012); Overview and Flow of Oracle Project Resource Management (July 17, 2012); Leveraging My Oracle Support To Increase Knowledge (July 30, 2012).
    EBS HCM (HRMS): Oracle Time and Labor (OTL) Rollback Functionality Session 1 (July 25, 2012); Oracle Time and Labor (OTL) Rollback Functionality Session 2 (July 25, 2012).
    EBS Manufacturing: Using Personalization in Oracle eAM (June 21, 2012); OM Guided Resolutions - Finding Known Resolutions Easily (July 17, 2012); Material Move Orders Flow (July 25, 2012); Diagnosing Signal 11 Issues In ASCP Planning (August 9, 2012); Interface Trip Stop - Best Practices and Debugging (August 21, 2012).
    EBS Procurement: Punchout in iProcurement (June 26, 2012).

    Read the article

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics: I joined Oracle on 5th January 2011 as part of my internship program at BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work at Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, when you know almost nothing, they helped me gain knowledge at a rapid pace. Thanks to the vast database of study material on the Oracle site, I could start on my project in a very short time. For me, the time flew like anything and made the six months look like a few days. It was probably due to the team that the work was so much fun. We had our deadlines but had full freedom as to how to work and when to work. I don't remember a single instance in which I was working and not listening to songs. It will always be a time to remember. I hope to join this company and make this time last forever. Samarth

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5GB RAM, 2.4GHz CPU). ntpd is running (started at run level 2) and servers are defined:

        server 1.us.pool.ntp.org
        server 2.us.pool.ntp.org
        server 3.us.pool.ntp.org
        server time.nrc.ca
        server ntp1.cmc.ec.gc.ca
        server ntp2.cmc.ec.gc.ca
        server wuarchive.wustl.edu
        server clock.psu.edu

    Looking at the log file (below), it would seem that the ntp daemon started OK and is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. So I am totally flummoxed right now :-( Here's a copy of my ntp.log file.
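    Independently of whatever ntpd is doing, it can help to measure the offset directly. A rough Python SNTP sketch against one of the pool servers listed above (assumes outbound UDP port 123 is reachable; 2208988800 converts from the NTP epoch of 1900 to the Unix epoch of 1970):

        import socket
        import struct
        import time

        NTP_SERVER = "1.us.pool.ntp.org"
        NTP_TO_UNIX = 2208988800          # seconds between 1900-01-01 and 1970-01-01

        def ntp_time(server=NTP_SERVER):
            packet = b'\x1b' + 47 * b'\0'          # SNTP version 3 client request
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.settimeout(5)
                sock.sendto(packet, (server, 123))
                data, _ = sock.recvfrom(512)
            transmit = struct.unpack("!I", data[40:44])[0]   # server transmit timestamp (seconds)
            return transmit - NTP_TO_UNIX

        offset = ntp_time() - time.time()
        print(f"local clock is off by roughly {offset:+.1f} seconds")

    If the offset keeps growing between runs while ntpd is supposedly active, the daemon is not actually disciplining the clock; a blocked UDP port 123 or a stale configuration are plausible things to rule out.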

    Read the article

  • Why do we keep using CSV?

    - by Stephen
    Why do we keep using CSV? I recently made a shift to working in the health domain, and despite the wonderful work on data transfer standards, all data transfer is in CSV, both for reporting to external organisations and for data migrations when implementing new systems. Unfortunately the use of CSV is the cause of endless repetition of the same stupid errors, with the same waste of developer time (bad escaping, failing to handle null fields, etc.). I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQL Server 2005 instance to another!) I feel as if each time I see this happening I am literally watching one developer waste another's time. So why do we keep shafting each other? When will we stop?
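    For what it's worth, most of the "same stupid errors" named above (bad escaping, mangled nulls) come from hand-rolled split/join code rather than from CSV itself. A short Python sketch of the difference, using the standard csv module (the column names are made up for illustration):

        import csv
        import io

        rows = [
            {"patient_id": "123", "notes": 'Allergic to "penicillin", see chart', "discharged": None},
        ]

        # Naive join: the embedded quotes and comma corrupt the record, and None becomes the string "None".
        naive = ",".join(str(v) for v in rows[0].values())
        print(naive)

        # csv module: quoting and escaping are handled, and None is written as an empty field.
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["patient_id", "notes", "discharged"])
        writer.writeheader()
        writer.writerows(rows)
        print(buf.getvalue())

        # Reading back with the same module keeps the parsing rules symmetric with the writing rules.
        for record in csv.DictReader(io.StringIO(buf.getvalue())):
            print(record)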

    Read the article

  • Problems building clamcour

    - by user10778
    Hello people, when I try to install clamcour from the terminal, it gives me this error. Can somebody help me?

        calmcourdir# ./configure
        checking for libraries containing socket functions... -lc
        checking for socket... yes
        checking for bind... yes
        checking for listen... yes
        checking for accept... yes
        checking for shutdown... yes
        checking for socklen_t... yes
        checking for struct sockaddr_un.sun_len... no
        System log functions
        checking syslog.h usability... yes
        checking syslog.h presence... yes
        checking for syslog.h... yes
        checking for openlog... yes
        checking for syslog... yes
        checking for closelog... yes
        Time functions
        checking whether time.h and sys/time.h may both be included... yes
        checking whether struct tm is in sys/time.h or time.h... time.h
        checking for localtime_r... yes
        checking for strftime... yes
        checking for unistd.h... (cached) yes
        checking limits.h usability... yes
        checking limits.h presence... yes
        checking for limits.h... yes
        checking for sysconf... yes
        BZip2 support
        checking bzlib.h usability... no
        checking bzlib.h presence... no
        checking for bzlib.h... no
        checking for BZ2_bzWriteOpen in -lbz2... no
        GZip support
        checking zlib.h usability... no
        checking zlib.h presence... no
        checking for zlib.h... no
        checking for gzopen in -lz... no
        LibClamAV support
        checking for /usr/bin/clamav-config... no
        checking for /usr/local/bin/courier-config... no
        checking for /usr/clamav/bin/clamav-config... no
        checking for /usr/local/clamav/bin/clamav-config... no
        ./configure: line 25234: : command not found
        configure: error: Cannot find clamav-config

    Read the article

  • Unity3D - Android pause screen - double click issue

    - by user3666251
    I made a pause script for the game I'm developing for Android. I added the script to the GUITexture I created and placed on the top right side of the screen. The issue is that if the player clicks the pause button, then clicks resume, then wants to pause the game again, the buttons don't show up the second time he clicks pause unless he clicks once more. This is the script:

        #pragma strict

        var paused = false;
        var isButtonVisible : boolean = true;

        function OnMouseDown(){
            this.paused = !this.paused;
            Time.timeScale = 0;
            isButtonVisible = true;
        }

        function OnGUI(){
            if ( isButtonVisible ) {
                if(this.paused){
                    if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2+3,200,50),"Restart")){
                        Application.LoadLevel(Application.loadedLevel);
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2-50,200,50),"Resume")){
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    // Insert the rest of the pause menu logic
                    if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2+56,200,50),"Main Menu")){
                        Application.LoadLevel ("MainMenu");
                        isButtonVisible = false;
                        Time.timeScale = 1;
                    }
                }
            }
        }

    Thank you.

    Read the article

  • PCLinuxOS 2010 (KDE) Review

    Desktop Linux Reviews: "The last time I looked at PCLinuxOS was back in 2009 when I was working full-time for ExtremeTech. There's a new release out and it's a good time for a review of it here on DLR."

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming (two programmers working together at a single computer, rather than separately). He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits. The author provides a more scientific analysis than Jon Evans’ Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author’s data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment. There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective; for example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn’t distinguish between “expert” and “novice” pair programmers; that is, independently of other programming skills, how skilled was an individual at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don’t pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge of a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate “bug reduction”, then in removing the likelihood of single points of failure, and in improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked on the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as “phenomenal”. There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people’s data, I wonder if developers spend enough time crunching data about themselves. Capers Jones’ data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, that are considering pair programming. It certainly shouldn’t discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |  108K |  939 |
        |   1 | SORT ORDER BY          |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER     |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER  |                        |  103K |  762 |
        |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER  |                        | 73472 |  759 |
        |  21 | VIEW                   | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER  |                        | 39920 |  755 |
        |  35 | VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 | VIEW                   | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                   | ALL_PART_TABLES        |   277 |   11 |
        -------------------------------------------------------------------------------

    And the same query on 9i:

        -------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |   16P |  55G |
        |   1 | SORT ORDER BY          |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER     |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
        |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                   | ALL_PART_TABLES        |  852K |      |
        -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY          |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER        |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER        |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER        |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER        |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
        |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                   | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                   | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                   | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be made for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
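    A sketch (in Python, purely to illustrate the algorithmic idea, not the actual SCfO implementation) of the "hash the shared join key once, probe for every subsidiary table" approach described above: the base rows are indexed on (owner, name) a single time, and every subsidiary result set is then left-joined against that same hash table:

        def client_side_left_join(base_rows, subsidiary_tables, key=("owner", "name")):
            """Left-join several subsidiary result sets onto a base result set,
            hashing the shared join key only once."""
            index = {tuple(row[k] for k in key): row for row in base_rows}

            for table_name, rows in subsidiary_tables.items():
                for row in rows:
                    match = index.get(tuple(row[k] for k in key))
                    if match is not None:                      # LEFT JOIN: unmatched base rows stay as-is
                        for col, value in row.items():
                            if col not in key:
                                match[f"{table_name}.{col}"] = value
            return list(index.values())

        # Hypothetical rows shaped roughly like the dictionary views mentioned above.
        tables = [{"owner": "HR", "name": "EMPLOYEES"}, {"owner": "HR", "name": "JOBS"}]
        part_tables = {"all_part_tables": [{"owner": "HR", "name": "EMPLOYEES", "partitioning_type": "RANGE"}]}
        print(client_side_left_join(tables, part_tables))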

    Read the article

  • How do I maintain a really poorly written code base?

    - by onlineapplab.com
    Recently I got hired to work on an existing web application. Because of an NDA I'm not at liberty to disclose any details, but this application is running online in a sort of beta-testing stage before its official launch. We have a few hundred users right now, but this number is supposed to increase significantly after the official launch. The application is written in PHP (though that is irrelevant to my question) and is running on a dual-Xeon standalone server with severe performance problems. I have seen a lot of bad PHP code, but this really sets new standards, especially knowing how much time and money was invested in developing it:

    - It is as badly coded as possible: PHP, HTML and SQL are mixed together, and code is repeated wherever it is necessary (especially SQL queries).
    - There are no functions used, not to mention any OOP.
    - There are four versions of the app (desktop, iPhone, Android + other mobile). Each version has pretty much the same functionality but was created by copying the whole code base, so now there are differences between the versions and it is really hard to maintain.
    - The database is really badly designed, which is causing severe performance problems.
    - To fix some errors in the PHP code, a lot of database triggers are used which update data on SELECT and on INSERT, so any testing is a nightmare.

    Basically, any sin of bad programming you can imagine is there. For example, it is not only possible to use SQL injection in literally every place, but you can log into the app with a login that doesn't exist and an empty password. The team which created this app is not working on it any more, and there is an outsourced team which suggested that there are some problems but was never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending the existing functionality, but the first thing I was asked to do was a review of the existing code base. I made my review and it was quite a shock for the management, but my conclusions were after some time finally confirmed by the other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality. My question is, as I don't want just to patch the existing code, how do I transform this into a properly written app while keeping the existing code working at the same time? My plan is:

    1. Unify the four existing versions into a common code base (fixing only the most obvious errors).
    2. Redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time).
    3. Write all new functionality as a separate project.
    4. Step by step, transfer existing functionality into the new project.
    5. After some time everything will be in the new project.

    Some explanation about #2: right now it is practically impossible to make any updates in the existing db; any change requires reviewing the whole code and making changes in many places. Is such a plan feasible at all? Another solution is to walk away and leave the headache to someone else.

    Read the article

  • excel 2010 search function?

    - by Tom
    Can a cell range A1:A200 be searched for a "name" and then, once found, the cell location input into a formula? Such as: find "tom" in (A1:A200), [found location at cell A22], then IF(A22): =IF(MINUTE(Auto_Agent!G27)+(SECOND(Auto_Agent!G27))=0,"",(MINUTE(Auto_Agent!G27)*60+(SECOND(Auto_Agent!G27)))). The problem I'm having is that each time I import data, names can be in different cell locations depending on who is working that day. Example: Agent: Tom 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 Avg Skillset Talk Time: 00:06:52 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 () 9/19/2012 Avg Skillset Talk Time: 00:06:52 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 Agent: Bill 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 Avg Skillset Talk Time: 00:06:52 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 () 9/19/2012 Avg Skillset Talk Time: 00:06:52 07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05

    Read the article

  • How can I improve my Animation

    - by sharethis
    The first approaches to animation in my game relied mostly on sine and cosine functions with the time as parameter. Here is an example of a very basic jump I implemented:

        if(jumping) {
            height = sin(time);
            if(height < 0)
                jumping = false; // player landed
            player.position.z = height;
        }

        if(keydown(SPACE) && !jumping) {
            jumping = true;
            time = now(); // store the starting time
        }

    So my player jumped in a perfect sine arc. That seems quite natural, because he slows down when he reaches the top position, and in the fall he speeds up again. But patching every animation together out of sine and cosine soon reaches its limits. So how can I improve my animation and provide a more abstract layer?
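    One common abstraction for this (sketched in Python rather than the game's own language, and only as one possible design) is to separate an animation into a normalized progress value and a pluggable easing curve, so the sine jump above becomes one easing function among many instead of a hand-written special case:

        import math

        # Easing functions map normalized progress t in [0, 1] to a curve value.
        def ease_sine_arc(t):      # 0 -> peak at t = 0.5 -> 0, i.e. the jump described above
            return math.sin(t * math.pi)

        def ease_out_quad(t):      # fast start, slow finish
            return 1 - (1 - t) ** 2

        class Animation:
            def __init__(self, duration, easing, on_update):
                self.duration, self.easing, self.on_update = duration, easing, on_update
                self.elapsed = 0.0
                self.done = False

            def update(self, dt):
                if self.done:
                    return
                self.elapsed += dt
                t = min(self.elapsed / self.duration, 1.0)
                self.on_update(self.easing(t))       # hand the eased value to whoever moves the sprite
                self.done = t >= 1.0

        # A jump of height 2.0 over 0.8 seconds, driven from the game loop with a frame delta.
        jump = Animation(0.8, ease_sine_arc, lambda v: print(f"height = {2.0 * v:.2f}"))
        for _ in range(8):
            jump.update(0.1)

    The game loop only ever calls update(dt); swapping ease_sine_arc for ease_out_quad (or a keyframed curve) changes the feel of the motion without touching the jump logic itself.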

    Read the article

  • Microsoft Azure Storage Queues Part 1: Getting Started

    Microsoft Azure Queues are a ready-to-use service that loosely connects components or applications through the cloud. This article is the first part in a five-part series about Microsoft Azure Cloud Services by Roman Schacherl.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory:

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are capacity and performance (access time, memory cycle time, transfer rate). The trade-off for memory happens along three axes:

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs:

    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways:

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures:

    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses – Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory. The disadvantage of the former is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size – We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function – Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:

    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – allows each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach, read the text book (pages 148–154).

    Replacement Algorithm – For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:

    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy – When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, it can simply be discarded; if a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including:

    - Write Through – all writes to the cache are done to main memory as well, at the point of the change
    - Write Back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate for this, but I have not summarized them.

    Line Size – When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
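    As a concrete illustration of the mapping function and replacement algorithm ideas above, here is a small Python sketch (toy sizes, not tied to any real processor) of a direct-mapped line calculation alongside an LRU replacement policy for a fully associative cache:

        from collections import OrderedDict

        BLOCK_SIZE = 16        # bytes per block (toy value)
        NUM_LINES = 8          # lines in the cache (toy value)

        def direct_mapped_line(address):
            """Direct mapping: block number modulo the number of lines."""
            block = address // BLOCK_SIZE
            return block % NUM_LINES

        class LRUCache:
            """Fully associative cache with least-recently-used replacement."""
            def __init__(self, capacity=NUM_LINES):
                self.capacity = capacity
                self.lines = OrderedDict()          # block -> data, ordered by recency

            def access(self, block):
                if block in self.lines:             # hit: mark as most recently used
                    self.lines.move_to_end(block)
                    return True
                if len(self.lines) >= self.capacity:
                    self.lines.popitem(last=False)  # evict the least recently used block
                self.lines[block] = f"data for block {block}"
                return False                        # miss

        cache = LRUCache()
        for addr in [0, 16, 32, 0, 256, 0]:
            block = addr // BLOCK_SIZE
            print(addr, "-> line", direct_mapped_line(addr), "| LRU", "hit" if cache.access(block) else "miss")

    In this access pattern, addresses 0 and 256 map to the same direct-mapped line (a conflict), while the fully associative LRU cache keeps both because it has free lines; balancing exactly that trade-off is what the set-associative approach is for.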
    Pentium 4 and ARM cache organizations

    The Pentium 4 processor core consists of four major components:

    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; micro-operations may therefore be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: "Time based movement Vs Frame rate based movement?" and "Fixed time step vs Variable time step", but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php. I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel". But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame-independent movement works. Thank you.
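    A small Python sketch of the idea in question (not SDL-specific; the units carry the whole argument: velocity in pixels per second multiplied by the measured frame time in seconds, whatever that frame time happens to be):

        import time

        velocity = 200.0          # pixels per second
        position = 0.0

        previous = time.perf_counter()
        for _ in range(5):                        # stand-in for the real game loop
            time.sleep(0.016)                     # pretend a frame took roughly 16 ms
            now = time.perf_counter()
            dt = now - previous                   # seconds since the last frame
            previous = now
            position += velocity * dt             # 200 px/s * ~0.016 s ~= 3.2 px this frame
            print(f"dt = {dt * 1000:.1f} ms, position = {position:.1f} px")

    Using the lesson's own numbers: at 200 frames per second dt is about 1/200 s, so the dot moves 1 pixel per frame; at 50 frames per second dt is 1/50 s and it moves 4 pixels per frame. In both cases it covers 200 pixels over a full second, which is what makes the movement frame independent; the 1/1000 factor only appears when a timer reports milliseconds and has to be converted to seconds.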

    Read the article

  • When to update jQuery?

    - by epaulk
    When do you recommend updating jQuery/jQuery UI? Or in other words: what are the best practices for updating jQuery/jQuery UI? I'm working on a long project that will take at least one more year. In that time span, I'm sure that jQuery/jQuery UI will be updated many times. Do you recommend updating my jQuery/jQuery UI files every time an update is released? Or is it better to stick with a particular version until the end of the project? I'm afraid of "breaking" code changes, and every time an update is released, I have to test everything. That takes too much time. But on the other hand, if I don't update, I'm afraid of bugs that will later bite me in the rear. The project is ASP.NET MVC and I use jQuery a lot. Any thoughts?

    Read the article

  • touch in guiTexture

    - by Mitananda
    Before the time in my game starts, I want to show a guiTexture. If I touch this guiTexture, it should be hidden and the game time should start. I tried to make a script but it doesn't work. I'm a bit confused by the Unity documentation. Please help me, I'm a newbie in Unity. My code:

        void Start () {
            if (myGuiTexture.HitTest(Input.GetTouch(0).position)) {
                if (Input.GetTouch(0).phase == TouchPhase.Began) {
                    Time.timeScale = 1;
                    myGuiTexture.enable = false;
                }
            }
        }

    and I also created this, but it doesn't work either:

        void Start () {
            if (tutorial.HitTest(new Vector3(0,0,0))){
                Time.timeScale = 1;
            }
        }

    Read the article

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I've got a problem that I cannot solve: our fileserver runs Xubuntu and three RAID1 arrays. One of them has had a problem since Monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it back. All was fine, re-syncing started, but it never got above 0% and after a few seconds sdb was again marked as 'faulty spare' (and therefore the RAID was degraded, but clean). So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4TB), shut down the computer, replaced sdb physically, switched the computer back on and wrote the 512 bytes back to the new drive to get the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old one: I can add it, re-syncing starts, and after a few seconds it is marked as 'faulty spare'. Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:56:13 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2424

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       32        1        active sync   /dev/sdc

        mdadm --add /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:49 2013
                  State : clean, degraded, recovering
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

         Rebuild Status : 0% complete

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2431

            Number   Major   Minor   RaidDevice   State
               2       8       16        0        faulty spare rebuilding   /dev/sdb
               1       8       32        1        active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:50 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2436

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       32        1        active sync   /dev/sdc
               2       8       16        -        faulty spare   /dev/sdb

    I get the same behaviour if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds 3 more 4TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks, and all is fine. Does anybody have any idea what the problem is?

    Read the article

  • A conversation with world experts in Customer Experience Management in Rome, Italy - Wed, June 20, 2012

    - by nicolasbonnet
    It is my pleasure to share the registration link below for your chance to meet active members of the Oracle Real-Time Decisions Customer Advisory Board. Join us to hear how leading brands across the world have achieved tremendous return on investment through their Oracle Real-Time Decisions deployments, and do not miss this unique opportunity to ask them specific questions directly during our customer roundtable. Please share this information with anyone interested in real-time decision management and cross-channel predictive process optimization: http://www.oracle.com/goto/RealTimeDecisions
    Nicolas Bonnet / Senior Director Product Management / Oracle Business Intelligence

    Read the article

  • How to connect to the internet reliably

    - by user12689
    I have a DSL internet connection. For a long time the internet worked fine in Ubuntu using the default Network Manager. Recently, however, my ISP was bought by another ISP, and the service name had to be changed in the DSL settings. In Windows, it connects without any issue whatsoever. But in Ubuntu, the first time I connect I get an "Apache: Not Found" page; the next time, I get the previous ISP's login page. It continues like this for some time until I get it working, and I have to try connecting again and again. When I asked customer service, they said that because of the takeover they have switched servers, but that it should work properly in Windows, and they were clueless about Ubuntu. Is there anything I can do to solve this issue?

    Read the article

  • Automatically keep your local git repos clean

    - by kerry
    Most developers using git are probably aware of the command 'git gc', which has to be run from time to time when you notice your git commands are running a little slow. This command cleans up your git repo and makes sure everything is nice and tidy. If you have not run it lately, you will notice a huge performance increase in your git commands after running it. It's a bit annoying to have to run this command only once you notice that your git performance is suffering, and it also takes a while if you have not run it recently. With this in mind, I decided to create a method to automatically run this command from time to time. So I decided to overload cd, similarly to how rvm does. All you have to do is paste the method into your .profile file and it will run the command every time you enter a directory with a git repo. You'll notice a little pause when entering the directory; it's not insufferable, but if you prefer, you can add an & to the end of the command to have it run in the background. I chose the pause over the pid output of the background command. Here it is in all its glory. View the code on Gist.
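    The Gist itself is not reproduced here, but the same idea can also be run out-of-band instead of on every cd. A hedged Python sketch (the root path is an illustrative assumption) that sweeps a directory tree and lets git itself decide whether each repo needs cleanup:

        import os
        import subprocess

        ROOT = os.path.expanduser("~/projects")   # assumed location of your working copies

        def gc_all_repos(root=ROOT):
            for dirpath, dirnames, _ in os.walk(root):
                if ".git" in dirnames:
                    # --auto lets git decide whether the repo is untidy enough to bother cleaning.
                    subprocess.run(["git", "-C", dirpath, "gc", "--auto"], check=False)
                    dirnames.remove(".git")        # no need to descend into the .git directory itself

        if __name__ == "__main__":
            gc_all_repos()

    Run from cron (or a scheduled task), this keeps the repos tidy without adding any pause to interactive shell use.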

    Read the article
