Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Make Google Chrome "Application" Windows Use High-Quality Icons in Windows 7

    - by The How-To Geek
    Whenever I use Google Chrome's "Create Application" shortcut (which I use heavily and recommend), the icons shown on the Windows 7 taskbar are really blurry, probably the result of the 16x16 favicon being stretched out. I'd like to be able to replace these with another, higher-quality icon, but even when I replace the cached file, it doesn't update for some reason. For reference, here's the path to the icon, for Gmail at least. I'm also using the latest Dev channel version of Chrome. %USERPROFILE%\AppData\Local\Google\Chrome\USERDA~1\Default\PLUGIN~1\GOOGLE~1\mail.google.com\https_443\icons#desktop\

    Read the article

  • Sound card issues (High Definition Audio Controller)

    - by Prakash R
    I have a 5-year-old Acer Aspire 4520. Until a month back it was working beautifully on Windows 7 32bit. But then out of the blue, the sound stopped working. I've tried reinstalling the OS 3-4 times. The sound came back a couple of times, but it stopped working after a reboot. Even after installing the sound drivers, I don't see any entry in the Playback tab of the Sound applet in Control Panel. I see a High Definition Audio Controller entry in Device Manager. I disabled and uninstalled it, but Windows reinstalls it automatically. I'll share specific hardware details if anybody here needs to know. The processor is "AMD Athlon(tm) 64 X2 Dual-Core Processor TK-55". Any help would be appreciated.

    Read the article

  • When to use MySQL replication or DRBD for HA on Xen VM?

    - by user62513
    I'm setting up a database which needs to provide high availability. My primary concerns are high performance and robustness (I don't want something that will fail fast and badly). The database is accessed by the application at an average of 300 qps. It will run on Xen VMs and it has some InnoDB tables as well as MyISAM tables. The VMs are connected via 100 Mbit/s Ethernet. Which of the two - MySQL replication or DRBD - would you recommend in such a situation? Or should I use DRBD to make the master database highly available and use MySQL replication on the slaves? I'm a developer, so these things are not so easy for me to make a sound judgement on.

    Read the article

  • Load on Ubuntu 8.04 LTS high

    - by Paddington
    My Ubuntu 8.04 LTS server periodically has a high load average spike (once every 2 days), resulting in Apache timing out and virtually everything, even SSH to the server, becoming unavailable. When I am on the console and run top, I see the load average increase from less than 1 to above 60 in 15 minutes. How can I isolate the cause?
      top - 09:21:51 up 37 days, 20:18, 6 users, load average: 5.41, 5.53, 5.36
      Tasks: 160 total, 2 running, 156 sleeping, 0 stopped, 2 zombie
      Cpu(s): 65.0%us, 8.8%sy, 0.0%ni, 1.0%id, 24.6%wa, 0.3%hi, 0.3%si, 0.0%st
      Mem: 3989468k total, 3444984k used, 544484k free, 360460k buffers
      Swap: 11687248k total, 178168k used, 11509080k free, 881772k cached

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".
    Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.
    However, when I benchmark with millions of hits, I notice a few things:
    - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.
    My questions:
    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.
    Thanks in advance, all :)

    Read the article

  • Load Balancing and High Availability for Web Site

    - by nzgirl
    We're developing a database-driven (70%/30% read/write load) website using C#.NET, IIS and MS SQL Server 2008, to be hosted on Windows 2008. Due to contractual reasons, our setup has to be hosted on our own physical/virtual servers instead of a cloud solution at this stage. Could someone outline or link to some best practices that would provide both high availability (the priority at the moment) and eventually load balancing for our site? We're probably looking at some sort of 2-server mirrored SQL Server setup and 2 IIS web servers to start with. Thanks in advance.

    Read the article

  • Perfmon quick rundown

    - by anon
    I've known about Performance Monitor on Windows for quite a while. I have now decided to set up scheduled performance monitoring of my entire system so I can find bottlenecks for future improvements. As you can imagine, this is going to run 24/7 so I can identify peak utilization. With Performance Monitor on Windows 7, for example, where are the logs stored (c:\perfmon)? Is there a log size limit? Even better, is there a website that can get me up to speed with scheduling and best practices for perfmon? (I don't need an explanation of what I can monitor.)

    Read the article

  • High system cpu load (%sys), system locks

    - by Mark
    For the last two weeks we have been having intermittent severe spikes in system CPU usage (shown as %sys), which last for maybe half a minute and lock up most processes, including ssh. I've been trying to figure this out, but atop doesn't show anything relevant (system usage for the processes it shows is insignificant), the spikes are intermittent, and I could not reproduce the spike using any workload for the web application this webserver hosts. If you have any ideas on how to debug high %sys and (sometimes) %si CPU usage, please share them. System specs (don't know if any of this is relevant): dedicated server, CentOS 6, Core i7 950, a consistent 4 to 8 GB of RAM free at any time, hard drives in RAID-1. Additional info: dmesg output doesn't change between spikes; /var/log/messages doesn't change between spikes. Here is cat /proc/vmstat. Here is the output of mpstat 1 during a typical spike. Added 07.11.11: it looks like a simple reboot restored the system state, and we might never know what caused the disturbance in the first place.

    Read the article

  • High CPU usage from system [on hold]

    - by user195641
    Can anyone explain this? Is it bad for my CPU? Can anyone help me? I don't know what to do! There is a high delay when I open a new task, even though it's an Intel Core Duo Extreme at 3.0 GHz. Thanks.
    There are about 100 items monitored with the agent. They are also monitored on other identical hosts, where the Zabbix agent does not consume so much CPU. Agents send collected data to a Zabbix proxy. The agent configuration is the default. The host CPU has 8 cores (2.4 GHz). The smallest time value for monitored items is 60 seconds.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blogs, and simplifying the basics is always a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support etc. – there is a line and a queue to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using mechanisms like locking and blocking, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about Concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    Concurrency
    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the Optimistic Concurrency Model and the Pessimistic Concurrency Model.

    Concurrency Models
    Pessimistic Concurrency
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read so no other processes can modify that data. Also acquires locks on data being modified so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.
    Optimistic Concurrency
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing
    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN and ended by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties
    Transaction processing must guarantee the consistency and recoverability of SQL Server databases. It ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity, C – Consistency, I – Isolation, D – Durability.
    - Atomicity: Each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: Ensures that a transaction won't allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.

    Transaction Dependencies
    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    - Lost Updates: Occur when two processes read the same data, both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: Occur when membership in a set changes. This happens if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels
    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted
    - All behaviors except for lost updates are possible.
    - Implemented by allowing the read operations to not take any locks; because of this, they won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under Read Uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default). Both ensure that a read never reads data that another application hasn't committed.
    - Under the pessimistic (default) variety, if another transaction is updating data and has exclusive locks on it, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading but prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    - Snapshot Isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim, i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn't exist. For example: if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning with SQL Server Locking Basics.
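
    As a quick illustration of how a client application requests these isolation levels, here is a minimal Java/JDBC sketch (an editorial addition, not part of Vinod's post). The connection string, table and column names are invented for the example, and the snapshot request assumes ALLOW_SNAPSHOT_ISOLATION has already been enabled on the database.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class IsolationDemo {
          public static void main(String[] args) throws SQLException {
              // Hypothetical connection string -- adjust server, database and credentials.
              String url = "jdbc:sqlserver://localhost;databaseName=SampleDb;user=app;password=secret";
              try (Connection conn = DriverManager.getConnection(url)) {
                  // Pessimistic default: readers take share locks and can be blocked by writers.
                  conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
                  readBalance(conn);

                  // Strongest pessimistic level: key-range locks also prevent phantoms.
                  conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                  readBalance(conn);

                  // Optimistic row-versioning level: standard JDBC has no constant for it,
                  // so it is requested in T-SQL for this session.
                  try (Statement s = conn.createStatement()) {
                      s.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT");
                  }
                  readBalance(conn);
              }
          }

          // Reads a single value; under snapshot isolation this sees the last committed
          // row version even while another transaction is updating the row.
          private static void readBalance(Connection conn) throws SQLException {
              try (Statement s = conn.createStatement();
                   ResultSet rs = s.executeQuery("SELECT Balance FROM dbo.Account WHERE AccountId = 1")) {
                  while (rs.next()) {
                      System.out.println("Balance = " + rs.getBigDecimal(1));
                  }
              }
          }
      }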
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both
    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.
    - As the forecast is generated, the engine goes up the tree using max_fore_level and not top_level -1. Max_fore_level has to be less than or equal to top_level -1. Due to this requirement, combinations falling under the same top level -1 member must be in the same task. A member of the top level -1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.
    - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3, since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.
    - Control of tasks: the number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You are used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.
    - Discovery of current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
    - Control of forecast tree branch size:
      - Run the following SQL to determine how evenly the branches are being split by the engine:
          select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;
        This will give you an understanding of whether some individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.
      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator):
          select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;
        This will give us an understanding of the level of the forecast tree at which the forecast is being generated. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on these results we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.
        For example:
        - Review the 2nd highest level in the forecast tree, below highest/highest, as this is the level which determines the size of the branches.
        - If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
          - For example, suppose the highest level of the forecast tree is set to Brand/All Locations.
          - You have 10 brands, but 2 of the brands account for 67% and 29% of all combinations.
          - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
          - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
        - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support. Allocation will be affected by forecast tree branch size.
    When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for 'TargetTaskSize' depending on the number of engines.
    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: DEV strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).
    - How to control the number of engines? Determine how many CPUs are on the machine(s) that is (are) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per available CPU. So, for example, if you are running the engine on a machine that has 4 CPUs, then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.
      Where do I set the number of engines? To add multiple computers where the engine will run, please make a back-up of the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:
        <Entry>
          <Key argument="ComputerNames" />
          <Value type="string" argument="localhost,localhost,localhost" />
        </Entry>
    Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used. (Oracle does not recommend changing the default value.) The TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations.
    - Level 1 combinations are known as the group size. The engine manager will use this parameter to attempt to create branches of similar size.
      * The engine manager will not create engines that do not have a branch.
    The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that are more equal in size, which improves engine performance. The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of 'TargetTaskSize' level 1 combinations, before adding new branches.
    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters

    Read the article

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems in the SharePoint blogosphere that everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care whether the content was created using a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?
    Edit: Links to the blogs that say there is a performance difference:
    - From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
    - From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
    - From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
    - From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."
    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

    Read the article

  • How significant is the bazaar performance factor?

    - by memodda
    I hear all this stuff about Bazaar being slower than Git. I haven't used distributed version control much yet, but in "Bazaar vs. Git" on the Bazaar site, they say that most complaints about performance aren't true anymore. Have you found this to be true? Is performance pretty much on par now? I've heard that speed can affect workflow (people are more likely to do good thing X if X is fast). In what specific cases does performance currently affect workflow in Bazaar vs. other systems (especially Git), and how? I'm just trying to get at why performance is of particular importance. Usually when I check something in or update, I expect it to take a little while, but it doesn't matter much. I commit/update when I have a second, so it doesn't interfere with my productivity. But then I haven't used a DVCS yet, so maybe that has something to do with it?

    Read the article

  • Average performance of binary search algorithm?

    - by Passonate Learner
    http://en.wikipedia.org/wiki/Binary_search_algorithm#Average_performance

      BinarySearch(int A[], int value, int low, int high)
      {
          int mid;
          if (high < low)
              return -1;
          mid = (low + high) / 2;
          if (A[mid] > value)
              return BinarySearch(A, value, low, mid-1);
          else if (A[mid] < value)
              return BinarySearch(A, value, mid+1, high);
          else
              return mid;
      }

    If the integer I'm trying to find is always in the array, can anyone help me write a program that can calculate the average performance of the binary search algorithm?
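
    A small Java sketch of the kind of measurement being asked for (an illustration added here, not taken from the question): it runs the same recursive binary search once for every element of a sorted array, counts the comparisons each successful lookup makes, and reports the average alongside log2(n) for reference.

      public class BinarySearchAverage {

          // Comparison counter shared by all searches in one run.
          private static long comparisons = 0;

          static int binarySearch(int[] a, int value, int low, int high) {
              if (high < low) {
                  return -1;                       // not reached when the value is always present
              }
              int mid = (low + high) / 2;
              comparisons++;                       // one three-way comparison against a[mid]
              if (a[mid] > value) {
                  return binarySearch(a, value, low, mid - 1);
              } else if (a[mid] < value) {
                  return binarySearch(a, value, mid + 1, high);
              } else {
                  return mid;
              }
          }

          public static void main(String[] args) {
              int n = 1_000_000;
              int[] a = new int[n];
              for (int i = 0; i < n; i++) {
                  a[i] = i * 2;                    // sorted, distinct values
              }

              // Search for every element once, so every lookup succeeds.
              for (int value : a) {
                  binarySearch(a, value, 0, n - 1);
              }

              double average = (double) comparisons / n;
              System.out.printf("n = %d, average comparisons = %.3f, log2(n) = %.3f%n",
                      n, average, Math.log(n) / Math.log(2));
          }
      }

    Averaging over all successful lookups like this converges to roughly log2(n) comparisons per search, which is the figure the Wikipedia section above derives analytically.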

    Read the article

  • ASP.NET Membership for high security scenarios?

    - by Joachim Kerschbaumer
    Hi there, is the ASP.NET membership system, used over WCF (with transport security turned on), enough for high-security internet scenarios with thousands of clients spread all over the internet? I'm just evaluating possible solutions and wanted to know if this might fit in this category. If not, what would be the best method to provide high-security access over WCF for internet scenarios?

    Read the article

  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in those 3 languages, and measured its performance on a test table. The function calculates some kind of checksum, so it does not use any db libraries etc., only string and math operations. I observed performance on 30k records with SQL like: select function(txt) from _tmp_perf_test, changing function to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation, because there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that did not help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server, IBM Informix Dynamic Server Version 11.50.FC6, shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows? More info about Java on the servers:
      c:\Informix\extend\krakatoa\jre\bin>java -version
      java version "1.5.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
      IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
      J9VM - 20081126_26240_lHdSMr
      JIT - 20081112_1511ifx1_r8
      GC - 200811_07)
      JCL - 20081129

      [root@informix11 bin]# ./java -version
      java version "1.5.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
      IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
      J9VM - 20071004_14218_LHdSMr
      JIT - 20070820_1846ifx1_r8
      GC - 200708_10)
      JCL - 20071025
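
    For readers unfamiliar with Informix Java UDRs, the function being benchmarked is essentially a public static method packaged in a jar that the server calls once per row. A hedged sketch of what such a checksum UDR body might look like (class, method and jar names are invented, and the registration syntax in the comment is from memory and should be checked against the Informix J/Foundation documentation):

      // Hypothetical string-checksum UDR body. Registration would look roughly like:
      //   CREATE FUNCTION function_java(lvarchar) RETURNS integer
      //   EXTERNAL NAME 'checksum_jar:ChecksumUDR.checksum' LANGUAGE JAVA;
      public class ChecksumUDR {
          public static int checksum(String txt) {
              if (txt == null) {
                  return 0;
              }
              int sum = 0;
              for (int i = 0; i < txt.length(); i++) {
                  // simple rolling checksum using only string and math operations
                  sum = (sum * 31 + txt.charAt(i)) & 0x7fffffff;
              }
              return sum;
          }
      }

    Since the trivial "return the length" version is just as slow, the method body itself is unlikely to matter; the per-row cost of crossing from the server into the JVM is the more likely suspect.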

    Read the article

  • High Frequency Trading

    - by Hamza Yerlikaya
    Over the last couple of weeks I have come across lots of articles about high frequency trading. They all talk about how important computers and software are to this, but since they are all written from a financial point of view, there is no detail about what the software actually does. Can anyone explain, from a programmer's point of view, what high frequency trading is, and why computers/software are so important in this field?

    Read the article

  • How can serialisation/deserialisation improve performance?

    - by dotnetdev
    Hi, I heard an example at work of using serialization to serialise some values for a webpart (which come from class properties), as this improves performance (compared to, I assume, getting/setting the values from/to the database). I know it is not possible to explain how performance in the specific scenario I describe can be improved, as there is not enough information, but is there any explanation of how serialization can generally improve performance? Thanks
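
    One common pattern behind claims like this is caching: serialize an already-computed object graph once, then rehydrate it on later requests instead of rebuilding it from the database. A minimal Java sketch of that idea (the class and file name are invented for illustration; whether it actually helps depends entirely on how expensive the original lookup is):

      import java.io.File;
      import java.io.FileInputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.io.ObjectInputStream;
      import java.io.ObjectOutputStream;
      import java.io.Serializable;
      import java.util.HashMap;
      import java.util.Map;

      public class SettingsCache {

          // The values that would otherwise be re-read from the database on every request.
          public static class WebPartSettings implements Serializable {
              private static final long serialVersionUID = 1L;
              public final Map<String, String> properties = new HashMap<>();
          }

          private static final File CACHE_FILE = new File("webpart-settings.cache");

          // Save the computed settings once, after the expensive load.
          public static void save(WebPartSettings settings) throws IOException {
              try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(CACHE_FILE))) {
                  out.writeObject(settings);
              }
          }

          // Later requests deserialize the cached copy instead of hitting the database.
          public static WebPartSettings load() throws IOException, ClassNotFoundException {
              try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(CACHE_FILE))) {
                  return (WebPartSettings) in.readObject();
              }
          }
      }

    The gain, when there is one, comes from replacing a database round trip with a cheap local read; serialization itself is not inherently fast, it is just a convenient way to persist the already-computed state.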

    Read the article

  • Performance Overhead of Perf Event Subsystem in Linux Kernel

    - by Bo Xiao
    Performance counters for Linux are a new kernel-based subsystem that provides a framework for all things performance analysis. It covers hardware-level features (CPU/PMU, Performance Monitoring Unit) as well as software features (software counters, tracepoints). Since 2.6.33, the kernel provides the 'perf_event_create_kernel_counter' kernel API for developers to create kernel counters that collect system runtime information. What concerns me most is the performance impact on the overall system when tracepoints/ftrace are enabled. There are no docs I can find about this. I was once told that ftrace was implemented by dynamically patching code; will it slow the system dramatically?

    Read the article

  • Java 1.4 Class performance on 1.5 JVM

    - by user222164
    Switching from JVM 1.4 to 1.5 has performance benefits, as per the release notes: http://java.sun.com/j2se/1.5.0/docs/relnotes/features.html#performance We have classes compiled with Java 1.4 which are run on a 1.5 JVM; will these classes suffer in performance because they were compiled using 1.4?

    Read the article

  • Linq-to-SQL and Performance.

    - by jalpesh
    Hi, I am developing an ASP.NET MVC site with LINQ to SQL. We have 1000 concurrent users and we are having performance problems. I have found that Stack Overflow is also built on LINQ to SQL, so does anybody know how they improved performance? Without LINQ, performance was good and each page loaded in 3 seconds, but after migrating to LINQ (as per our client's requirement) pages come back in 8 to 10 seconds, which is not acceptable. Our HTML is very clean, but we have very complex database operations. Any tip or code would be the best answer. Thanks in advance,

    Read the article

  • Do we really need high level languages? [closed]

    - by i_love_c
    Seeing the amount of software developed (and still being developed) in C, and considering the fact that C currently tops the TIOBE chart, I have this one question for you all: do we really need high-level languages like C#, F# or Ruby? Don't you think these so-called high-level languages are actually spoiling programmers and resulting in suboptimal, inefficient software?

    Read the article

  • ntpdate -d Server dropped Strata too high

    - by AndyM
    I cannot sync with an NTP source that's coming from an internal router/firewall. Can anyone help?
      ntpdate -d 192.168.92.82
      6 Jun 11:57:30 ntpdate[5011]: ntpdate [email protected] Tue Feb 24 06:32:26 EST 2004 (1)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      192.168.92.82: Server dropped: strata too high
      server 192.168.92.82, port 123
      stratum 16, precision -19, leap 11, trust 000
      refid [73.78.73.84], delay 0.02591, dispersion 0.00002
      transmitted 4, in filter 4
      reference time: 00000000.00000000 Thu, Feb 7 2036 6:28:16.000
      originate timestamp: d1972e03.0ae02645 Mon, Jun 6 2011 11:44:19.042
      transmit timestamp: d197311b.0ffac1d2 Mon, Jun 6 2011 11:57:31.062
      filter delay: 0.02609 0.02591 0.02594 0.02596 0.00000 0.00000 0.00000 0.00000
      filter offset: -792.020 -792.020 -792.020 -792.020 0.000000 0.000000 0.000000 0.000000
      delay 0.02591, dispersion 0.00002
      offset -792.020152
      6 Jun 11:57:31 ntpdate[5011]: no server suitable for synchronization found
    Edit: The server I'm being asked to sync to is a firewall, and I've now been told that it is not syncing with anything. So I suppose I need to know if I can force my server to sync with a server that is stratum 16, i.e. not synced. Is that possible?

    Read the article

  • raid 1 and high load average

    - by melocoton
    I have a server with a high load average, and I think the problem is the RAID 1 array.
      cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdb1[1] sda1[0]
            256896 blocks [2/2] [UU]
      md3 : active raid1 sdb3[1] sda3[0]
            2562240 blocks [2/2] [UU]
      md4 : active raid1 sdb5[1] sda5[0]
            958566272 blocks [2/2] [UU]
      md1 : active raid1 sdb2[1] sda2[0]
            15366080 blocks [2/2] [UU]

      model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
      Linux 2.6.18-164.6.1.el5.centos.plus (local) 04/19/2010

      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                17.37   0.01     6.02    26.17    0.00  50.43

      Device:   tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      sda       61.09  562.65      893.73      1557214   2473546
      sda1      0.01   0.27        0.02        751       42
      sda2      6.11   195.50      169.78      541075    469888
      sda3      0.01   0.23        0.00        641       0
      sda4      0.00   0.01        0.00        18        0
      sda5      54.96  366.54      723.94      1014449   2003616
      sdb       54.40  433.22      893.73      1199015   2473546
      sdb1      0.01   0.16        0.02        436       42
      sdb2      5.31   169.00      169.78      467729    469888
      sdb3      0.01   0.31        0.00        865       0
      sdb4      0.00   0.00        0.00        10        0
      sdb5      49.05  263.65      723.94      729695    2003616
      md1       29.96  364.39      166.68      1008498   461312
      md4       124.15 630.07      713.28      1743822   1974112
      md3       0.05   0.43        0.00        1192      0
      md0       0.04   0.32        0.00        872       10
      dm-0      7.96   83.29       23.02       230530    63720
      dm-1      3.67   51.81       2.73        143394    7560
      dm-2      7.63   67.76       27.35       187546    75696
      dm-3      8.20   134.60      14.02       372514    38792
      dm-4      5.90   10.66       39.35       29498     108912
      dm-5      17.39  24.52       121.79      67850     337080
      dm-6      27.19  229.60      139.89      635442    387168
      dm-7      0.14   1.07        0.28        2970      776
      dm-8      25.84  4.23        202.89      11698     561536
      dm-9      14.77  8.38        112.35      23202     310960
      dm-10     5.29   12.78       29.55       35376     81784
      dm-11     0.16   1.25        0.05        3450      128
    The server runs LVM on md4.

    Read the article
