Search Results

Search found 19555 results on 783 pages for 'job performance'.


  • Improve backup performance by watching files added/modified in given directories?

    - by OverTheRainbow
    I use SyncbackSE on Windows to back up files between two hard drives daily. Every time the application starts, it scans every single file in the directories that it watches before copying files that were added or modified. To improve performance, I was wondering if there is a Windows backup application that hooks into Windows to keep track of files that were added or modified in given directories, so that it only needs to go through this list when it comes time for a backup. Thank you.
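
    (For what it's worth, the "keep a change list" idea is easy to prototype. Below is a minimal Python sketch, assuming the third-party watchdog package, that records added/modified paths so a backup pass only has to visit that list; it illustrates the mechanism and is not a SyncbackSE feature.)

        # Sketch: track added/modified files so a backup pass only visits the change list.
        # Assumes the third-party "watchdog" package (pip install watchdog).
        import time
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        changed = set()  # paths touched since the last backup run

        class ChangeTracker(FileSystemEventHandler):
            def on_created(self, event):
                if not event.is_directory:
                    changed.add(event.src_path)

            def on_modified(self, event):
                if not event.is_directory:
                    changed.add(event.src_path)

        observer = Observer()
        observer.schedule(ChangeTracker(), r"C:\data", recursive=True)  # watched dir is a placeholder
        observer.start()
        try:
            while True:
                time.sleep(600)                     # every 10 minutes, hand the list to the backup
                print("%d files to back up" % len(changed))
                changed.clear()
        finally:
            observer.stop()
            observer.join()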

    Read the article

  • Reminder: Java EE 7 Job Task Analysis Survey – Participants Needed

    - by Brandye Barrington
    Java EE Developers/Practitioners, Recruiters, Managers Hiring Java EE Developers: our survey continues. We're looking to you to directly help shape the scope and definition of two new Java EE 7 certification exams. We'll soon begin certifying front-end and/or server-side enterprise developers who use Java. We're therefore interested in those of you who:

        are currently working with Java EE 7 technology or have plans to develop with Java EE 7 in the near future;
        have 2-4 years' experience with previous Java EE technology versions;
        are recruiting and/or hiring candidates to develop Java EE 7 applications;
        are technically savvy and able to articulate the skills and knowledge required to successfully staff Java Enterprise Edition front-end and server-side projects.

    Read the article

  • Install Visual Studio 2010 on SSD drive, or HDD for better performance?

    - by Steve
    I'm going to be installing Visual Studio 2010. I already have my source code on the SSD. For best performance, especially time to open the solution and compile time, would it be better to install VS 2010 on the SSD or on the HDD? If both were on the SSD, loading the VS 2010 files would be quicker, but there would be contention between loading the source and the program files. Thanks!

    Read the article

  • "Inside Job"

    Embedded databases power back-end hardware, business applications, and portable devices everywhere. Find out how Oracle embedded databases live and work at the core of hardware, software, and other devices—and deliver cash, health, and security.

    Read the article

  • Database Mirroring Performance Monitoring

    Many people deploy performance monitoring solutions in a "one-size-fits-all" manner. That is, they tend to build a solution that can be easily deployed to multiple servers and capture basic information from each server. The trouble is that not every server is identical, not even within the same shop. For example, not every server may have database mirroring deployed, which means your performance monitoring solution may be missing some critical pieces of information with regards to monitoring database mirroring.
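
    (As an illustration of tailoring the collector: the sketch below, in Python with the assumed third-party pyodbc package and a placeholder connection string, first asks SQL Server's sys.database_mirroring view whether any database is actually mirrored before collecting mirroring-specific data.)

        # Sketch: only collect mirroring counters on servers that actually mirror.
        # Assumes the third-party "pyodbc" package; the connection string is a placeholder.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes"
        )
        cursor = conn.cursor()

        # mirroring_guid is NULL for databases that are not mirrored
        cursor.execute("""
            SELECT DB_NAME(database_id), mirroring_role_desc, mirroring_state_desc
            FROM sys.database_mirroring
            WHERE mirroring_guid IS NOT NULL
        """)
        mirrored = cursor.fetchall()

        if mirrored:
            for db, role, state in mirrored:
                print("%s: role=%s, state=%s" % (db, role, state))  # collect mirroring counters here
        else:
            print("no mirrored databases; skip mirroring-specific collection")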

    Read the article

  • JOB OF THE WEEK

    - by Tim Koekkoek
    Placement in the Contract and Business Practice Services department (50%) - Baden (Switzerland). This placement in the Contract and Business Practice Services department is challenging and diverse, and you will support and contribute to the contract teams with the creation and technical archiving of documents. All duties are in close coordination with account management and several contract teams, so you will need to have great communication skills in both German and English, great organizational skills, and the flexibility to deal with different stakeholders. You will be working in a very international organization and get the opportunity to work out your own ideas and develop your skills and your career in one of the biggest technology companies in the world! If you are interested in this position, read more here! For all of our other vacancies and internships, please visit https://campus.oracle.com.

    Read the article

  • Binding Data to Web Performance Tests

    Web Performance Tests provide a simple means of ensuring correct and performant responses are being returned from your web application. Testing a wide variety of inputs can be tedious without a way to separate test recording and input selection. Data binding provides a convenient and simple way to try an unlimited number of different inputs as part of your web performance tests using Visual Studio 2010.

    Read the article

  • JOB OF THE WEEK

    - by Maria Sandu
    My name is Pascaline and I am the EMEA Solution Response Manager. I currently have a role open for a Benelux Solution Response Representative to jump-start his/her career in my international team of six people from all across Europe. Key for this exciting role is that you are curious to learn, like networking, and constantly want to develop yourself. To help you with that, you will get extensive product training and workshops on all Oracle product lines, and you will receive sales training. Further, you will have the opportunity to get certified on Oracle products through online training and workshops. Every month you will also benefit from 1-on-1 sales coaching and regular coaching from me to help you grow and develop your career at Oracle! The role will include the follow-up of marketing events and online marketing activities with current Key Accounts in the Benelux. It is truly a pioneering role at Oracle, as it is the first time that an employee will engage in business conversations about all lines of business and product ranges with Key Accounts. So are you interested in working between marketing and sales? Do you want to work for a big IT multinational? Do you want to work abroad after you graduate, and do you want to develop yourself? Then please visit this link for more information.

    Read the article

  • Question about initial interview for job [closed]

    - by JustLikeThat
    So I feel kind of stupid having to ask this, but tomorrow I have a phone interview with a good company. Phone interviews themselves are not a big deal for me, but having to tell them my salary expectations is. The position I'm applying for is a mid-level software engineer; I fit all of the requirements (I'm not overly qualified by any means), and I want to be sure I'm not asking for an amount that would be absurd or too little. Now, assuming I get a second interview and have to complete some sort of puzzle/code/work, they may pay me +/- whatever I asked for based upon their evaluation of my work. What I'd like to know is: what is a good amount to ask for? Or am I completely wrong with my assumptions? Either way, some advice would be much appreciated!

    Read the article

  • What do you consider standard job perks? [closed]

    - by reseter
    What does a company need to offer you (apart from a fat pay cheque) for you to work for them? I am aware of this question, which is from an employer's perspective. I am interested in your views as employees. To get the discussion started, here is a list off the top of my head (not in any particular order):

        A high-end computer with a huge screen or two.
        The best software tools money can buy (as per Joel's test). That isn't too much to ask given many of the best tools are free (think git). Flexibility is a bonus: if a particular platform/piece of software is not absolutely required, I would like to pick my OS and IDE.
        A quality chair.
        A quiet workspace. Open plan is fine as long as there are meeting rooms, so that there is no constant chatter going on around me.
        A spacious workspace. I would rather have more than three inches between my mouse and the next person's keyboard.
        Food and drink at work. Many companies these days have fruit baskets, biscuits, etc. available to their employees; some even offer free lunch.
        Education. If my employer wants my skills to stay up-to-date, they should at the very least understand I need time to learn. If they want to pay for my books and conference registration fees, I am more than happy to accept. Other options include organizing internal knowledge-exchange days or inviting speakers from outside.
        Flexible hours/the option to work from home is a bonus.

    Read the article

  • Relevance of Performance Based SEO Services

    It was an interesting conversation between a Search Engine Optimization (SEO) business development person and a prospective customer. The prospective customer had scoffed at the SEO person's sales pitch about them offering "Performance Based SEO Services", saying Performance Based Services of any kind in life is not a luxury but a fundamental requirement.

    Read the article

  • Granularity and Parallel Performance

    One key to attaining good parallel performance is choosing the right granularity for the application. The goal is to determine the right granularity (usually larger is better) for parallel tasks, while avoiding load imbalance and communication overhead to achieve the best performance.
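
    (To make that concrete: below is a small Python sketch, with a made-up workload, showing how the chunksize knob of multiprocessing.Pool trades per-task scheduling overhead against load balance. Too small wastes time on dispatch; too large can leave workers idle at the end.)

        # Sketch: the effect of task granularity, using multiprocessing.Pool.
        # Small chunks = more scheduling overhead; large chunks = better amortization
        # but a higher risk of load imbalance on uneven work.
        import time
        from multiprocessing import Pool

        def work(n):                          # hypothetical CPU-bound task
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            data = [5000] * 20000
            with Pool(processes=4) as pool:
                for chunk in (1, 50, 2000):
                    start = time.perf_counter()
                    pool.map(work, data, chunksize=chunk)
                    elapsed = time.perf_counter() - start
                    print("chunksize=%5d: %.2fs" % (chunk, elapsed))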

    Read the article

  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows. I'm running a cron job every 10 minutes to perform some kind of insert/update operation on the table. The operation needs to be performed on only 5 rows at a time each time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 6-10 in the 20th minute, and so on. When the cron job runs for the 40th time, all the records in the table will have been updated exactly once. This is what is to be achieved, at least. And the next cron job should repeat the process again.

    The problem is that every time a cron job runs, the insert/update operation should be performed on N rows (not just 5 rows). So, if N is 100, all records would be updated by just 2 cron jobs, and the next cron job would repeat the process again.

    Here's an example. This is the table I currently have (200 records). Every time a cron job executes, it needs to pick N records (which I set as a variable in PHP) and update the time_md5 field with the current time's MD5 value.

        +---------+----------------------------------+
        | id      | time_md5                         |
        +---------+----------------------------------+
        | 10      | 971324428e62dd6832a2778582559977 |
        | 72      | 1bd58291594543a8cc239d99843a846c |
        | 3       | 9300278bc5f114a290f6ed917ee93736 |
        | 40      | 915bf1c5a1f13404add6612ec452e644 |
        | 599     | 799671e31d5350ff405c8016a38c74eb |
        | 56      | 56302bb119f1d03db3c9093caf98c735 |
        | 798     | 47889aa559636b5512436776afd6ba56 |
        | 8       | 85fdc72d3b51f0b8b356eceac710df14 |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | 340     | 9217eab5adcc47b365b2e00bbdcc011a | <-- 200th record
        +---------+----------------------------------+

    So the first record (id 10) should not be updated more than once until all 200 records have been updated once; the process should start over once all the records have been updated. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?
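
    (One straightforward approach, sketched below in Python with sqlite3 standing in for the real database: add a last_updated column and always pick the N least-recently-updated rows. The round-robin over all 200 rows then works for any N; the table and column names are assumptions based on the question.)

        # Sketch: each run updates the N least-recently-updated rows, so the
        # whole table is cycled through exactly once regardless of N.
        import hashlib
        import sqlite3
        import time

        N = 5  # rows per cron run; change freely

        conn = sqlite3.connect("app.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS records (
                            id INTEGER PRIMARY KEY,
                            time_md5 TEXT,
                            last_updated REAL DEFAULT 0)""")

        now = time.time()
        md5_now = hashlib.md5(str(now).encode()).hexdigest()

        # Oldest-first guarantees no row is touched twice before all are touched once.
        rows = conn.execute(
            "SELECT id FROM records ORDER BY last_updated ASC LIMIT ?", (N,)
        ).fetchall()

        conn.executemany(
            "UPDATE records SET time_md5 = ?, last_updated = ? WHERE id = ?",
            [(md5_now, now, r[0]) for r in rows],
        )
        conn.commit()
        conn.close()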

    Read the article

  • Printer spooler service stop running when sent print job

    - by Hanan N.
    Every time I send a print job to the printer, I get no response from the printer, and in the printer's job list the status of the job shows an error, but it doesn't give me any clue about what the problem could be. After some investigation I found that every time I send a print job to the printer, the print spooler service stops running, then after a second or two it starts again (I think this behavior is related to the print spooler's settings to restart itself after it stops).

    Things that I have tried so far:

        Removed and installed the driver again. After removing the driver, I removed the unnecessary registry keys according to this article from Microsoft:
            Renamed all files and folders in c:\windows\system32\spool\drivers\w32x86
            Removed anything but Drivers and Print Processors from HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Environment\Windows NT x86
            Removed everything in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Monitors except: BJ Language Monitor, Local Port, Microsoft Document Imaging Writer Monitor, Microsoft Shared Fax Monitor, Standard TCP/IP Port, USB Monitor, WSD Port
        Disconnected and reconnected the printer.
        Cleaned the computer of viruses and spyware.

    Currently I am stuck; I have no more things to try. If anybody knows about any kind of solution, please let me know.

    Since I want to keep this post a general problem relating to the print spooler, and not just my particular problem, I didn't include the Windows version and the printer model above. They are (although I don't think the issue relates only to that particular model): Windows 7 32-bit, HP Officejet 4500 G510g-m (connected to the computer via USB). Thanks.
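
    (One more standard step worth trying, since the spooler dies exactly when a job arrives: stop the service, clear stuck job files out of the spool directory, and restart it. A rough Python sketch of that manual procedure follows; run it as administrator, and treat it as a sketch, not a guaranteed fix.)

        # Sketch: clear the Windows print queue by hand (run as administrator).
        # Stops the spooler, deletes queued job files, restarts the spooler.
        import subprocess
        from pathlib import Path

        spool_dir = Path(r"C:\Windows\System32\spool\PRINTERS")

        subprocess.run(["net", "stop", "spooler"], check=True)
        for f in spool_dir.glob("*"):
            if f.is_file():
                f.unlink()              # remove stuck .SPL/.SHD job files
        subprocess.run(["net", "start", "spooler"], check=True)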

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    I'm trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing, and I wrote a little streaming mapper in Python. When I launch a mapper-only job using:

        hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP

    Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state, and nothing else happens. CPU usage stays at zero (the job is created, though). If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs with that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues, maybe? Ports are enabled explicitly, case by case, in the cluster security group.
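
    (For reference, and to help rule the mapper itself out: a Hadoop Streaming mapper only has to read lines from stdin and write tab-separated key/value pairs to stdout. A minimal Python mapper of the kind described, with made-up tokenization since the original mapper.py isn't shown:)

        #!/usr/bin/env python
        # Minimal Hadoop Streaming mapper: lines on stdin, "key<TAB>value" on stdout.
        import sys

        for line in sys.stdin:
            for token in line.strip().split():
                print("%s\t1" % token)      # emit a (token, 1) pair per word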

    Read the article

  • SQLAuthority News – Three Posts on Reporting – T-SQL Tuesday #005

    - by pinaldave
    If you are following my blog, you already know that I am more of a "T-SQL and Performance Tuning" type of person. I do have a good understanding of the Business Intelligence suite, and I also do certain training sessions on the same subject. When I was writing the blog post for T-SQL Tuesday #005 – Reporting, I realized that I have written posts that clearly explain how to generate reports using SQL Server Management Studio. Here is a quick recap on how one can use SSMS and its out-of-the-box reports, which can help many developers. Please note that they can be resource-intensive as well, so please use SSMS carefully.

        SQL SERVER – Generate Report for Index Physical Statistics – SSMS
        SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS
        SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    Junior developers and DBAs can use these reports right away and can also start learning about and exploring most database performance issues with the help of senior DBAs. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports

    Read the article

  • SQL SERVER – What is Fill Factor and What is the Best Value for Fill Factor

    - by pinaldave
    Working in the performance tuning area, one has to know about indexes and index maintenance. For any index, the most important property is the fill factor. The fill factor is the value that determines the percentage of space on each leaf-level page to be filled with data. In SQL Server, the smallest unit of storage is the page, which is 8 KB in size. Every page can store one or more rows, based on the size of the row. The default value of the fill factor is 100, which is the same as the value 0. The default fill factor (100 or 0) allows SQL Server to fill the leaf-level pages of an index with the maximum number of rows that will fit; there will be little or no empty space left on the page when the fill factor is 100. I have written the following two articles about fill factor: What is Fill factor? – Index, Fill Factor and Performance – Part 1, and What is the best value for the Fill Factor? – Index, Fill Factor and Performance – Part 2. I strongly encourage you to read them and provide your feedback. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
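
    (To make the trade-off concrete, here is some back-of-envelope Python arithmetic. The 8,060-byte figure is SQL Server's approximate per-page data space; the row size and row count are made-up examples.)

        # Sketch: rough effect of fill factor on page count (illustrative numbers only).
        PAGE_DATA_BYTES = 8060        # approx. usable row-data space per 8 KB page
        row_size = 200                # hypothetical row size in bytes
        total_rows = 1_000_000        # hypothetical table size

        for fill_factor in (100, 90, 70):
            rows_per_page = (PAGE_DATA_BYTES * fill_factor // 100) // row_size
            pages = -(-total_rows // rows_per_page)      # ceiling division
            print("fill factor %3d: %2d rows/page, %6d pages"
                  % (fill_factor, rows_per_page, pages))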

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script

    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify unused indexes on any database. Please note that you should not drop all of the unused indexes this script suggests; this is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to User Scans, User Lookups and User Updates when you are going to drop an index. The general understanding is that if these values are all high and User Seeks is low, the index needs tuning. The index drop statement is also provided in the last column.

        -- Unused Index Script
        -- Original Author: Pinal Dave (C) 2011
        SELECT TOP 25
            o.name AS ObjectName,
            i.name AS IndexName,
            i.index_id AS IndexID,
            dm_ius.user_seeks AS UserSeek,
            dm_ius.user_scans AS UserScans,
            dm_ius.user_lookups AS UserLookups,
            dm_ius.user_updates AS UserUpdates,
            p.TableRows,
            'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.'
                + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
        FROM sys.dm_db_index_usage_stats dm_ius
        INNER JOIN sys.indexes i
            ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
        INNER JOIN sys.objects o
            ON dm_ius.OBJECT_ID = o.OBJECT_ID
        INNER JOIN sys.schemas s
            ON o.schema_id = s.schema_id
        INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                    FROM sys.partitions p
                    GROUP BY p.index_id, p.OBJECT_ID) p
            ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
        WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
            AND dm_ius.database_id = DB_ID()
            AND i.type_desc = 'nonclustered'
            AND i.is_primary_key = 0
            AND i.is_unique_constraint = 0
        ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip

    Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the tech lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5- and M6-based systems. Partner after partner tested these systems and was impressed with their performance; just read some of the quotes to see what our partners have been saying about our hardware. The new M7 has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). Built with 20 nm technology, it is our most advanced processor, with more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released 5 processors in 4 years, and this is the 6th.

    The S4 Core

    The M7 is built on the foundation of the S4 core, the next generation of core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250 ns granularity. Each core also includes Software in Silicon features for application acceleration support, as well as features to improve application data integrity with almost no performance loss. The core also allows using part of the virtual address to store metadata, and user-level synchronization instructions are part of the S4 as well. Each core has a 16 KB instruction and a 16 KB data L1 cache.

    The Core Clusters

    The cores on the M7 chip are organized in 4-core clusters that share L2 cache. All four cores in a cluster share 256 KB of 4-way set-associative L2 instruction cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8-way set-associative L2 data cache, also with over 1/2 TB/s of throughput. With this innovative core-cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

    The Chip

    Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks; each chunk is an 8-way set-associative cache. The aggregate bandwidth of the L3 cache on the chip is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly, and 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips would have 1,024 cores, 8,192 threads, and 64 TB of RAM.

    Software in Silicon

    The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows the use of part of the virtual address to store metadata.

    Realtime Application Data Integrity

    The Realtime Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, applications, and the OS.

    M7 Database In-Memory Query Accelerator

    The M7 chip also includes in-silicon query engines that accelerate tasks operating on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format. The M7 query engine can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. This engine can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.

    SPARC Accelerated Program

    At the Hot Chips conference we also introduced the SPARC Accelerated Program to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world-record performance with sub-second response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved a throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:

        System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
        SPARC T4-4   430,000        7,300    0.85                 0.43

        * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results

    Hardware configuration:

        SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 1 TB memory
        Data storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
        Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

    Software configuration:

        Oracle Solaris 11 11/11
        Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries, along with the query mix.

    Key Points and Best Practices

    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.

    The calculation of the maximum number of users follows from the well-known response-time law

        N = (rt + tt) * tp

    where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour), the formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.

    See Also

        Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
        Oracle Database 11g – Oracle OLAP (oracle.com, OTN)
        SPARC T4-4 Server (oracle.com, OTN)
        Oracle Solaris (oracle.com, OTN)
        Oracle Database 11g Release 2 (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

        performance
        ease of use (allow streaming or formatting or something like that)
        reliability (don't leak or crash!)
        cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared those libs and more, your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed:

        bitwise operations
        integer addition / subtraction
        integer multiplication / division
        comparison
        control flow
        float addition / subtraction
        float multiplication / division

    Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32 / float32 / float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of an opcode is determined by the number of CPU cycles it takes to execute. So does that mean multiply takes 4 cycles and add takes 2? Exactly what are the speed characteristics of the basic math and control flow opcodes? If two opcodes take the same number of cycles to execute, can both be used interchangeably without any performance gain/loss? Any other technical details you can share regarding x86 CPU performance are appreciated.

    Read the article
