Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • Logging library for (c++) games

    - by Klaim
    I know of a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons to look for a logging library (for example, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need: performance; ease of use (allowing streaming or formatting or something like that); reliability (no leaks or crashes!); and cross-platform support (at least Windows, MacOSX, Linux/Ubuntu) - which logging library would you recommend? Currently I think boost::log is the most flexible one (you can even log remotely!), but its performance is not good. Pantheios is often cited, but I don't have comparison points on its performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding and complex to debug, so it would be good to know which logging libraries have, in our specific case, clear advantages.
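
    Whichever library wins, the multithreading requirement usually comes down to the same design: callers enqueue messages cheaply, and a single background thread owns the output file. A minimal sketch of that producer/consumer pattern follows (in C# for brevity; the pattern itself is language-agnostic, and all names are illustrative):

      using System;
      using System.Collections.Concurrent;
      using System.IO;
      using System.Threading.Tasks;

      // Producers enqueue; one background task owns the file,
      // so the hot path never touches the file handle or a lock.
      public sealed class AsyncLogger : IDisposable
      {
          private readonly BlockingCollection<string> queue =
              new BlockingCollection<string>(8192);   // bounded: backpressure, no runaway memory
          private readonly Task writer;

          public AsyncLogger(string path)
          {
              writer = Task.Factory.StartNew(() =>
              {
                  using (var file = new StreamWriter(path))
                      foreach (var line in queue.GetConsumingEnumerable())
                          file.WriteLine(line);
              }, TaskCreationOptions.LongRunning);
          }

          // Thread-safe; blocks only if the queue is full.
          public void Log(string message)
          {
              queue.Add(DateTime.UtcNow.ToString("o") + " " + message);
          }

          public void Dispose()
          {
              queue.CompleteAdding();   // lets the writer drain and exit
              writer.Wait();
          }
      }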

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed:

      Bitwise
      Integer Addition / Subtraction
      Integer Multiplication / Division
      Comparison
      Control flow
      Float Addition / Subtraction
      Float Multiplication / Division

    Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32 / float32 / float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of opcodes is determined by the number of CPU cycles they take to execute. So does that mean multiply takes 4 cycles and add takes 2? Exactly what are the speed characteristics of the basic math and control-flow opcodes? If two opcodes take the same number of cycles to execute, can both be used interchangeably without any performance gain or loss? Any other technical details you can share regarding x86 CPU performance are appreciated.
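
    As a rough intuition: on modern x86, integer add/sub and bitwise ops have a latency of about 1 cycle, integer multiply around 3, and division tens of cycles, but pipelining and superscalar issue mean cycle counts rarely translate directly into wall time. One crude way to feel the difference is a dependent-chain micro-benchmark; here is a minimal sketch (C#; results vary wildly with JIT and CPU, so treat it as illustrative only):

      using System;
      using System.Diagnostics;

      // Times a dependent chain of adds vs. multiplies.
      // Printing x keeps the JIT from discarding the loops.
      class OpCost
      {
          static void Main()
          {
              const int N = 200000000;
              long x = 1;

              var sw = Stopwatch.StartNew();
              for (int i = 0; i < N; i++) x += 3;   // each add depends on the last
              Console.WriteLine("add: " + sw.ElapsedMilliseconds + " ms (" + x + ")");

              x = 1;
              sw.Restart();
              for (int i = 0; i < N; i++) x *= 3;   // each multiply depends on the last
              Console.WriteLine("mul: " + sw.ElapsedMilliseconds + " ms (" + x + ")");
          }
      }

    For authoritative numbers, instruction latency/throughput tables (such as Agner Fog's) are the usual reference rather than micro-benchmarks like this.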

    Read the article

  • DevDays ‘10 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010 brought another 5 interesting sessions at the World Forum in The Hague. The first session of the day, in the big World Forum theater, was from Scott Hanselman, who gave a lap around .NET 4.0. In his way of presenting he talked about all kinds of new features of .NET 4.0, like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF, and all the new language features of .NET 4.0. After a small break it was time for session 2, from Scott Allen, about tips, tricks, and optimizations for LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable, and the two flavors of LINQ syntax. The lunch was again well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was no normal session but more like a workshop: we decided what kinds of subjects we discussed, and the subjects were OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0, from Alex van Beek. He talked about optional and named parameters, generic co- and contra-variance, the dynamic keyword, and COM interop features. He showed how to use them but also when not to use them. The last session of the day, and also the last session of DevDays 2010, was about WCF best practices, from Gerben van Loon. He talked about 7 best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days, and next year I will surely be attending again.

    Read the article

  • Top 10 Posts in 2010

    - by dwahlin
    Blogging’s a lot of fun and a great way to share what you’ve learned. It’s also a great way to learn, based upon comments people leave that help you see things in an entirely new way in some cases. Since we’ve now moved on to 2011 (Happy New Year’s!) I wanted to list the Top 10 posts from my blog during 2010 based on individual views. Thanks to everyone who follows my blog and adds comments from time to time. Here’s wishing everyone a great 2011!

      1. Reducing Code by Using jQuery Templates
      2. Integrating HTML into Silverlight Applications
      3. Silverlight is Dead, the Moon is Made of Cheese, and HTML 5 is Ready for Prime Time
      4. Understanding the Role of Commanding in Silverlight 4 Applications
      5. New Article – Getting Started with WCF RIA Services
      6. Simplify Your Code with LINQ
      7. My Favorite iPad Apps….So Far
      8. Final Release of Silverlight Tools for Visual Studio 2010 Released
      9. Handling WCF Service Paths in Silverlight 4 – Relative Path Support
      10. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    Getting Started with the MVVM Pattern in Silverlight Applications – Posted late 2009, so I’m giving it honorable mention status since it’s still one of the most popular posts.

    Read the article

  • Service Testing made easy with SO-Aware Test Workbench

    - by cibrax
    I'm happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services. This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers. Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with the specific details of the service implementation, location, or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or service configuration. Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find today in the tool. You can also see the tool in action in this endpoint.tv episode with Jesus and Ron Jacobs.

    Read the article

  • If I implement a web-service, how do I respond to POST requests with JSON?

    - by Vova Stajilov
    I have to build a rather complex system for my diploma work. Logically it will consist of the following components: a database; a web service; a management component with a web interface; and a client iOS application that consumes the web service. I decided to implement the first three components on .NET. First I will create the database, depending on the information load - this is clear. But then I need a web service that returns data in JSON format for the iOS clients to consume - that's obvious and not that hard to implement. For this I will use WCF. Now I have a question: if I implement the web service, how will I be able to respond to POST requests with JSON? It probably involves WCF's JSON support or something related? But I also need some web pages for the admin part, so will that web application be able to consume my centralized web services as well, or should I develop it separately? I just want my web service to act like a set of controllers. There is a related question here, but it doesn't quite reflect my question.
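
    In WCF this is what the web programming model (webHttpBinding) is for: decorate an operation with WebInvoke and ask for JSON on both sides. A minimal sketch, with hypothetical contract and DTO names:

      using System.Runtime.Serialization;
      using System.ServiceModel;
      using System.ServiceModel.Web;

      [DataContract]
      public class OrderDto                 // hypothetical payload type
      {
          [DataMember] public int Id { get; set; }
          [DataMember] public string Status { get; set; }
      }

      [ServiceContract]
      public interface IOrderService        // hypothetical service
      {
          // POST /orders/submit with a JSON body, JSON response
          [OperationContract]
          [WebInvoke(Method = "POST", UriTemplate = "orders/submit",
                     RequestFormat = WebMessageFormat.Json,
                     ResponseFormat = WebMessageFormat.Json)]
          OrderDto Submit(OrderDto order);
      }

    Hosted with WebServiceHost (or an endpoint configured with the webHttp behavior on webHttpBinding), WCF serializes to and from JSON automatically; the admin web application can call the same endpoints rather than needing a separate service layer.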

    Read the article

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague about how to handle SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service - should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be." On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture, and it ignores the general idea that there are problems users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause." I'd like to know what you think about this. Thank you.
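
    For what it's worth, WCF can express the colleague's position directly: user-recoverable conditions become typed fault contracts that are part of the published contract, while unexpected runtime failures stay opaque, which preserves most of Lowy's decoupling argument for the system-exception half of the domain. A minimal sketch (service and fault names are hypothetical):

      using System.Runtime.Serialization;
      using System.ServiceModel;

      [DataContract]
      public class ValidationFault          // hypothetical recoverable-error detail
      {
          [DataMember] public string Field { get; set; }
          [DataMember] public string Message { get; set; }
      }

      [ServiceContract]
      public interface IAccountService
      {
          [OperationContract]
          [FaultContract(typeof(ValidationFault))]   // published as part of the contract
          void Register(string email);
      }

      public class AccountService : IAccountService
      {
          public void Register(string email)
          {
              if (string.IsNullOrEmpty(email) || !email.Contains("@"))
                  throw new FaultException<ValidationFault>(
                      new ValidationFault { Field = "email", Message = "Not a valid address" });
              // anything unexpected thrown here reaches the client only
              // as an opaque, indistinguishable fault (Lowy's default)
          }
      }

    The client can then catch FaultException<ValidationFault> specifically and treat every other fault as a generic system failure.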

    Read the article

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives an object, it performs some business logic on the data and then sticks it in its own database, which mirrors the database on the client machines. The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. Now, we are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data that the clients are using seems ridiculously large, given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame. I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
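
    One commonly suggested alternative is to stop serializing tracked entities and instead project only the new rows into flat DTOs on the client before calling the service. A sketch of the idea, assuming a hypothetical MyDbContext with a Readings set and an Uploaded flag (all names illustrative):

      using System;
      using System.Linq;
      using System.Runtime.Serialization;

      [DataContract]
      public class ReadingDto               // just the fields the server needs
      {
          [DataMember] public int SensorId { get; set; }
          [DataMember] public decimal Value { get; set; }
          [DataMember] public DateTime TakenAt { get; set; }
      }

      public static class UploadBatch
      {
          // Project only un-synced rows into flat DTOs; tracked EF
          // entities drag context and relationship baggage over the wire.
          public static ReadingDto[] Pending(MyDbContext db)   // MyDbContext is hypothetical
          {
              return db.Readings
                       .Where(r => !r.Uploaded)
                       .Select(r => new ReadingDto
                       {
                           SensorId = r.SensorId,
                           Value = r.Value,
                           TakenAt = r.TakenAt
                       })
                       .ToArray();
          }
      }

    Combined with the existing compressing encoder, sending arrays of small DTOs rather than entity graphs usually shrinks the payload considerably, at the cost of a mapping step on each side.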

    Read the article

  • What CPU hardware performance counter tool do you guys use?

    - by Hao Shen
    I just want to know which popular tools I can use. Originally I used Perfmon2. However, I recently installed the latest Ubuntu 12.04 on my Ivy Bridge i7 machine, and now there is a compilation error building pfmon. :< From http://comments.gmane.org/gmane.comp.linux.perfmon2.devel/3255 it seems that Perfmon2 will not work after kernel 2.6.30, and the suggestion there is to use the perf tool instead. I just want to confirm: is there any popular application-level performance counter monitoring tool that can be used with later kernel versions?

    Read the article

  • SQL Server "Long running transaction" performance counter: why no workee?

    - by Sleepless
    Please explain to me the following observation. I have the following piece of T-SQL code that I run from SSMS:

      BEGIN TRAN
      SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
      SELECT COUNT(*) FROM m WHERE m.[x] = 123456 OR m.[y] IN (SELECT f.x FROM f)
      COMMIT TRAN

    The query takes about twenty seconds to run, and I have no other user queries running on the server. Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise constantly up to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds and then oscillates between 12 and 14 for the duration of the query, after which it drops again. According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?
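
    One way to pin down the behavior is to poll the counter yourself once a second while the query runs, rather than relying on the Perfmon graph's own sampling. A small sketch (replace SQLInstanceName with your actual instance name):

      using System;
      using System.Diagnostics;
      using System.Threading;

      class WatchCounter
      {
          static void Main()
          {
              var c = new PerformanceCounter(
                  "MSSQL$SQLInstanceName:Transactions",
                  "Longest Transaction Running Time");

              for (int i = 0; i < 30; i++)   // covers the ~20 s transaction
              {
                  Console.WriteLine(i + "s: " + c.NextValue());
                  Thread.Sleep(1000);
              }
          }
      }

    Logging the raw per-second values makes it easier to tell whether the oscillation is real counter behavior or an artifact of how the graph samples and scales it.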

    Read the article

  • What could cause Vista machines to degrade in performance?

    - by gencha
    We have installed several machines running Windows Vista at a client site. These machines run the same set of applications each day and are not operated by any user; they mainly run a 3D application for digital signage purposes. After a few months I noticed that several machines suffer from really bad stuttering, and the performance is unacceptable. At first I figured it was faulty hardware, but after putting a fresh image on them, they run perfectly fine again. These are Core i7 machines with GTX 280 graphics adapters. They should be able to handle the workload without any trouble (and they do, at least for a while). I really have no idea how to track down the source of this issue, especially since not all of the machines display the same symptoms.

    Read the article

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based solution

    - by Gineer
    We have a solution built in .NET that will be installed in a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add to a MOM pack (as an alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks
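
    Before wiring counters into the MOM pack, it can help to confirm which counter categories actually exist on the target servers. A quick sketch; the category names below are the usual ones on .NET 4 / IIS, but they vary by framework and Windows version, so verify them on the client's machines:

      using System;
      using System.Diagnostics;

      class ListCategories
      {
          static void Main()
          {
              string[] candidates =
              {
                  "ASP.NET Applications",
                  "ServiceModelService 4.0.0.0",   // WCF service counters
                  "Windows Workflow Foundation",   // WF counters (version-dependent)
                  ".NET CLR Memory",
                  "Web Service"                    // IIS per-site counters
              };

              foreach (string name in candidates)
                  Console.WriteLine(name + ": " + PerformanceCounterCategory.Exists(name));
          }
      }

    Note that the WCF counters only appear once performance counters are enabled for the services in configuration.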

    Read the article

  • How does anti-virus on the host machine affect performance of virtual machines?

    - by Ladislav Mrnka
    I'm diagnosing an issue with Oracle VirtualBox where a virtual machine sometimes performs terribly slowly (much slower than a notebook with a worse configuration):

      Notebook: i7 (2 cores with HT = 4 logical CPUs), 4 GB RAM, 5400 rpm disk, Win 7 64-bit
      Virtual machine host: i7 (4 cores with HT = 8 logical CPUs), 12 GB RAM, system runs from SSD, virtual machine from a 7200 rpm disk, Win 7 64-bit
      Virtual machine: 4 cores assigned, 8 GB RAM assigned, Win 2008 R2 Enterprise (64-bit)
      The virtual machine uses a bridge to a separate network interface (the machine has two)
      VPN is used for network communication
      No other virtual machine runs on the host
      The host has ESET Smart Security installed
      All software is updated to the latest version

    My question is whether anti-virus on the host machine can somehow affect the performance of the virtual machine and, if so, how I can turn that off without turning off the anti-virus itself.

    Read the article

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed across 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference this makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of any service where we could piggyback off their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

    Read the article

  • Does NTFS performance degrade significantly in volumes larger than five or six TB?

    - by Josh Yeager
    One of my customers is planning to set up a new document store, which will probably grow by 1-2TB per year. One of my co-workers says that Windows performance is extremely bad if it has a single NTFS volume that is bigger than five or six TB. He thinks that we need to set up their system with multiple volumes so that no single volume will exceed that limit. Is this a real problem? Does Windows or NTFS slow down when the volume size reaches several terabytes? Or is it possible to create a single volume of 10 or more TB?

    Read the article

  • Is there a MySQL performance benchmark to measure the impact of utf8_unicode_ci versus utf8_general_ci?

    - by MiniQuark
    I read here and there that using the utf8_unicode_ci collation ensures better treatment of Unicode text (for example, it knows how to expand characters such as 'œ' into 'oe' for searching and ordering) compared to the default utf8_general_ci, which basically just strips diacritics. Unfortunately, both sources indicate that utf8_unicode_ci is slightly slower than utf8_general_ci. So my question is: what does "slightly slower" mean? Has anyone run benchmarks? Are we talking about a -0.01% performance impact or rather something like -25%? Thanks for your help.
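
    The honest answer for any specific workload is to benchmark it yourself: run the same collation-sensitive query under both collations and compare wall time. A rough sketch using the MySQL Connector/NET client; table, column, and connection details are placeholders:

      using System;
      using System.Diagnostics;
      using MySql.Data.MySqlClient;

      class CollationBench
      {
          static void Main()
          {
              using (var cn = new MySqlConnection(
                  "server=localhost;database=test;uid=bench;pwd=secret"))
              {
                  cn.Open();
                  Time(cn, "utf8_general_ci");
                  Time(cn, "utf8_unicode_ci");
              }
          }

          static void Time(MySqlConnection cn, string collation)
          {
              // Same sort, forced under each collation in turn.
              var cmd = new MySqlCommand(
                  "SELECT name FROM people ORDER BY name COLLATE " + collation, cn);
              var sw = Stopwatch.StartNew();
              using (var r = cmd.ExecuteReader())
                  while (r.Read()) { }   // drain so the server does the full sort
              Console.WriteLine(collation + ": " + sw.ElapsedMilliseconds + " ms");
          }
      }

    Run each variant several times on a warm cache so you are measuring collation cost rather than I/O.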

    Read the article

  • How can I get a Windows server to collate and email daily reports/graphs for server performance

    - by Glyn Darkin
    I run a Windows 2008 web server and would like to set up the most basic performance monitoring in the world. What I would like is:

      a plot of ASP.NET request time for each w3wp process over a 24-hour period
      a plot of CPU % utilisation for each w3wp process over a 24-hour period
      a plot of memory utilisation for each w3wp process over a 24-hour period
      a plot of network utilisation for each w3wp process over a 24-hour period
      a plot of disk utilisation for each w3wp process over a 24-hour period

    I would like these plots to be emailed to me each morning. Does anybody know the simplest way to set this up? Thanks for your help in advance. Glyn
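
    If you don't want a full monitoring product, one low-tech approach is a scheduled task that samples the counters to CSV all day and mails the file each morning (Perfmon's built-in Data Collector Sets can also do the sampling part). A sketch for a single counter, with placeholder SMTP details; note that with several worker processes the instances are named w3wp, w3wp#1, and so on:

      using System;
      using System.Diagnostics;
      using System.IO;
      using System.Net.Mail;
      using System.Threading;

      class DailyCounterReport
      {
          static void Main()
          {
              // One counter shown; add more PerformanceCounter instances as needed.
              // The first sample of a rate counter is always 0.
              var cpu = new PerformanceCounter("Process", "% Processor Time", "w3wp");
              string csv = "w3wp-cpu-" + DateTime.Today.ToString("yyyyMMdd") + ".csv";

              for (int i = 0; i < 24 * 60; i++)   // one sample per minute for 24 h
              {
                  File.AppendAllText(csv,
                      DateTime.Now.ToString("HH:mm") + "," +
                      cpu.NextValue().ToString("F1") + Environment.NewLine);
                  Thread.Sleep(60000);
              }

              // Placeholder addresses and SMTP host.
              using (var mail = new MailMessage("server@example.com", "you@example.com"))
              using (var smtp = new SmtpClient("smtp.example.com"))
              {
                  mail.Subject = "Daily w3wp performance report";
                  mail.Attachments.Add(new Attachment(csv));
                  smtp.Send(mail);
              }
          }
      }

    The CSV can then be charted in Excel, or the sampler extended to render images before mailing.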

    Read the article

  • Performance problem with an Intel Wireless 5100

    - by Piet
    I just installed 12.04 on an HP EliteBook 2530p. Wireless performance connecting to an Eminent 4551 router is very bad, while a wired connection works great: if I connect to the router wired, I get good performance accessing internet pages with Firefox, but when I try to access the same internet pages wirelessly, the performance is really bad.

      piet@piet-HP-EliteBook-2530p:~$ lspci
      00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
      00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:03.0 Communication controller: Intel Corporation Mobile 4 Series Chipset MEI Controller (rev 07)
      00:03.2 IDE interface: Intel Corporation Mobile 4 Series Chipset PT IDER Controller (rev 07)
      00:03.3 Serial controller: Intel Corporation Mobile 4 Series Chipset AMT SOL Redirection (rev 07)
      00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03)
      00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
      00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
      00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
      00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
      00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
      00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
      00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
      00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
      00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
      00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
      00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
      00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
      00:1f.0 ISA bridge: Intel Corporation ICH9M-E LPC Interface Controller (rev 03)
      00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
      02:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection
      44:06.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 05)
      44:06.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 22)
      piet@piet-HP-EliteBook-2530p:~$

    Read the article

  • SQL SERVER – SQL Server Performance: Indexing Basics – SQL in Sixty Seconds #006 – Video

    - by pinaldave
    A DBA's role is critical because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analyzing, and maintaining indexes. Though we learned indexing concepts in our college days, the implementation of indexing inside SQL Server can vary. Understanding this behavior and designing our applications appropriately will make sure the application performs to its highest potential. Vinod Kumar and I have often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexing, but there is a minimum level of expertise one should gain if performance is a concern. More on indexes: SQL Index, SQL Performance. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Here is the interview of Vinod Kumar by myself. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was that "local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess". What that means is you should avoid code like this in your stored procs, even though it seems like such an intuitively good idea:

      DECLARE @StartDate datetime
      SET @StartDate = '20091125'
      SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead, you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

      SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well suited for writing robust concurrent software:

      Functional programming languages, e.g. Haskell, Scala, etc.
      The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages.

    My questions:

      What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) with the above languages / models? Is this area still being explored, or are there well-established practices already?
      Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices?
      How does the performance of highly concurrent software compare to that of more traditional software when executed on multi-core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multi-core processor due to higher parallelism?

    Read the article

  • Configuration Tips for better Performance with ADF Mobile Apps

    - by SRINI INDLA
    Some tips to keep in mind to make sure an ADF Mobile application's performance is optimal:

      1. Select release mode in the deployment profile. This is perhaps the most important thing to remember to ensure the best performance for ADF Mobile apps. Selecting this option causes the deployer to package an optimized JVM and minified JS libraries with the mobile app, thereby significantly improving the overall performance of the application.
      2. For iOS you do not need to do anything else other than selecting release mode in the deploy profile. However, on Android you have to create a keystore and configure it in JDev --> Tools --> Preferences --> ADF Mobile --> Platforms: Android, as shown in the snapshot (not reproduced here).
      3. Steps for generating the keystore for Android using keytool (shown in a snapshot in the original post).
      4. Logging level setting in logging.properties: make sure the log level is set to SEVERE for both the framework logger and the application logger, as follows:
         oracle.adfmf.framework.level=SEVERE
         oracle.adfmf.application.level=SEVERE
      5. When using SOAP web services with the WebService Data Control, make sure you select the option to copy the WSDL. This will cause JDev to download the WSDL, and all the XSDs referenced by the WSDL, from the server at design time and package them with the application during deployment. This way the application does not incur the cost of downloading these resources at run time on the device.

    Read the article

  • Bad Performance With Games Under Wine in Ubuntu 12.04

    - by Pandem
    I'm somewhat new to Linux. I've dabbled with it before, but never more than just experimentation in a VM or a dual boot, where I almost always reverted back to Windows soon after (more due to a lack of commitment than a dislike for the OS). Anyway, my problem is that in the two games I've tried so far (Guild Wars and Team Fortress 2) I get fairly bad performance. Despite both games having a "Platinum" rating, I've been forced to run both in DirectX 8 mode; otherwise they either crash, have texture defects, or are missing parts of the UI. By "bad performance" I mean sub-30 FPS when other players are around, plus regular large framerate drops, even with video settings on low. Currently I believe the performance issues are just a driver problem, because my graphics card is an AMD 7850, which is virtually brand new and probably not properly supported yet, but I'm unsure and would appreciate advice or tips to improve things. I have Ubuntu 12.04 installed and am usually logged into a GNOME Classic session. I have installed AMD's proprietary drivers (Catalyst version 12.4) and have Wine 1.4 installed. I have used Winetricks to install DX9/DX10 DLLs and other things.

    Read the article

  • Oracle ZFSSA Hybrid Storage Pool Demo

    - by Darius Zanganeh
    The ZFS Hybrid Storage Pool (HSP) has been around since the ZFSSA first launched. It is one of the main contributors to the high performance we see on the Oracle ZFSSA, both in benchmarks and in many production environments. Below is a short video I made to show, at a high level, just how impactful the HSP is on storage performance. We squeeze a ton of performance out of our drives with our unique combination of cache, write-optimized SSD, and read-optimized SSD. Many have written and blogged about this technology; here it is in action: a demo of the Oracle ZFSSA Hybrid Storage Pool and how it speeds up workloads.

    Read the article

  • "Optimal" game loop for 2D side-scroller

    - by MrDatabase
    Is it possible to describe an "optimal" (in terms of performance) layout for a 2D side-scroller's game loop? In this context the "game loop" takes user input, updates the states of game objects, and draws the game objects. For example, having a GameObject base class with a deep inheritance hierarchy could be good for maintenance... you can do something like the following:

      foreach(GameObject g in gameObjects) g.update();

    However, I think this approach can create performance issues. On the other hand, all game objects' data and functions could be global, which would be a maintenance headache but might be closer to an optimally performing game loop. Any thoughts? I'm interested in practical applications of near-optimal game loop structure... even if I get a maintenance headache in exchange for great performance.
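
    A common middle ground between the deep hierarchy and the everything-global approach is to keep homogeneous collections and update each in its own tight loop, which avoids per-object virtual dispatch and improves cache locality without giving up structure. A sketch of the idea (all type names illustrative):

      using System.Collections.Generic;

      // Homogeneous collections updated in tight loops: no virtual
      // dispatch, and the bullet array is contiguous in memory.
      class World
      {
          Bullet[] bullets = new Bullet[1024];   // structs, stored contiguously
          int bulletCount;
          List<Enemy> enemies = new List<Enemy>();

          public void Update(float dt)
          {
              for (int i = 0; i < bulletCount; i++)
                  bullets[i].Move(dt);           // array element mutated in place
              for (int i = 0; i < enemies.Count; i++)
                  enemies[i].Think(dt);
          }
      }

      struct Bullet
      {
          public float X, Y, Vx, Vy;
          public void Move(float dt) { X += Vx * dt; Y += Vy * dt; }
      }

      class Enemy
      {
          public float X, Y;
          public void Think(float dt) { /* AI, movement, animation */ }
      }

    Plain arrays are used for the numerous, simple objects (structs in an array can be mutated in place, unlike structs in a List), while rarer, more complex objects can stay as classes without hurting the hot path.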

    Read the article
