Search Results

Search found 30217 results on 1209 pages for 'website performance'.

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory makes Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, only to discover that there's a lot more to building a system than buying hardware and assembling it. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network in a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

    Engineering Systems is Hard Work!

    The backbone of Exalogic is its InfiniBand network: four times the bandwidth of even 10 Gigabit Ethernet, at only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: it is true that Exalogic uses a standard, open-source OpenFabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with top speed in mind (which is good) but, despite the name, without many other enterprise-class requirements (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't need them:

    - More than 100 bug fixes in the pieces not related to the Message Passing Interface (MPI), the protocol that HPC users use most of the time but which is less useful in the enterprise.
    - Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.
    - Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached!
    - IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.
    - HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy; if any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to WebLogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.
    - Security, for example spoof protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.
    - InfiniBand Partitioning and Quality of Service (QoS): one of the first questions we get from customers about Exalogic is "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most.
    - Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

    In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams' worth of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

    Exabus: Because it's not About the Size of Your Network, it's How You Use it!

    So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Either way, the next step is to actually leverage that InfiniBand performance. Here are the choices: 1. Use traditional TCP/IP on top of the InfiniBand stack. 2. Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols. While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications on any networking infrastructure: the lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason InfiniBand is such a low-latency technology is that it gets rid of most if not all of the traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server, with no kernel/driver/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which leaves only option 2: integrating your middleware with fast, low-level InfiniBand protocols.

    And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

    - RDMA and SDP integration at the JDBC driver level (SDP), for Oracle WebLogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate with each other and with the Oracle database a lot faster than over standard networking protocols.
    - Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.
    - Oracle WebLogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel.
    - Integration of WebLogic with Oracle Exadata for faster performance, optimized session management and failover.

    As you can see, "Exabus" is Oracle's word for all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware levels - all working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open-source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

    The Extras: Less Hassle, More Productivity, Faster Time to Market

    And then there are the other advantages of Engineered Systems that are as true for Exalogic as they are for every other Engineered System:

    - One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.
    - Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would otherwise only have manifested themselves after you had built the system from scratch.
    - Everything is built, tested and integrated at the factory. Less integration pain for you, faster time to market.
    - Every Exalogic machine worldwide is identical to Oracle's own machines in the lab: instant replication of any problems you may encounter, faster time to resolution.
    - Simplified patching, management and operations.
    - One throat to choke: imagine finger-pointing hell for systems that have been put together from several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

    For more business-centric values, read The Business Value of Engineered Systems.

    Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project

    And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who starts to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand was reserved for HPC scientists for such a long time. And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the recently updated Exalogic overview white paper.

    P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • SQLite INTERSECT gives a huge performance decrease

    - by Derk
    I have a query that runs in less than 1 ms:

      SELECT product_to_value.category AS category,
             features.name AS featurename,
             featurevalues.name AS valuename
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      LIMIT 10

    However, when I combine it with another query using INTERSECT, the query now takes more than 250 ms:

      SELECT product_to_value.category AS category,
             features.name AS featurename,
             featurevalues.name AS valuename
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      INTERSECT
      SELECT product_to_value.category AS category,
             features.name AS featurename,
             featurevalues.name AS valuename
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      LIMIT 10

    This can't be right. I've tried several index combinations, for example an index on all the columns I use in the query, but to no avail. I've tried compound indexes as well, but they only slow things down even more. I have read a few things about SQLite and how it treats indexes. I know SQLite is capable of delivering sick performance, so surely I must be overlooking something.
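
    One way to see where the extra 250 ms goes is to compare SQLite's plans for the two statements (a diagnostic sketch, not from the question; tables, columns and bound parameters are the asker's):

      -- Plan for the fast standalone query: the LIMIT can stop the scan early.
      EXPLAIN QUERY PLAN
      SELECT product_to_value.category, features.name, featurevalues.name
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      LIMIT 10;

      -- Plan for the compound query: with INTERSECT, SQLite typically
      -- materializes both sides (visible as TEMP B-TREE steps) before the
      -- LIMIT applies, so each side runs to completion - a plausible source
      -- of the slowdown even though the two sides are individually fast.
      EXPLAIN QUERY PLAN
      SELECT product_to_value.category, features.name, featurevalues.name
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      INTERSECT
      SELECT product_to_value.category, features.name, featurevalues.name
      FROM product_to_value, features, featurevalues
      WHERE product_to_value.category IN (:int, :bla, :bla1)
        AND product_to_value.feature = features.int
        AND product_to_value.value = featurevalues.int
      LIMIT 10;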

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs - increasing ROI and creating competitive advantage. Oracle Database, the industry's most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services.

    Spatial Features: The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.

    Network Data Model Graph Features: The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capabilities such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis.

    RDF Semantic Graph Features: RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL.

    Video: Oracle Spatial and Graph Overview

    Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle Universal Installer (OUI). OUI is used to install Oracle Database software: it is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using:

      SELECT comp_id, version, status, comp_name FROM dba_registry WHERE comp_id = 'SDO';

    One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in the performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filters for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements.

    Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, either enter the following statement from a suitably privileged account:

      ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE;

    or add the following to the database initialization file (xxxinit.ora):

      SPATIAL_VECTOR_ACCELERATION = TRUE;

    To set the value for the current session, enter the following statement from a suitably privileged account:

      ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

    Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html - Spatial and Graph Data Sheet (PDF), Spatial and Graph White Paper (PDF)
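
    As a concrete illustration of the kind of operation the acceleration targets, here is a minimal distance query (a sketch only; the stores/customers tables, shape columns, tolerance and distance values are assumptions for illustration, not from the article):

      -- Run with SPATIAL_VECTOR_ACCELERATION = TRUE. sdo_distance is named in
      -- the article as one of the accelerated operations; the operator below
      -- is a common way to exercise it together with a spatial index.
      SELECT c.name,
             SDO_GEOM.SDO_DISTANCE(s.shape, c.shape, 0.005) AS distance_m
      FROM   stores s, customers c
      WHERE  SDO_WITHIN_DISTANCE(c.shape, s.shape, 'distance=10 unit=KM') = 'TRUE';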

    Read the article

  • ruby / rails / mysql performance degraded on Snow Leopard

    - by adamaig
    I've burned a bunch of hours on this. I'm not having problems getting things to build, but I am seeing that my test suite runs about 2x slower than it did on OS X 10.5.x. I've spent a lot of time playing around with different optimization settings (learning to avoid homebrew's llvm-gcc compilation). I've just learned that I needed to tweak /Library/Preferences/SystemConfiguration/com.apple.Boot.plist in order to get the kernel to boot in 64-bit mode. However, my Rails app is still running a bit slower than before, even after warming up the MySQL server. So what performance tweaks might I need to look into? Right now the stock Ruby 1.8.7 runs faster than 1.9.1 for some things, and I'd really like to know if there is anything I should be looking for. All my dev software has been compiled for x86_64, MySQL with -O2 optimization, using regular gcc (not llvm-gcc).

    Read the article

  • Performance of DrawingVisual vs Canvas.OnRender for lots of constantly changing shapes

    - by romkyns
    I'm working on a game-like app which has up to a thousand shapes (ellipses and lines) that constantly change at 60fps. Having read an excellent article on rendering many moving shapes, I implemented this using a custom Canvas descendant that overrides OnRender to do the drawing via a DrawingContext. The performance is quite reasonable, although the CPU usage stays high. However, the article suggests that the most efficient approach for constantly moving shapes is to use lots of DrawingVisual instances instead of OnRender. Unfortunately though it doesn't explain why that should be faster for this scenario. Changing the implementation in this way is not a small effort, so I'd like to understand the reasons and whether they are applicable to me before deciding to make the switch. Why could the DrawingVisual approach result in lower CPU usage than the OnRender approach in this scenario?

    Read the article

  • Views performance in MySQL for denormalization

    - by Gianluca Bargelli
    I am currently writing my truly first PHP application and I would like to know how to design and implement MySQL views properly. In my particular case, user data is spread across several tables (as a consequence of database normalization) and I was thinking of using a view to group the data into one large table:

      CREATE VIEW `Users_Merged` (name, surname, email, phone, role) AS
        (SELECT name, surname, email, phone, 'Customer' FROM `Customer`)
        UNION
        (SELECT name, surname, email, tel, 'Admin' FROM `Administrator`)
        UNION
        (SELECT name, surname, email, tel, 'Manager' FROM `manager`);

    This way I can use the view's data from the PHP app easily, but I don't really know how much this can affect performance. For example:

      SELECT * FROM `Users_Merged` WHERE role = 'Admin';

    Is this the right way to filter the view's data, or should I filter BEFORE creating the view itself? (I need this to get a list of users and the ability to filter them by role.)

    EDIT: Specifically, what I'm trying to obtain is denormalization of three tables into one. Is my solution correct? See Denormalization on Wikipedia.
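
    A quick way to check the performance question empirically (a diagnostic sketch; only the view itself comes from the question): because the view body contains UNION, MySQL cannot process it with the MERGE algorithm and falls back to TEMPTABLE, so the WHERE is usually applied only after the whole union has been materialized.

      -- If the plan shows a DERIVED/temporary table, the role filter runs
      -- against the materialized union rather than being pushed into it.
      EXPLAIN SELECT * FROM `Users_Merged` WHERE role = 'Admin';

      -- Filtering before the union avoids touching the other two tables:
      SELECT name, surname, email, tel, 'Admin' AS role FROM `Administrator`;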

    Read the article

  • JavaScript replace with callback - performance question

    - by Tomalak
    In JavaScript, you can define a callback handler in regex string replace operations:

      str.replace(/str[123]|etc/, replaceCallback);

    Imagine you have a lookup object of strings and replacements:

      var lookup = {"str1": "repl1", "str2": "repl2", "str3": "repl3", "etc": "etc"};

    and this callback function:

      var replaceCallback = function(match) {
        if (lookup[match]) return lookup[match];
        else return match;
      };

    How would you assess the performance of the above callback? Are there solid ways to improve it? Would

      if (match in lookup) // ...

    or even

      return lookup[match] || match;

    lead to opportunities for the JS compiler to optimize, or is it all the same thing?

    Read the article

  • Monitoring process-level performance counters in Windows Perfmon

    - by Dennis Kashkin
    I am sure everybody has bumped into this. As you scale a web server that uses multiple application pools, it's valuable to collect performance counters for each application pool 24x7. The only problem is - Perfmon links counters to application pools by process ID, so whenever an application pool recycles you have to remove the counters for the old process ID and add them for the new process ID. Since application pools recycle quite often (whenever you release a new version or patch the server), it's a major pain. I wonder if anybody has found a workaround for this? Perhaps a programmatic way to update Perfmon settings whenever an application pool starts up or some way to reference application pools by name instead of process ID? I'll appreciate any hints on this!

    Read the article

  • Live updates in website? (streamerapi)

    - by Miau
    Hi all: I was on http://finance.yahoo.com/ and checked the Europe tab (markets are open here at the moment), where you'll see trades updating live. I went to Firebug's Net tab and there were no updates, so I wonder how they are doing that? I can see there is a streamerapi.finance.yahoo.com that actually makes Fiddler throw an error. Does anyone know anything else about this live update? I'm really interested in this and would like to know whether they are constantly polling; the updates seem to happen at irregular intervals. I found some info here. Please note my interest is in the seemingly push-based aspect of it (i.e. the view is updated when there is new information). Cheers

    Read the article

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

      select .............
      from //one table and about 20 left joins//
      where (
        (this_.driverName like 'blah*' or this_.renterName like 'blah*')
        or exists (
          select this0__.id as y0_
          from ThirdParty this0__
          where this0__.name like 'blah*' and this0__.claim_id = this_.id
        )
      )
      order by this_.id asc

    And I have two environments: one with 175,000 records in table "this_" and a second with 25,000 records in table "this_". This query works fine on the 175k database, finishing in about 2 seconds, but on the 25k database it freezes. If I drop the following item from the where clause:

      (this_.driverName like 'blah*' or this_.renterName like 'blah*')
      or exists (
        select this0__.id as y0_
        from ThirdParty this0__
        where this0__.name like 'blah*' and this0__.claim_id = this_.id
      )

    the query runs normally. How can I increase the performance of this query?
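
    When an OR of this shape keeps the optimizer from using indexes, one common rewrite is to split it into a UNION so each branch can seek on its own index. A sketch only: Claim stands in for the unnamed main table, the 20 left joins are omitted, and % replaces the ORM-style * wildcard:

      SELECT c.* FROM Claim c WHERE c.driverName LIKE 'blah%'
      UNION
      SELECT c.* FROM Claim c WHERE c.renterName LIKE 'blah%'
      UNION
      SELECT c.* FROM Claim c
      WHERE EXISTS (SELECT 1 FROM ThirdParty tp
                    WHERE tp.name LIKE 'blah%' AND tp.claim_id = c.id)
      ORDER BY id ASC;

    UNION deduplicates the branches, matching the OR semantics; comparing the actual execution plans of both forms on the 25k database should show where the original plan goes wrong.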

    Read the article

  • MYSQL OR vs IN performance

    - by Scott
    I am wondering if there is any difference with regard to performance between the following:

      SELECT ... FROM ... WHERE someFIELD IN (1,2,3,4)
      SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
      SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...

    or will MySQL optimize the SQL in the same way compilers optimize code?

    EDIT: Changed the ANDs to ORs for the reason stated in the comments.
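
    The question is answerable per-query with EXPLAIN; a comparison sketch (t is a hypothetical table name filled in only so the statements parse, and an index on someFIELD is assumed):

      EXPLAIN SELECT * FROM t WHERE someFIELD IN (1,2,3,4);
      EXPLAIN SELECT * FROM t WHERE someFIELD BETWEEN 0 AND 5;
      EXPLAIN SELECT * FROM t
              WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 OR someFIELD = 4;

      -- With an index on someFIELD, all three typically report type = 'range':
      -- MySQL rewrites both the IN list and the single-column OR chain into
      -- the same kind of range access, so the practical difference is usually
      -- negligible.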

    Read the article

  • Dynamic Windows Forms Components (Performance problem)

    - by Svisstack
    I have a performance problem with my code under Windows Forms. I have a form whose layout depends on constructor data, so the layout must be generated in the constructor or in OnLoad. The generation is simple: a base FlowLayoutPanel contains other FlowLayoutPanels, each of which holds a Label and a TextBox with data binding. The problem is that this is VERY SLOW, up to 20 seconds, and I am drawing fewer than 100 controls. From a Performance Session I know that about 70% of the time is spent in these functions:

      System.Windows.Forms.Control.ControlCollection.Add(class System.Windows.Forms.Control)
      System.Windows.Forms.ControlBindingsCollection.Add(class System.Windows.Forms.Binding)

    What can I do about this? Can anyone help me with this problem? How do I solve the dynamic form layout problem?

    Read the article

  • INNER JOIN vs LEFT JOIN performance in SQL Server

    - by Ekkapop
    I've created a SQL command that uses INNER JOIN across 9 tables; this command takes a very long time (more than five minutes). So a colleague suggested I change INNER JOIN to LEFT JOIN, because LEFT JOIN supposedly performs better, despite what I know about the two. After I changed it, the speed of the query improved significantly. I want to know why LEFT JOIN is faster than INNER JOIN here. My SQL command looks like this:

      SELECT * FROM A
      INNER JOIN B ON ...
      INNER JOIN C ON ...
      INNER JOIN D ON ...
      and so on
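
    A way to put numbers on the comparison (a measurement sketch for SQL Server; the two-table join with B.a_id = A.id is a hypothetical stand-in for the real 9-table query, whose ON conditions aren't shown):

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;

      -- Compare logical reads and elapsed times in the Messages pane, and the
      -- actual execution plans. A LEFT JOIN that runs faster usually means the
      -- optimizer chose a different join order, which points at stale
      -- statistics or missing indexes rather than LEFT JOIN being inherently
      -- faster.
      SELECT * FROM A INNER JOIN B ON B.a_id = A.id;
      SELECT * FROM A LEFT  JOIN B ON B.a_id = A.id;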

    Read the article

  • Calling sp and Performance strategy.

    - by Costa
    Hi, I find myself in a situation where I have to choose between either creating a new stored procedure in the database and writing the middle-layer code for it, thereby losing some precious development time (the procedure is also likely to contain some joins), or using two existing stored procedures. The problem with the second approach is that I am making two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks.
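
    If the two existing procedures are kept, a thin wrapper procedure can at least collapse the two round trips into one (a sketch with entirely hypothetical names, since the question doesn't name the procedures):

      -- One network round trip; the client then reads two result sets
      -- from the single call.
      CREATE PROCEDURE dbo.GetCombinedData
          @Id int
      AS
      BEGIN
          SET NOCOUNT ON;
          EXEC dbo.GetPartOne @Id;   -- existing procedure #1 (hypothetical name)
          EXEC dbo.GetPartTwo @Id;   -- existing procedure #2 (hypothetical name)
      END;

    This keeps the middle-layer code for the two procedures unchanged while removing the extra server hop.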

    Read the article

  • IIS 7 Website Migration & Configuration

    - by Adam
    Hi - I am in the process of migrating an existing web server running IIS 6 to IIS 7. I have set up the new websites on the new server but can't seem to test them: once I have entered the domain name and select "Browse" from within IIS 7, I get the site on my original server. How can I test the configuration of my new sites on my new server before migrating the domain names (e.g. updating the DNS records)? Any help much appreciated.

    Read the article

  • Dotnet website - class in one file can't access class in a different file

    - by bmutch
    I've inherited a web site I'm editing in .NET, and it won't compile because a class in one file (say class1.vb) refers to a class in another file (say class2.vb), like Dim m_c As Class2, but the compiler says "Type Class2 is not defined". When I look in the Object Browser the classes are listed separately (i.e. not all grouped under the same namespace) and appear as: Public Class Class1, Inherits System.Object, Member of C:...\mywebsite\. Help! Thanks.

    Read the article

  • Multiple xmlns attributes affect page performance?

    - by Geuis
    We are working on adding some Facebook Connect functionality to our site. Part of Facebook's requirements for FB Connect involves adding several additional xmlns attributes to the html element. We are likely to have 5 or 6 of their custom attributes by the time we're done, and I want to know if this will negatively affect our page performance, i.e. will these be additional resources that the browser has to download? I have checked in Firebug and I don't see additional requests, but I don't know if that is because requests are not made by the browser, or because Firebug simply doesn't track them.

    Read the article

  • Performance of delegate and method group

    - by BlueFox
    Hi, I was investigating the performance hit of creating CacheDependency objects, so I wrote a very simple test program as follows:

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;
      using System.Web.Caching;

      namespace Test
      {
          internal class Program
          {
              private static readonly string[] keys = new[] { "Abc" };
              private static readonly int MaxIteration = 10000000;

              private static void Main(string[] args)
              {
                  Debug.Print("first set");
                  test7(); test6(); test5(); test4(); test3(); test2();
                  Debug.Print("second set");
                  test2(); test3(); test4(); test5(); test6(); test7();
              }

              private static void test2()
              {
                  DateTime start = DateTime.Now;
                  var list = new List<CacheDependency>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(new CacheDependency(null, keys));
                  Debug.Print("test2 Time: " + (DateTime.Now - start));
              }

              private static void test3()
              {
                  DateTime start = DateTime.Now;
                  var list = new List<Func<CacheDependency>>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(() => new CacheDependency(null, keys));
                  Debug.Print("test3 Time: " + (DateTime.Now - start));
              }

              private static void test4()
              {
                  var p = new Program();
                  DateTime start = DateTime.Now;
                  var list = new List<Func<CacheDependency>>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(p.GetDep);
                  Debug.Print("test4 Time: " + (DateTime.Now - start));
              }

              private static void test5()
              {
                  var p = new Program();
                  DateTime start = DateTime.Now;
                  var list = new List<Func<CacheDependency>>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(() => { return p.GetDep(); });
                  Debug.Print("test5 Time: " + (DateTime.Now - start));
              }

              private static void test6()
              {
                  DateTime start = DateTime.Now;
                  var list = new List<Func<CacheDependency>>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(GetDepSatic);
                  Debug.Print("test6 Time: " + (DateTime.Now - start));
              }

              private static void test7()
              {
                  DateTime start = DateTime.Now;
                  var list = new List<Func<CacheDependency>>();
                  for (int i = 0; i < MaxIteration; i++)
                      list.Add(() => { return GetDepSatic(); });
                  Debug.Print("test7 Time: " + (DateTime.Now - start));
              }

              private CacheDependency GetDep()
              {
                  return new CacheDependency(null, keys);
              }

              private static CacheDependency GetDepSatic()
              {
                  return new CacheDependency(null, keys);
              }
          }
      }

    But I can't understand why the results look like this:

      first set
      test7 Time: 00:00:00.4840277
      test6 Time: 00:00:02.2041261
      test5 Time: 00:00:00.1910109
      test4 Time: 00:00:03.1401796
      test3 Time: 00:00:00.1820105
      test2 Time: 00:00:08.5394884
      second set
      test2 Time: 00:00:07.7324423
      test3 Time: 00:00:00.1830105
      test4 Time: 00:00:02.3561347
      test5 Time: 00:00:00.1750100
      test6 Time: 00:00:03.2941884
      test7 Time: 00:00:00.1850106

    In particular: 1. Why are test4 and test6 much slower than their delegate versions? I also noticed that ReSharper specifically comments on the delegate versions, suggesting test5 and test7 be changed to "Convert to method group" - which would make them the same as test4 and test6, yet those are actually slower. 2. I don't see a consistent performance difference between test4 and test6; shouldn't static calls always be faster?

    Read the article

  • Performance optimization strategies of last resort?

    - by jerryjvl
    There are plenty of performance questions on this site already, but it occurs to me that almost all are very problem-specific and fairly narrow. And almost all repeat the advice to avoid premature optimization. Let's assume: the code already is working correctly the algorithms chosen are already optimal for the circumstances of the problem the code has been measured, and the offending routines have been isolated all attempts to optimize will also be measured to ensure they do not make matters worse What I am looking for here is strategies and tricks to squeeze out up to the last few percent in a critical algorithm when there is nothing else left to do but whatever it takes. Ideally, try to make answers language agnostic, and indicate any down-sides to the suggested strategies where applicable. I'll add a reply with my own initial suggestions, and look forward to whatever else the SO community can think of.

    Read the article

  • Testing perceived performance

    - by Josh Kelley
    I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing. Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...). I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.

    Read the article

  • Making DiveIntoPython3 work in IE8 (fixing a Javascript performance issue)

    - by srid
    I am trying to fix the performance problem with Dive Into Python 3 on IE8. Visit this page in IE8 and, after a few moments, you will see the following popup: I traced the culprit down to this line in j/dip3.js:

      ... find("tr:nth-child(" + (i+1) + ") td:nth-child(2)");

    If I disable it (and return from the function immediately), the "Stop executing this script?" dialog does not appear, as the page now loads fairly fast. I am no JavaScript/jQuery expert, so I ask you fellow developers: why is this query making IE slow? Is there a fix for it? Edit: you can download the entire webpage (980K) for local viewing/editing.

    Read the article

  • How to Prove that using subselect queries in SQL is killing performance of server

    - by adopilot
    One of my jobs is to maintain our database. We usually have trouble with lack of performance when generating reports and working with that database. When I started looking at the queries our ERP sends to the database, I saw a lot of totally needless subselect queries inside the main queries. As I am not a member of the team that develops the program we use, they don't like it much when I criticize their code and work; let's say they don't take my reviews as serious statements. So I am asking you a few questions about subselects in SQL: 1. Does a subselect take a lot more time than a left outer join? 2. Is there any blog, article or anything else where subselects are recommended against? 3. How can I prove that if we avoid subselects in a query, the query is going to be faster? Our database server is MSSQL 2005.
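
    For the proof itself, SQL Server 2005 can print comparable I/O and timing figures for a subselect and its join rewrite side by side (a sketch with hypothetical Orders/Customers tables, since the ERP's schema isn't shown):

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;

      -- Correlated subselect version (hypothetical tables):
      SELECT o.OrderId,
             (SELECT c.Name FROM Customers c
              WHERE c.CustomerId = o.CustomerId) AS CustomerName
      FROM Orders o;

      -- Join rewrite of the same query:
      SELECT o.OrderId, c.Name AS CustomerName
      FROM Orders o
      LEFT JOIN Customers c ON c.CustomerId = o.CustomerId;

      -- Compare 'logical reads' and elapsed times in the Messages pane, plus
      -- the actual execution plans. Identical plans mean the optimizer already
      -- flattened the subselect, so for that query there is nothing to prove.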

    Read the article

  • md5hash performance with big files for check copy files in shared folder

    - by alhambraeidos
    Hi all, my Windows Forms .NET app on Win XP copies PDF files to a shared network folder on a Win 2003 server. The admin user on Win 2003 detects some corrupt PDF files in that shared folder. I want to check whether a file has been copied correctly to the shared folder. Andre Krijen tells me the best way is to create an MD5 hash of the original file, and when the file is copied, verify the MD5 hash of the copy against the original. I have big PDF files; are there any performance problems with applying an MD5 hash to a big file? What if I only check (without generating an MD5 hash) the length of the files (original and copied)? Thanks in advance.

    Read the article

  • Timestamp as int field, query performance

    - by Kirzilla
    Hello, I'm storing a timestamp in an int field, and on a large table it takes too long to get the rows inserted on a given date, because I'm using the MySQL function FROM_UNIXTIME:

      SELECT * FROM table
      WHERE FROM_UNIXTIME(timestamp_field, '%Y-%m-%d') = '2010-04-04'

    Are there any ways to speed up this query? Maybe I should query for rows using timestamp_field >= x AND timestamp_field < y? Thank you. I've just tried this query...

      SELECT * FROM table
      WHERE timestamp_field >= UNIX_TIMESTAMP('2010-04-14 00:00:00')
        AND timestamp_field <= UNIX_TIMESTAMP('2010-04-14 23:59:59')

    ...but there is no performance bonus. :(
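
    The range form only pays off if the column is indexed, which the question doesn't mention; a sketch assuming timestamp_field currently has no index:

      ALTER TABLE `table` ADD INDEX idx_timestamp_field (timestamp_field);

      -- Re-check the plan: with the index, the bare range predicate can use
      -- it, while the FROM_UNIXTIME(...) version never can, because wrapping
      -- the column in a function hides it from the index.
      EXPLAIN SELECT * FROM `table`
      WHERE timestamp_field >= UNIX_TIMESTAMP('2010-04-14 00:00:00')
        AND timestamp_field <  UNIX_TIMESTAMP('2010-04-15 00:00:00');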

    Read the article
