Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not? I have a solution with lots of projects, every project has lots of markup/code, lots of types, etc. Just sitting there with Intellisense on runs it up to about 700MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't be building some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD, and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?
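
    One point of reference on the tooling side: MSBuild 3.5, which ships with VS 2008, added a /maxcpucount (/m) switch that builds independent projects in parallel, while the MSBuild 2.0 that VS 2005 uses has no equivalent, which would match the single-core behaviour described. A minimal command-line sketch (the solution name is a placeholder):

      rem Hedged sketch: build up to 2 projects concurrently (MSBuild 3.5+).
      msbuild MySolution.sln /m:2 /p:Configuration=Release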

  • HTML Chrome Audit Specify Image Dimensions

    - by AKRamkumar
    I just started using the Chrome developer tools for some basic HTML websites and I used the audit tool. I had two identical images, one with the height and width attributes, and one without. In the Resources section, both the latency and the download time were identical. However, the Audit showed: "Specify image dimensions (1): A width and height should be specified for all images in order to speed up page display." Does this actually help? And are there any other ways to speed up page load time? This is only a splash page for the website I am building and as such it is only HTML, no CSS or JavaScript or anything. I have already compressed the images but I want to speed up load time even more. Is there a way?
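
    For reference, that audit item is about layout rather than transfer speed: explicit dimensions let the browser reserve space for the image before it downloads, so the page does not reflow when it arrives. A minimal sketch (the file name and size are made up):

      <!-- The browser can lay out the page before logo.png arrives. -->
      <img src="logo.png" width="120" height="60" alt="Logo">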

  • Rationale behind Python's preferred for syntax

    - by susmits
    What is the rationale behind the advocated use of the for i in xrange(...)-style looping constructs in Python? For simple integer looping, the difference in overheads is substantial. I conducted a simple test using two pieces of code.

    File idiomatic.py:

      #!/usr/bin/env python

      M = 10000
      N = 10000

      if __name__ == "__main__":
          x, y = 0, 0
          for x in xrange(N):
              for y in xrange(M):
                  pass

    File cstyle.py:

      #!/usr/bin/env python

      M = 10000
      N = 10000

      if __name__ == "__main__":
          x, y = 0, 0
          while x < N:
              while y < M:
                  y += 1
              x += 1

    Profiling results were as follows:

      bash-3.1$ time python cstyle.py

      real    0m0.109s
      user    0m0.015s
      sys     0m0.000s

      bash-3.1$ time python idiomatic.py

      real    0m4.492s
      user    0m0.000s
      sys     0m0.031s

    I can understand why the Pythonic version is slower -- I imagine it has a lot to do with calling xrange N times; perhaps this could be eliminated if there were a way to rewind a generator. However, with this degree of difference in execution time, why would one prefer to use the Pythonic version?
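
    A note on reading the benchmark: cstyle.py never resets y inside the outer loop, so the inner while runs its full M iterations only on the first pass and is skipped on every later one (roughly 20,000 increments instead of 100,000,000). The timing gap above mostly measures that, not loop overhead. A corrected sketch of the C-style version:

      #!/usr/bin/env python

      M = 10000
      N = 10000

      if __name__ == "__main__":
          x = 0
          while x < N:
              y = 0            # reset so the inner loop really runs M times
              while y < M:
                  y += 1
              x += 1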

  • Simple Python Challenge: Fastest Bitwise XOR on Data Buffers

    - by user213060
    Challenge: perform a bitwise XOR on two equal-sized buffers. The buffers will be required to be the Python str type, since this is traditionally the type for data buffers in Python. Return the resultant value as a str. Do this as fast as possible. The inputs are two 1 megabyte (2**20 byte) strings. The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.

      from os import urandom
      from numpy import frombuffer, bitwise_xor, byte

      def slow_xor(aa, bb):
          a = frombuffer(aa, dtype=byte)
          b = frombuffer(bb, dtype=byte)
          c = bitwise_xor(a, b)
          r = c.tostring()
          return r

      aa = urandom(2**20)
      bb = urandom(2**20)

      def test_it():
          for x in xrange(1000):
              slow_xor(aa, bb)
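
    As a hedged point of comparison rather than a definitive answer: reinterpreting the same buffers with a wider dtype makes NumPy XOR eight bytes per element instead of one, which often helps. This assumes the buffer length is a multiple of 8, which holds for 2**20:

      from numpy import frombuffer, bitwise_xor, uint64

      def wide_xor(aa, bb):
          a = frombuffer(aa, dtype=uint64)   # same bytes, viewed 8 at a time
          b = frombuffer(bb, dtype=uint64)
          return bitwise_xor(a, b).tostring()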

  • How to efficiently show many Images? (iPhone programming)

    - by Thomas
    In my application I needed something like a particle system, so I did the following. While the application initializes I load a UIImage:

      laserImage = [UIImage imageNamed:@"laser.png"];

    UIImage *laserImage is declared in the interface of my controller. Now every time I need a new particle this code makes one:

      // add new laser image
      UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
      [newLaser setTag:[model.lasers count]-9];
      [newLaser setBounds:CGRectMake(0, 0, 17, 1)];
      [newLaser setOpaque:YES];
      [self.view addSubview:newLaser];
      [newLaser release];

    Please notice that the images are only 17px * 1px small, and model.lasers is an internal array that does all the calculating separated from the graphical output. So in my main drawing loop I set all the UIImageViews' positions to the calculated positions in my model.lasers array:

      for (int i = 0; i < [model.lasers count]; i++) {
          [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
      }

    I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag. The animation looks fine with about 10 - 20 images but really gets slow when working with about 60 images. So my question is: is there any way to optimize this without starting over in OpenGL ES? Thank you very much and sorry for my English! Greetings from Germany, Thomas
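
    One commonly suggested middle ground, sketched here without having profiled it on the device: bare CALayers carry less overhead than UIImageViews, and every layer can share the one CGImage (laserImage follows the question; the rest is assumption):

      #import <QuartzCore/QuartzCore.h>

      // Create once per particle; all layers share the same bitmap.
      CALayer *laserLayer = [CALayer layer];
      laserLayer.bounds = CGRectMake(0, 0, 17, 1);
      laserLayer.contents = (id)laserImage.CGImage;
      [self.view.layer addSublayer:laserLayer];

    Positions would then be set through each layer's position property, ideally inside a CATransaction with actions disabled so the moves are not implicitly animated.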

  • KD-Trees and missing values (vector comparison)

    - by labratmatt
    I have a system that stores vectors and allows a user to find the n most similar vectors to the user's query vector. That is, a user submits a vector (I call it a query vector) and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-Tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors (stored vectors are of 11 dimensions) I have stored. I have a couple of obvious solutions, but I'm not sure either one seems very good:

    - Create multiple KD-Trees, each built using the most popular subsets of dimensions a user will search for. That is, if a user submits a query vector of three dimensions, x, y, z, I match that query against an already built KD-Tree which only contains vectors of the three dimensions x, y, z.
    - Ignore KD-Trees when a user submits a query vector with missing values, and compare the query vector to the vectors (stored in a table in a DB) one by one using something like a dot product.

    This has to be a common problem, any suggestions? Thanks for the help.
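
    For scale, the second option is just a linear scan with a partial distance, as in this minimal sketch (names and shapes are assumptions: stored is an (n, 11) NumPy array, query maps dimension index to value):

      import numpy as np

      def nearest_partial(stored, query, n=6):
          # Compare only the dimensions the user actually submitted.
          dims = sorted(query)
          target = np.array([query[d] for d in dims])
          diffs = stored[:, dims] - target
          dist = (diffs ** 2).sum(axis=1)   # squared Euclidean distance
          return np.argsort(dist)[:n]       # row indices of the n closest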

  • XDocument holding onto Memory?

    - by Jon
    I have an application that does an XDocument.Load from a 20MB file, which is then passed to a form to view its contents:

      openFileDialog1.FileName = "";
      if (openFileDialog1.ShowDialog() == DialogResult.OK)
      {
          AuditFile = XDocument.Load(openFileDialog1.FileName);
          fmAuditLogViewer AuditViewer = new fmAuditLogViewer();
          AuditViewer.ReportDocument = AuditFile;
          AuditViewer.Init();
          AuditViewer.ShowDialog();
          AuditViewer.Dispose();
          AuditFile.RemoveNodes();
          AuditFile = null;
      }

    In Task Manager I can see the memory used by my application shoot up when I open this file. When I have finished viewing the file I call:

      myXDocument.RemoveNodes();
      myXDocument = null;

    However, the memory use shown in Task Manager stays pretty high for my app. Is the XDocument still being held in memory, and can I decrease the memory usage of my app?
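
    Two effects are worth separating here, offered as a hedged diagnostic rather than a fix: the CLR garbage collector reclaims an unreferenced XDocument lazily, and even after it does, the process may keep the freed pages in its working set, which is what Task Manager reports. A quick way to check whether the document is actually still reachable (diagnostic code only):

      AuditFile = null;
      GC.Collect();                       // force a full collection
      GC.WaitForPendingFinalizers();
      GC.Collect();
      Console.WriteLine("Managed heap in use: {0} bytes",
                        GC.GetTotalMemory(true));

    If the managed-heap figure drops by roughly the document's size, nothing is holding the XDocument, and the Task Manager number alone doesn't prove a leak.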

  • Multiple WCF calls for a single ASP.NET page load

    - by Rodney Burton
    I have an existing ASP.NET web application I am redesigning to use a service architecture. I have the beginnings of a WCF service which I am able to call and perform functions with, no problems. As far as updating data, it all makes sense. For example, I have a button that says Submit Order; it sends the data to the service, which does the processing.

    Here's my concern: I have an ASP.NET page that shows me a list of orders (View Orders page), and at the top I have a bunch of drop-down lists for order types, plus other search criteria, which are populated by querying different tables from the database (lookup tables, etc.). I am hoping to eventually completely decouple the web application from the DB, and use data contracts to pass information between the BLL, the SOA, and the web app.

    With that said, how can I reduce the number of WCF calls needed to load my View Orders page? I would need to make 1 call to get the list of orders, and 1 call for each drop-down list, etc., because those are populated by individual functions in my BLL. Is it good architecture to create a web service method that returns a specialized data contract consisting of everything you would need to display a View Orders page, in one shot? Something like this pseudocode:

      public class ViewOrderPageDTO
      {
          public OrderDTO[] Orders { get; set; }
          public OrderTypesDTO[] OrderTypes { get; set; }
          public OrderStatusesDTO[] OrderStatuses { get; set; }
          public CustomerListDTO[] CustomerList { get; set; }
      }

    Or is it better practice in the page_load event to make 5 or 6 or even 15 individual calls to the SOA to get the data needed to load the page, thereby bypassing the need for specialized WCF methods or DTOs that conglomerate other DTOs? Thanks for your input and suggestions.
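
    For what the coarse-grained option could look like on the contract, a minimal sketch (the interface and operation names are invented; the attributes are standard WCF):

      [ServiceContract]
      public interface IOrderService
      {
          // One round trip returns everything the View Orders page needs.
          [OperationContract]
          ViewOrderPageDTO GetViewOrderPage();
      }

    The usual trade-off cited: fewer round trips (latency often dominates a chatty page load) against a contract that is coupled to one page's layout.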

  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to:

    - Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health.
    - Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques.
    - Identify the reasons for and elements of an asset management system.
    - Plan and follow regular hardware and software maintenance routines.

    Fundamentals of Network Management

    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation

    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:

    - Physical topology (types of LAN and WAN topologies – ring, star, hybrid)
    - Access method (does it use Ethernet 802.3, token ring, etc.)
    - Protocols
    - Devices (switches, routers, etc.)
    - Operating systems
    - Applications
    - Configurations (which version of operating system, and config files for server/client software)

    Baseline Measurements

    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations

    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management:

    - Media installations and management (includes designing the physical layout of cable, etc.)
    - Network addressing policies (includes choosing and applying an addressing scheme)
    - Resource sharing and naming conventions (includes rules for logon IDs)
    - Security-related policies
    - Troubleshooting procedures
    - Backup and disaster recovery procedures

    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management

    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software

    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. Various software packages do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).
    Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility, sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (i.e. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs

    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer; similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping

    When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:

    - Protocol
    - IP address
    - User group
    - DiffServ
    - VLAN tag in a Data Link layer frame
    - Service or application

    Caching

    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management

    Another key component in managing networks is identifying and tracking hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management

    Networks are always in a state of flux, with various aspects changing, including:

    - Software changes and patches
    - Client upgrades
    - Shared application upgrades
    - NOS upgrades
    - Hardware and physical plant changes
    - Cabling upgrades
    - Backbone upgrades

    For a detailed explanation of each of these, read the textbook (pages 750–761).
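
    To make the SNMP polling above concrete, a hedged example using the Net-SNMP command-line tools (the community string and address are placeholders):

      # Poll one managed device for its uptime over SNMP v2c (UDP port 161).
      snmpget -v 2c -c public 192.0.2.1 1.3.6.1.2.1.1.3.0   # sysUpTime.0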

  • data sync from rails with iphone sqlite db

    - by Markus
    Hi all, I'm wondering whether there is a possibility to export some selected data from my Rails MySQL DB to another SQLite DB. The aim is to send that SQLite file directly to my iPhone application... That way I don't have to do a lot of XML integration in the iPhone app, which seems to be very slow... Markus
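
    A minimal sketch of one way the export could work on the Rails side, assuming the sqlite3 gem and an invented Item model (table layout and file name are placeholders):

      require 'sqlite3'

      db = SQLite3::Database.new('export.sqlite')
      db.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)')
      Item.all.each do |item|   # or a scoped finder for just the selected rows
        db.execute('INSERT INTO items (id, name) VALUES (?, ?)', [item.id, item.name])
      end

    The resulting export.sqlite file could then be served over HTTP and opened directly by the app's SQLite layer.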

  • Writing a DBMS in Python

    - by Matt Luongo
    Hey guys, I'm working on a basic DBMS as a pet project and planning to prototype in Python. I figure there's a reason there are only a few Python databases, and my gut agrees that my favorite language will be too slow to act as an honestly performing database, but I'm looking forward to using it to learn what I need quickly. Would someone please contradict me? Is Python as ill-suited right now for this sort of thing as I think?

  • SQL Overlapping and Multi-Column Indexes

    - by durilai
    I am attempting to tune some stored procedures and have a question on indexes. I have used the tuning advisor and it recommended two indexes, both for the same table. The issue is that one index is on a single column and the other is on multiple columns, which include the same column as the first. My question is why, and what is the difference?

      CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23_K17_K13_K12_K2_K10_K22_K14_K19_K20_K9_K11_5_6_7_15_18]
      ON [dbo].[Table1]
      (
          [EfctvEndDate] ASC,
          [StuLangCodeKey] ASC,
          [StuBirCntryCodeKey] ASC,
          [StuBirStOrProvncCodeKey] ASC,
          [StuKey] ASC,
          [GndrCodeKey] ASC,
          [EfctvStartDate] ASC,
          [StuHspncEnctyIndctr] ASC,
          [StuEnctyMsngIndctr] ASC,
          [StuRaceMsngIndctr] ASC,
          [StuBirDate] ASC,
          [StuBirCityName] ASC
      )
      INCLUDE
      (
          [StuFstNameLgl],
          [StuLastOrSrnmLgl],
          [StuMdlNameLgl],
          [StuIneligSnorImgrntIndctr],
          [StuExpctdGrdtngClYear]
      )
      WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
      GO

      CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23]
      ON [dbo].[Table1]
      (
          [EfctvEndDate] ASC
      )
      WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
      GO

  • How can I get page fault statistics from the kernel

    - by osgx
    Hello. How can I get page fault statistics from the kernel for my application while it is running? What about other events, like inter-CPU migration counts on SMP nodes, or the number of context switches? I want to count such events for various small parts of the program. Thanks.
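
    One readily available source on Linux, as a minimal sketch: getrusage() exposes per-process fault and context-switch counters (CPU migrations are not in struct rusage; hardware performance counters would be needed for those). Sampling it before and after a code region gives per-region deltas:

      /* Per-process fault and context-switch counters on Linux. */
      #include <stdio.h>
      #include <sys/resource.h>

      int main(void)
      {
          struct rusage ru;
          if (getrusage(RUSAGE_SELF, &ru) == 0) {
              printf("minor faults: %ld, major faults: %ld\n",
                     ru.ru_minflt, ru.ru_majflt);
              printf("context switches: %ld voluntary, %ld involuntary\n",
                     ru.ru_nvcsw, ru.ru_nivcsw);
          }
          return 0;
      }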

  • How to speed up the reading of innerHTML in IE8?

    - by Dennis Cheung
    I am using jQuery with the DataTable plugin, and now I have a big performance issue on the following line:

      aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220

    I have an ajax call, and the result is a string in HTML format. I convert it into HTML nodes, and that part is OK:

      var $result = $('<div/>').html(result).find("*:first");
      // similar to $result = $(result), but much faster in Fx

    Then I enable the result, turning a plain table into a sortable DataTable. The speed is acceptable in Fx (around 4 sec for 900 rows), but unacceptable in IE8 (more than 100 seconds). I checked it deeply using the built-in profiler, and found the above single line takes 99.9% of the time. How can I speed it up? Anything I missed?

      nTrs = oSettings.nTable.getElementsByTagName('tbody')[0].childNodes;
      for ( i=0, iLen=nTrs.length ; i<iLen ; i++ )
      {
          if ( nTrs[i].nodeName == "TR" )
          {
              iThisIndex = oSettings.aoData.length;
              oSettings.aoData.push( {
                  "nTr": nTrs[i],
                  "_iId": oSettings.iNextId++,
                  "_aData": [],
                  "_anHidden": [],
                  "_sRowStripe": ''
              } );
              oSettings.aiDisplayMaster.push( iThisIndex );
              aLocalData = oSettings.aoData[iThisIndex]._aData;
              nTds = nTrs[i].childNodes;
              jInner = 0;
              for ( j=0, jLen=nTds.length ; j<jLen ; j++ )
              {
                  if ( nTds[j].nodeName == "TD" )
                  {
                      aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220
                      jInner++;
                  }
              }
          }
      }

  • Same query, different execution plans

    - by A..
    Hi, I am trying to find a solution for a problem that is driving me mad... I have a query which runs very fast on a QA server but is very slow in production. I realised that they have different execution plans... so I have tried recompiling, cleaning the plan cache, updating statistics, and checking the collation types... but I still can't find what's going on... The databases the query runs on are exactly the same, and the SQL Servers also have the same configuration. Any new ideas would be much appreciated. Thanks, A.
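
    For reference, the usual T-SQL for the steps described, as a hedged checklist (the object names are placeholders; note that DBCC FREEPROCCACHE clears every cached plan on the server, so use it carefully in production):

      UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;   -- refresh optimizer statistics
      DBCC FREEPROCCACHE;                            -- flush all cached plans
      EXEC sp_recompile 'dbo.MyProc';                -- recompile one object's plans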

  • Are Hibernate named HQL queries (in annotations) optimised?

    - by Graham Lea
    A new colleague has just suggested using named HQL queries in Hibernate with annotations (i.e. @NamedQuery) instead of embedding HQL in our XxxxRepository classes. What I'd like to know is whether using the annotation provides any advantage apart from centralising queries. In particular, is there some performance gain, for instance because the query is only parsed once when the class is loaded rather than every time the repository method is executed?
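
    For reference, the form under discussion, in a minimal sketch (the entity, field, and query names are invented). One commonly cited benefit is that Hibernate parses and validates named queries when the SessionFactory is built, so a broken query fails at startup rather than on first use; any runtime gain is usually small, since Hibernate keeps a query plan cache for inline HQL as well:

      import javax.persistence.Entity;
      import javax.persistence.Id;
      import javax.persistence.NamedQuery;

      @Entity
      @NamedQuery(name = "User.findByStatus",
                  query = "from User u where u.status = :status")
      public class User {
          @Id
          private Long id;
          private String status;
      }

      // Usage: session.getNamedQuery("User.findByStatus")
      //               .setParameter("status", "ACTIVE").list();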

  • Oracle EXECUTE IMMEDIATE changes explain plan of query.

    - by Gunny
    I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue that I am facing is that the explain plan is different when I call the procedure directly vs when I use EXECUTE IMMEDIATE to call the procedure. This is causing the execution time to increase 5x. The main difference between the plans is that when I use EXECUTE IMMEDIATE the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using the rule-based optimizer here at work.

    Example. Fast:

      begin
          package.procedure;
      end;
      /

    Slow:

      begin
          execute immediate 'begin package.' || proc_name || '; end;';
      end;
      /

  • Combine static files or load in parallel

    - by Niall Collins
    I am at present introducing code to my site to combine CSS and JavaScript files. Is there a way, without having to include an external library, to load JavaScript asynchronously or in parallel? I have read on some blogs that combining files can be counterproductive, since one large HTTP response may take longer than several smaller files downloaded in parallel. Opinions on this? I am caching my JavaScript/CSS, and would have thought it was better to combine rather than have multiple HTTP requests.
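
    For the no-library part of the question, a minimal sketch of non-blocking script loading with plain DOM calls (the path is a placeholder); a script element inserted this way downloads without blocking parsing and runs when it arrives:

      var s = document.createElement('script');
      s.src = '/js/combined.js';   // placeholder path
      s.async = true;              // explicit, though injected scripts load asynchronously anyway
      document.getElementsByTagName('head')[0].appendChild(s);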

  • How to formulate a SQL Server indexed view that aggregates distinct values?

    - by Jeremy Lew
    I have a schema that includes tables like the following (pseudo schema):

      TABLE ItemCollection
      {
          ItemCollectionId,
          ...etc...
      }

      TABLE Item
      {
          ItemId,
          ItemCollectionId,
          ContributorId
      }

    I need to aggregate the number of distinct contributors per ItemCollectionId. This is possible with a query like:

      SELECT ItemCollectionId, COUNT(DISTINCT ContributorId)
      FROM Item
      GROUP BY ItemCollectionId

    I further want to pre-calculate this aggregation using an indexed (materialized) view. The DISTINCT prevents an index being placed on this view. Is there any way to reformulate this that will not violate SQL Server's indexed view constraints?
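
    A hedged sketch of the reformulation usually suggested for this constraint: index a view grouped by both columns, which needs no DISTINCT (COUNT_BIG(*) is required by the indexed-view rules), then count its rows per ItemCollectionId at query time. The view and index names here are invented:

      CREATE VIEW dbo.ItemContributorPairs WITH SCHEMABINDING AS
          SELECT ItemCollectionId, ContributorId, COUNT_BIG(*) AS PairCount
          FROM dbo.Item
          GROUP BY ItemCollectionId, ContributorId;
      GO
      CREATE UNIQUE CLUSTERED INDEX IX_ItemContributorPairs
          ON dbo.ItemContributorPairs (ItemCollectionId, ContributorId);
      GO
      -- Distinct contributors per collection = rows per collection in the view:
      SELECT ItemCollectionId, COUNT_BIG(*) AS DistinctContributors
      FROM dbo.ItemContributorPairs WITH (NOEXPAND)
      GROUP BY ItemCollectionId;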

  • Server Benchmarking: What tools to use with my real-world test data

    - by mdemmitt
    I want to benchmark a new server using historical HTTP request data. I have a text file that contains one day's worth of real historical requests to a production server. What is the best tool for replaying that list of requests against the server I'm testing? The tool should let me configure the following:

    - Number of threads making the requests
    - Number of requests/second sent
    - A list of request URLs to use when making the requests

    Apache Bench seems like a close fit. However, ab does not seem to be able to take a list of request URLs as a parameter. What would you recommend?
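
    One tool that matches the URL-file requirement, offered as a hedged example rather than a recommendation: siege reads a plain-text list of URLs and lets you set concurrency and duration (the file name is a placeholder; siege controls rate only indirectly via a per-user delay, so httperf's --rate option or JMeter may fit better if an exact requests/second figure matters):

      # 25 concurrent users replaying urls.txt for one minute
      siege -c 25 -t 1M -f urls.txt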

  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image - obtain a metric to use for similarity - check against use cases - return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, so this language is available.

    Specific constraints:

    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32-bit ARGB
    - ~2 seconds computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with the training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me.

    My ideas:

    - I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function.
    - I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB).

    As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
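
    As one concrete starting point for the histogram idea, a minimal C sketch of a 64-bin greyscale histogram over a 32-bit ARGB buffer. A single pass like this is O(n) in the pixel count, and the 64 ints are tiny next to the 0.5 MB bitmap (the function name and bin count are assumptions):

      #include <stdint.h>

      /* pixels: width*height ARGB values; hist: 64 bins, zeroed by the caller. */
      void grey_histogram(const uint32_t *pixels, int count, int hist[64])
      {
          for (int i = 0; i < count; i++) {
              uint32_t p = pixels[i];
              uint32_t r = (p >> 16) & 0xFF;
              uint32_t g = (p >> 8) & 0xFF;
              uint32_t b = p & 0xFF;
              uint32_t grey = (r * 77 + g * 151 + b * 28) >> 8; /* integer luma */
              hist[grey >> 2]++;   /* fold 256 grey levels into 64 bins */
          }
      }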
