Search Results

Search found 78653 results on 3147 pages for 'performance object name s'.


  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk.

    Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this either with HTTP PUT operations via HttpWebRequest/HttpListener, or with plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server run on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s with minimal CPU usage. On the same box, the HTTP version runs between 130MB/s and 200MB/s, depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get http.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?
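
    For reference, a minimal sketch of the kind of client-side harness described above, using synchronous writes rather than BeginWrite/EndWrite to keep it short. The endpoint URL and block count are placeholders, not details from the original setup:

        // Hypothetical client: PUT a stream of 128K NULL-byte blocks over http.sys.
        using System;
        using System.IO;
        using System.Net;

        class UploadHarness
        {
            static void Main()
            {
                ServicePointManager.Expect100Continue = false;   // skip the 100-Continue round trip

                var request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/upload");
                request.Method = "PUT";
                request.SendChunked = true;                      // stream without a known Content-Length
                request.AllowWriteStreamBuffering = false;       // don't buffer the whole body in memory

                byte[] block = new byte[128 * 1024];             // 128K of NULL bytes
                using (Stream body = request.GetRequestStream())
                {
                    for (int i = 0; i < 8192; i++)               // ~1GB total, made-up count
                        body.Write(block, 0, block.Length);
                }

                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine(response.StatusCode);
            }
        }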

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when rebuilding the index

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored.

    We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process, on PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage.

    We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information. Table and index:

        CREATE TABLE icl_contacts
        (
          id bigint NOT NULL,
          campaignfqname character varying(255) NOT NULL,
          currentstate character(16) NOT NULL,
          xmlscheduledtime character(23) NOT NULL,
          ... 25 or so other fields, most of them fixed or varying character fields ...
          CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx
          ON icl_contacts
          USING btree (xmlscheduledtime, currentstate, campaignfqname);

    EXPLAIN ANALYZE output:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
               Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is on understanding how PostgreSQL manages the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns.

    From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query.

    (Note: by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100-million-row table. I mean more like 90 million vs 95 million.)

    Is my understanding correct? Related question: how does cardinality affect write performance?

    Read the article

  • WPF: Tips for better performance

    - by viky
    I am working on a WPF application with a TreeView in which each node represents a different datatype. These datatypes have properties defined and use data templates to show their properties. My application reads from XML and creates the tree accordingly. My problem is that when I load it, it is too slow. I want to know about the tricks that will help me improve the performance of my (or any) WPF application.

    Edit: Please provide me some tips for better performance in WPF! I am using the WPF profiler, but it is not much help to me.
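
    One frequently suggested starting point, sketched below as a general WPF tip rather than a fix verified against this particular app, is to enable UI virtualization on the TreeView so off-screen nodes are never realized:

        // General WPF tip, sketched: virtualize the TreeView's item containers.
        using System.Windows.Controls;

        static class TreeViewTuning
        {
            public static void EnableVirtualization(TreeView tree)
            {
                // Only create containers for items that are actually visible...
                VirtualizingStackPanel.SetIsVirtualizing(tree, true);
                // ...and recycle them while scrolling instead of recreating them.
                VirtualizingStackPanel.SetVirtualizationMode(tree, VirtualizationMode.Recycling);
            }
        }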

    Read the article

  • accessing embedded object with jQuery not working in Firefox 3.6

    - by taber
    Hi, this code in my plugin used to work just fine:

        jQuery('#embedded_obj', context).get(0).getVersion();

    and in the HTML...

        <object id="embedded_obj" type="application/x-versionchecker-1.0.0.1"></object>

    Basically I'm trying to get the properties from an embedded object, but it looks like get(0) is returning an HTML object instead of the actual embedded object. For example, if I do:

        var launcher = jQuery('#embedded_obj', context).get(0);
        for(prop in launcher){
            alert(prop + ': ' + launcher[prop]);
        }

    ...it alerts things like "getElementByNode", "scrollWidth", "clientLeft", "clientTop", etc. Again, this worked before Firefox 3.6. Has anyone else seen this, or have any ideas/suggestions? Thanks!

    Read the article

  • differences between using wmode="transparent", "opaque", "window" for an embedded object on a webpage

    - by Jian Lin
    When embedding a Flash object with the <object> and <embed> tags, there is an attribute called "wmode".

    It seems that most of the time, wmode="transparent" looks the same as wmode="opaque", as the Flash doesn't actually have any transparent region through which the HTML element underneath would show. As a result, "opaque" should be faster than "transparent", since it requires less processing for transparency, yet most of the time I see Flash objects embedded with "transparent" instead of "opaque". "opaque" is still needed so that other HTML elements won't be covered up by the Flash object (such as a menu item that pops up an extra sub-menu).

    By the way, is there formal documentation for wmode's "opaque", "transparent", and "window"? I was only able to find blogs that describe them, but not the formal documentation. Thanks.

    Read the article

  • Performance Improvement: Alternative to the array_flip function

    - by Rachel
    Is there any way I can avoid using array_flip, to improve performance?

    I am doing a SELECT against the database, preparing the query, executing it, and storing the data as an associative array in $resultCollection. Then, for each element in $resultCollection, I store its outputId in the $op array, as evident from the code. At that point $op has keys 0, 1, 2... and the ids as values (which I am interested in), so I flip $op to get the ids as keys:

        $resultCollection = $statement->fetchAll(PDO::FETCH_ASSOC);

        $op = array();
        // Loop through the result collection, storing each outputId in the op array.
        foreach ($resultCollection as $output) {
            $op[] = $output['outputId'];
        }

        // Flip the op array so the ids become the keys.
        $op = array_flip($op);

        foreach ($ft as $Id => $Off) {
            $ft[$Id]['is_set'] = isset($op[$Id]);
        }

    So my question is: how can I achieve the same result without using array_flip?

    Read the article

  • Performance logging tips

    - by Germstorm
    I am developing a large data-collecting ASP.NET/Windows-service application pair that uses Microsoft SQL Server 2005 through LINQ2Sql. Performance is always the issue.

    Currently the application is divided into multiple larger processing parts, each logging the duration of its work. This is not detailed and does not help us with anything. It would be nice to have some database tables that contain statistics that the application itself collected about its own behavior.

    What logging tips and data structures do you recommend for spotting the parts that cause performance problems?
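
    As a sketch of the kind of self-collected statistics table this could feed — the table layout and helper below are assumptions for illustration, not something from the question:

        // Assumes a table such as:
        //   CREATE TABLE PerfLog (LoggedAt datetime, Operation varchar(100),
        //                         DurationMs int, ItemsProcessed int)
        // All names here are hypothetical.
        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        static class PerfLog
        {
            public static void Record(string connectionString, string operation,
                                      Stopwatch watch, int itemsProcessed)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO PerfLog (LoggedAt, Operation, DurationMs, ItemsProcessed) " +
                    "VALUES (@at, @op, @ms, @items)", conn))
                {
                    cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
                    cmd.Parameters.AddWithValue("@op", operation);
                    cmd.Parameters.AddWithValue("@ms", (int)watch.ElapsedMilliseconds);
                    cmd.Parameters.AddWithValue("@items", itemsProcessed);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Recording items processed alongside duration lets a later query spot parts whose per-item cost grows over time, rather than parts that are merely busy.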

    Read the article

  • Declarative JDOQL vs Single-String JDOQL: performance

    - by DrDro
    When querying with JDOQL, is there a performance difference between using the declarative version and the single-string version? Example from the JDOQL documentation:

        // Declarative JDOQL:
        Query q = pm.newQuery(org.jpox.Person.class,
                              "lastName == \"Jones\" && age < age_limit");
        q.declareParameters("double age_limit");
        List results = (List)q.execute(20.0);

        // Single-string JDOQL:
        Query q = pm.newQuery("SELECT FROM org.jpox.Person WHERE lastName == \"Jones\"" +
                              " && age < :age_limit PARAMETERS double age_limit");
        List results = (List)q.execute(20.0);

    Other than performance, are there any reasons why one is better to use than the other, or is it just about which one we feel more comfortable with?

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.

    It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, HyperThreaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow, and I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggests that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

        3100M = 111th (E6510)
        FX 2700M = 47th (Precision M6400)
        Radeon HD 5870 = 8th (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.

    So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?

    Thanks in advance. Cheers, Dave

    Read the article

  • Performance concern when using LINQ "everywhere"?

    - by stiank81
    After upgrading to ReSharper 5, it gives me even more useful tips on code improvements. One I see everywhere now is a tip to replace foreach statements with LINQ queries. Take this example:

        private Ninja FindNinjaById(int ninjaId)
        {
            foreach (var ninja in Ninjas)
            {
                if (ninja.Id == ninjaId)
                    return ninja;
            }
            return null;
        }

    This is suggested replaced with the following using LINQ:

        private Ninja FindNinjaById(int ninjaId)
        {
            return Ninjas.FirstOrDefault(ninja => ninja.Id == ninjaId);
        }

    This looks all fine, and I'm sure it's no problem performance-wise to replace this one foreach. But is it something I should do in general? Or might I run into performance problems with all these LINQ queries everywhere?
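
    One way to settle the worry for a given case is to measure it. A rough sketch — single run, no warm-up, so only good for spotting order-of-magnitude differences:

        using System;
        using System.Diagnostics;
        using System.Linq;

        class LinqVsForeach
        {
            static void Main()
            {
                var ids = Enumerable.Range(0, 10000000).ToList();
                const int target = 9999999;              // worst case: last element

                var sw = Stopwatch.StartNew();
                int found = -1;
                foreach (var id in ids)
                {
                    if (id == target) { found = id; break; }
                }
                Console.WriteLine("foreach:        {0} ms ({1})", sw.ElapsedMilliseconds, found);

                sw = Stopwatch.StartNew();
                found = ids.FirstOrDefault(id => id == target);
                Console.WriteLine("FirstOrDefault: {0} ms ({1})", sw.ElapsedMilliseconds, found);
            }
        }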

    Read the article

  • MySQL convert rows to columns performance problem

    - by Tarski
    I am doing a query that converts rows to columns, similar to this post, but have encountered a performance problem. Here is the query:

        SELECT Info.Customer,
               Answers.Answer,
               Answers.AnswerDescription,
               Details.Code1,
               Details.Code2,
               Details.Code3
        FROM Info
        LEFT OUTER JOIN Answers
            ON Info.AnswerID = Answers.AnswerID
        LEFT OUTER JOIN
            (SELECT ReferenceNo,
                    MAX(CASE DetailsIndicator WHEN 'cde1' THEN DetailsCode ELSE NULL END) Code1,
                    MAX(CASE DetailsIndicator WHEN 'cde2' THEN DetailsCode ELSE NULL END) Code2,
                    MAX(CASE DetailsIndicator WHEN 'cde3' THEN DetailsCode ELSE NULL END) Code3
             FROM DetailsData
             GROUP BY ReferenceNo) Details
            ON Info.ReferenceNo = Details.ReferenceNo

    Fewer than 300 rows are returned, but the DetailsData table is about 180 thousand rows. The query takes 45 seconds to run and needs to take only a few seconds. When I type SHOW PROCESSLIST; into MySQL, it hangs on "Sending Data". Any thoughts as to what the performance problem might be?

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assume I do the following:

    1. Use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll".
    2. Create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll".

    Will there be any differences in the performance of Foo1.dll and Foo2.dll when using their functionality from an external project? If so, how significant is this performance difference, and is it a one-off (on load?) or an ongoing difference? Are there any other pros or cons with either approach?
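
    Short of an authoritative answer, the two builds can be compared empirically. A hedged sketch: the type and method names below are placeholders, and the reflection overhead is identical for both assemblies, so any relative difference still shows through:

        using System;
        using System.Diagnostics;
        using System.Reflection;

        class MergeBenchmark
        {
            static void Main()
            {
                // "Foo.Baz.Widget" and "DoWork" stand in for whatever the libraries expose.
                foreach (string path in new[] { "Foo1.dll", "Foo2.dll" })
                {
                    var sw = Stopwatch.StartNew();
                    Assembly asm = Assembly.LoadFrom(path);            // captures any load-time difference
                    object widget = asm.CreateInstance("Foo.Baz.Widget");
                    MethodInfo doWork = widget.GetType().GetMethod("DoWork");
                    for (int i = 0; i < 1000000; i++)
                        doWork.Invoke(widget, null);                   // captures any ongoing difference
                    sw.Stop();
                    Console.WriteLine("{0}: {1} ms", path, sw.ElapsedMilliseconds);
                }
            }
        }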

    Read the article

  • performance monitoring tools for multi-tenant web application

    - by Anton
    We need to monitor the performance of our Java web app and are looking for some tools which can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture, with hundreds of customers running on the same hardware. So far we have tried commercial products like DynaTrace and Coradinat, but unfortunately they haven't got the job done so far.

    What we need is a simple report which would tell us whether we had performance problems on each customer site in a specified period of time. Mostly it will be response time per customer, but we will also need some more specifics based on the URLs.

    Please let me know if you have had any experience with setting up such monitoring. Thanks!

    Read the article

  • slowly rotate an object towards another object

    - by numerical25
    I have an object that points in the direction of another object (i.e. it rotates towards the direction of the second object's x and y coordinates). Below is the code I use:

        var distx = target.x - x;
        var disty = target.y - y;

        var angle:Number = Math.atan2(disty, distx);

        var vx:Number = Math.cos(angle) * cspeed;
        var vy:Number = Math.sin(angle) * cspeed;

        rotation = angle * 180/Math.PI;

        x += vx;
        y += vy;

    As you can see, not only does it rotate towards the target object, but it also moves towards it. When I play the movie, the object instantly points at the targeted object and moves towards it. I would like it to slowly turn towards the target instead of instantly snapping to it. Does anyone know how to do this?

    Read the article

  • Dynamically determining table name given a field name in SQL Server

    - by Salman A
    Strange situation: I am trying to remove some hard-coding from my code. There is a situation where I have a field, let's say "CityID", and using this information, I want to find out which table contains a primary key called CityID. Logically, you'd say that it's probably a table called "City", but it's not... that table is called "Cities". There are some other inconsistencies in database naming, hence I can never be sure whether removing the string "ID" and finding out the plural will be sufficient.

    Note: Once I figure out that CityID refers to a table called Cities, I will perform a join to replace CityID with the city name on the fly.

    I will appreciate it if someone can also tell me how to find the first varchar field in a table given its name.

    Read the article

  • Apache modules: C module vs mod_wsgi python module - Performance

    - by Gopal
    Hi. A client of ours is asking us to implement a module in C in the Apache web server, for performance reasons. This module should handle RESTful URIs, access a database, and return results in JSON format. Many people here have recommended Python with mod_wsgi instead, but for simplicity-of-programming reasons.

    Can anyone tell me if there is a significant difference in performance between the mod_wsgi Python solution and the Apache + C module? Any anecdotes? Pointers to some study posted online?

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've seen System.Diagnostics.Stopwatch. However, I want to avoid writing measuring code into my classes; I would prefer to separate measuring from the actual code.

    For debugging, you can set breakpoints on several code lines and "jump" from one to the next with "Continue Execution". I imagine something similar for measuring: mark two lines of code and make Visual Studio display the time elapsing from one to the next. Is there any feature/add-in in that direction?
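
    Absent such an add-in, one pattern that at least keeps the Stopwatch calls out of the classes being measured is a disposable timing scope. A sketch:

        using System;
        using System.Diagnostics;

        sealed class TimedScope : IDisposable
        {
            private readonly string label;
            private readonly Stopwatch watch = Stopwatch.StartNew();

            public TimedScope(string label) { this.label = label; }

            public void Dispose()
            {
                watch.Stop();
                Console.WriteLine("{0}: {1} ms", label, watch.ElapsedMilliseconds);
            }
        }

    The call site then wraps the block under measurement in using (new TimedScope("before refactoring")) { ... }, which can be deleted in one line once the before/after comparison is done.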

    Read the article

  • C# object initializer complexity: best practice

    - by Andrew Florko
    I was very excited when the object initializer appeared in C#:

        MyClass a = new MyClass();
        a.Field1 = Value1;
        a.Field2 = Value2;

    can be rewritten more briefly:

        MyClass a = new MyClass
        {
            Field1 = Value1,
            Field2 = Value2
        };

    Object initializer code is more obvious, but when the number of properties reaches a dozen and some of the assignments deal with nullable values, it's hard to debug where the "null reference error" is: Visual Studio points at the whole object initializer as the error location. Nowadays I use object initializers only for straightforward assignment of error-free properties.

    How do you use object initializers for complex assignments, or is it bad practice to use dozens of assignments at all? Thank you in advance!
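
    The usual workaround is to temporarily expand the initializer back into per-property assignments so the debugger stops on the exact line that throws. A sketch, with MyClass and the nullable field made up to mirror the question's example:

        using System;

        class MyClass
        {
            public int Field1;
            public int Field2;
        }

        class Program
        {
            static void Main()
            {
                int? value1 = null;

                // Initializer form: the debugger reports the whole statement on failure.
                // var a = new MyClass { Field1 = value1.Value, Field2 = 2 };

                // Expanded form: the exception now points at the exact assignment.
                var a = new MyClass();
                a.Field1 = value1.Value;   // throws here, on its own line
                a.Field2 = 2;
            }
        }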

    Read the article

  • Accessing variables of an object of a particular class through a different class within the constructor

    - by Haxed
    class Student {
        private String name;

        public Student(String name){
            this.name = name;
        }

        public String getName(){
            return name;
        }
    }

    class StudentServer {
        public StudentServer(){
            Student[] s = new Student[30];
            s[0] = new Student("Nick");

            System.out.println(s[0]);           // LINE 01: But this compiles, although it prints junk
            System.out.println(s[0].getName()); // LINE 02: I get an error: "cannot find symbol"
        }

        public static void main(String[] args){
            new StudentServer();
        }
    }

    Hey, there are two lines I want the reader to focus on. The first line prints junk as usual, but surprisingly the second one gives me an error. Do you know why? Many thanks

    Read the article

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case became a tree. Being a great fan of smart pointers, I use shared_ptrs exclusively. However, in this case, the children in the tree will need access to their parent (for example, beings on a map need to be able to access the map data, i.e. the data of their parent). The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being, we need a pointer to the parent; the smart-pointer way is to use a non-owning reference, ergo a weak_ptr.

    However, I once read that locking a weak_ptr is an expensive operation. Maybe that's not true anymore, but considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance.

    Hence the question: What is the performance penalty of locking a weak_ptr? How significant is it?

    Read the article

  • error: typedef name may not be a nested-name-specifier

    - by Autopulated
    I am trying to do something along the lines of this answer, and struggling:

        $ gcc --version
        gcc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4)

        file.cpp:7: error: template argument 1 is invalid
        file.cpp:7: error: typedef name may not be a nested-name-specifier

    And the offending part of the file:

        template <class R, class C, class T0=void, class T1=void, class T2=void>
        struct MemberWrap;

        template <class R, class C, class T0>
        struct MemberWrap<R, C, T0>{
            typedef R (C::*member_t)(T0);
            typedef typename boost::add_reference<typename T0>::type> TC0; // <---- offending line

            MemberWrap(member_t f)
                : m_wrapped(f){
            }

            R operator()(C* p, TC0 p0){
                GILRelease guard;
                return (p->*(this->m_wrapped))(p0);
            }

            member_t m_wrapped;
        };

    Read the article

  • Performance of managed C++ vs. unmanaged/native C++

    - by bsobaid
    I am writing a very high-performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++? And why?

    Managed C++ deals with the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one avoid the dynamic memory allocation which causes a performance hit, if it is all transparent to the programmer and handled by the CLR?

    So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?
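
    On the narrower point of avoiding per-event allocation under the CLR: allocation isn't entirely out of the programmer's hands. A sketch of one common idiom — reusing buffers instead of allocating per event — with all names purely illustrative:

        using System.Collections.Generic;

        // Reuse event buffers so the steady state allocates nothing
        // and the garbage collector has no per-event work to do.
        class BufferPool
        {
            private readonly Stack<byte[]> free = new Stack<byte[]>();
            private readonly int bufferSize;

            public BufferPool(int bufferSize, int initialCount)
            {
                this.bufferSize = bufferSize;
                for (int i = 0; i < initialCount; i++)
                    free.Push(new byte[bufferSize]);      // all allocation happens up front
            }

            public byte[] Rent()
            {
                return free.Count > 0 ? free.Pop() : new byte[bufferSize];
            }

            public void Return(byte[] buffer)
            {
                free.Push(buffer);                        // recycle instead of letting the GC collect
            }
        }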

    Read the article

  • What are some good code optimization methods?

    - by esac
    I would like to understand good code optimization methods and methodology.

    - How do I keep from doing premature optimization if I am thinking about performance already?
    - How do I find the bottlenecks in my code?
    - How do I make sure that over time my program does not become any slower?
    - What are some common performance errors to avoid? (e.g., I know it is bad in some languages to return from inside the catch portion of a try{} catch{} block)

    Read the article
