Search Results

Search found 16126 results on 646 pages for 'wcf performance'.

  • Silverlight performance with many loaded controls

    - by gius
    I have a Silverlight application with many DataGrids (from the Silverlight Toolkit), each on its own view. If several DataGrids are open, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be elsewhere in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow down the display of other DataGrids. Note that the DataGrids are NOT disposed and then recreated; they remain in memory, and only their parent control is hidden and shown again.

    I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this native Silverlight library responsible for drawing. The problem is not in the DataGrid control itself - I tried replacing it with Xceed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more loaded controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip

    UPDATE: Using the VS11 profiler on the sample provided suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why does it get slower as more controls are loaded elsewhere? Is there a way to improve the performance?
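
    To make the MeasureOverride observation concrete, a minimal instrumentation sketch (a hypothetical subclass, not code from the sample) can count how often layout measures a grid during a view change:

        // Hypothetical instrumentation: count MeasureOverride calls per view
        // change to see whether hidden grids are being re-measured too.
        using System.Windows;
        using System.Windows.Controls;

        public class InstrumentedDataGrid : DataGrid
        {
            public static int MeasureCalls; // reset this before each tab switch

            protected override Size MeasureOverride(Size availableSize)
            {
                MeasureCalls++; // grows quickly if every realized cell is measured
                return base.MeasureOverride(availableSize);
            }
        }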

  • Mysql Performance Question - Essentially about normalizing efficiency

    - by freqmode
    Hi there. Just a quick question about database performance; I'll outline my site's purpose below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a words table for each user or to keep one massive words table. This site will be used by entire schools, so a single words table would be massive! The database structure is as follows.

    A user table with:

        User_ID PRIMARY KEY, Username, First, Last, Password, Email, Country, Research, Standings, SendInfo, Donated, JoinedOn, LastLogin, Logins, Correct, Attempts, Admin, Active

    And one word table with:

        User_ID PRIMARY KEY, Word, Vocab, Spell, Defined, DefinedAttempted, Spelled, SpelledAttempted, Sentenced, SentencedAttempted

    So what I'm asking is, performance-wise, should I create a new table for each user when they join the site - each user could have hundreds or thousands of words over time? Or is it better to have one massive table with thousands and thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.
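
    For reference, a minimal sketch of the single-table design (column names taken from the question, types assumed) would key the words table on (User_ID, Word), so per-user lookups stay indexed however large the table grows:

        -- Assumed types; the composite primary key keeps one user's words
        -- together in the index, so filtering by User_ID stays fast even
        -- with millions of rows. Assumes a 'users' table with User_ID.
        CREATE TABLE words (
            User_ID INT         NOT NULL,
            Word    VARCHAR(64) NOT NULL,
            Vocab   TINYINT     NOT NULL DEFAULT 0,
            Defined INT         NOT NULL DEFAULT 0,
            -- remaining counter columns follow the same pattern
            PRIMARY KEY (User_ID, Word),
            FOREIGN KEY (User_ID) REFERENCES users (User_ID)
        ) ENGINE=InnoDB;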

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase throughput to 100 times normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

        RAM buffers
        commit log
        binary log
        actual tables
        all of the above on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (the file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
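
    For what it's worth, InnoDB does expose two real knobs that move it toward the write-back behavior described above; a sketch of the usual durability trade-off (verify the implications against your MySQL version before relying on them):

        -- Flush the redo log to the OS on commit but fsync only about once
        -- per second, so a crash can lose up to a second of transactions.
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- Let the OS decide when to sync the binary log instead of syncing
        -- on every commit.
        SET GLOBAL sync_binlog = 0;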

  • Displaying performance metrics in a modern web app?

    - by Charles
    We're updating our ancient internal PHP application at work. Right now, we gather extensive performance measurements on every pageview and log them to the database. Additionally, users requested that some of the metrics be displayed at the bottom of the page. This worked out pretty well for us, because the last thing that the application does on every request is include the file containing the HTML footer.

    The updated parts of the application use an MVC framework and a Dispatch/Request/Response loop. The page footer is no longer the last thing done. In fact, it could very well be the first thing done, before the rest of the page is created. Because we can grab the Response before it's returned to the user, we could try to include placeholders for the performance metrics in the footer and simply replace them with the actual numbers, but this strikes me as a bad idea somehow.

    How do you handle this in your modern web app? While we're using PHP, I'm curious how it's done in a Ruby/Rails app, and in your favorite Python framework.
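
    One common shape for the placeholder approach (a minimal sketch with hypothetical framework names, not any specific framework's API) is to render the footer with a marker and substitute the real numbers as the final step of the dispatch loop:

        <?php
        // $requestStart is captured at the very top of index.php.
        // Render normally; the footer template emits a literal '%%ELAPSED%%'.
        $response = $app->dispatch($request);   // hypothetical dispatch API

        // Last thing before sending: swap the placeholder for the real metric.
        $elapsed = microtime(true) - $requestStart;
        $body = str_replace('%%ELAPSED%%',
                            sprintf('%.1f ms', $elapsed * 1000),
                            $response->getBody());   // hypothetical response API
        $response->setBody($body);
        $response->send();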

  • PHP: Opening/closing tags & performance?

    - by Tom
    Hi. This may be a silly question, but as someone relatively new to PHP, I'm wondering whether there are any performance-related issues with frequently opening and closing PHP tags in HTML template code, and if so, what best practices might be when working with PHP tags. My question is not about the importance/correctness of closing tags, or about which style of code is more readable, but about how the document gets parsed/executed and what impact that might have on performance. To illustrate, consider the following two extremes.

    Mixing PHP and HTML tags (PHP tag opened once):

        <?php echo '<tr>
            <td>'.$variable1.'</td>
            <td>'.$variable2.'</td>
            <td>'.$variable3.'</td>
            <td>'.$variable4.'</td>
            <td>'.$variable5.'</td>
        </tr>' ?>

    Separating PHP and HTML tags (PHP tag opened five times):

        <tr>
            <td><?php echo $variable1 ?></td>
            <td><?php echo $variable2 ?></td>
            <td><?php echo $variable3 ?></td>
            <td><?php echo $variable4 ?></td>
            <td><?php echo $variable5 ?></td>
        </tr>

    I'd be interested in hearing some views on this, even if it's just to hear that it makes no difference. Thanks.

  • Performance Problems with Django's F() Object

    - by JayhawksFan93
    Has anyone else noticed performance issues using Django's F() object? I am running Windows XP SP3 and developing against the Django trunk. A snippet of the models I'm using and the query I'm building is below. When I have the F() object in place, each call to a QuerySet method (e.g. filter, exclude, order_by, distinct) takes approximately 2 seconds, but when I comment out the F() clause the calls are sub-second. I had a co-worker test it on his Ubuntu machine, and he is not experiencing the same performance issues I am with the F() clause. Is anyone else seeing this behavior?

        from django.db import models
        from django.db.models import F, Q

        class Move(models.Model):
            state_meaning = models.CharField(max_length=16, db_index=True, blank=True, default='')
            drop = models.ForeignKey(Org, db_index=True, null=False, default=1,
                                     related_name='as_move_drop')

        class Split(models.Model):
            state_meaning = models.CharField(max_length=16, db_index=True, blank=True, default='')
            move = models.ForeignKey(Move, related_name='splits')
            pickup = models.ForeignKey(Org, db_index=True, null=False, default=1,
                                       related_name='as_split_pickup')
            pickup_date = models.DateField(null=True, default=None)
            drop = models.ForeignKey(Org, db_index=True, null=False, default=1,
                                     related_name='as_split_drop')
            drop_date = models.DateField(null=True, default=None, db_index=True)

        def get_splits(begin_date, end_date):
            start = timer.clock()  # 'timer' is the poster's timing helper
            qs = Split.objects \
                .filter(state_meaning__in=['INPROGRESS', 'FULFILLED'],
                        drop=F('move__drop'),  # <<< the line in question
                        pickup_date__lte=end_date)
            elapsed = timer.clock() - start
            print 'qs1 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.filter(Q(drop_date__gte=begin_date) | Q(drop_date__isnull=True))
            elapsed = timer.clock() - start
            print 'qs2 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.exclude(move__state_meaning='UNFULFILLED')
            elapsed = timer.clock() - start
            print 'qs3 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.order_by('pickup_date', 'drop_date')
            elapsed = timer.clock() - start
            print 'qs7 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.distinct()
            elapsed = timer.clock() - start
            print 'qs8 took %.3f' % elapsed

  • performance issue: difference between select s.* vs select *

    - by kamil
    Recently I had a problem with the performance of my query. The background is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I finally discovered that a query with a select prefix like:

        select sth.* from Something as sth ...

    is 300 times slower than a query started this way:

        select * from Something as sth ...

    Could somebody help me and answer why that is so? Some external documents on this would be really useful.

    The tables used for testing were:

    SALES_UNIT contains some basic info about a sales unit node, such as its name. The only association is to the table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date.

    SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. It has no association with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM.

    The actual queries I ran were:

        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

        select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

    Same query; both use a left join, and the only difference is the select.

  • Hosting Multiple hosts under IIS for WCF

    - by Josh
    Hello everyone. I need to use multiple hosts under IIS for WCF. We're using wsHttpBinding and we've had NO success so far, even after checking a couple of similar questions on Stack Overflow. Here is my web.config:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
            <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
                <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                  <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/>
                  <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
                  <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
                  <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
                </sectionGroup>
              </sectionGroup>
            </sectionGroup>
          </configSections>
          <appSettings/>
          <connectionStrings>
            <add name="ConString" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=WebSMS20July;Integrated Security=True"/>
          </connectionStrings>
          <system.web>
            <customErrors mode="Off"/>
            <!--<httpRuntime maxRequestLength="999999999" useFullyQualifiedRedirectUrl="true" executionTimeout="459999999" appRequestQueueLimit="99999999" delayNotificationTimeout="999999999" maxWaitChangeNotification="999999999" shutdownTimeout="9999999999"/>-->
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </assemblies>
            </compilation>
            <authentication mode="Windows"/>
            <pages>
              <controls>
                <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </controls>
            </pages>
            <httpHandlers>
              <remove verb="*" path="*.asmx"/>
              <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
            </httpHandlers>
            <httpModules>
              <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </httpModules>
          </system.web>
          <system.codedom>
            <compilers>
              <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <providerOption name="CompilerVersion" value="v3.5"/>
                <providerOption name="WarnAsError" value="false"/>
              </compiler>
            </compilers>
          </system.codedom>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <modules>
              <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </modules>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated"/>
              <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </handlers>
          </system.webServer>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
              <baseAddressPrefixFilters>
                <add prefix="http://localhost:12350"/>
              </baseAddressPrefixFilters>
            </serviceHostingEnvironment>
            <bindings>
              <wsHttpBinding>
                <binding name="wsHttpBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
                  <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/>
                  <security mode="None">
                    <transport clientCredentialType="None"/>
                    <message clientCredentialType="None" negotiateServiceCredential="false" establishSecurityContext="false"/>
                  </security>
                </binding>
                <binding name="NewBinding0"/>
              </wsHttpBinding>
            </bindings>
            <services>
              <service behaviorConfiguration="WcfService1.Service1Behavior" name="WcfService1.Service1">
                <endpoint address="" binding="wsHttpBinding" bindingConfiguration="wsHttpBinding" bindingName="wsHttpBinding" contract="WcfService1.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
                <endpoint binding="wsHttpBinding" bindingConfiguration="wsHttpBinding" bindingName="wsHttpBinding2" contract="WcfService1.IService1" listenUri="http://localhost:8090"/>
                <host>
                  <baseAddresses>
                    <add baseAddress="http://mydomain/mywcfservice/Service1.svc"/>
                    <add baseAddress="http://localhost/mywcfservice/Service1.svc"/>
                  </baseAddresses>
                </host>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfService1.Service1Behavior">
                  <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpGetEnabled="true"/>
                  <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                  <serviceDebug includeExceptionDetailInFaults="false"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Here's my service host factory class:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        namespace WcfService1
        {
            public class CustomHostFactory : ServiceHostFactory
            {
                protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
                {
                    //CustomHost customServiceHost =
                    //    new CustomHost(serviceType, baseAddresses[1]);
                    //return customServiceHost;
                    ServiceHost host = new ServiceHost(serviceType, baseAddresses[0]);
                    return host;
                }

                class CustomHost : ServiceHost
                {
                    public CustomHost(Type serviceType, params Uri[] baseAddresses)
                        : base(serviceType, baseAddresses)
                    {
                    }

                    protected override void ApplyConfiguration()
                    {
                        base.ApplyConfiguration();
                    }
                }
            }
        }

    And the contents of my Service1.svc file:

        <%@ ServiceHost Language="C#" Debug="true" Service="WcfService1.Service1" CodeBehind="Service1.svc.cs" Factory="WcfService1.CustomHostFactory" %>

    What could possibly be wrong? I would appreciate any help. Thanks.

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third-party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct-attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server), the performance drops almost ten-fold. This is unacceptably slow in our case.

    Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share, and perhaps lots of network round trips. Is this possible, and is there anything that can be done about it?

  • Get The Most From MySQL Database With MySQL Performance Tuning Training

    - by Antoinette O'Sullivan
    Get the most from MySQL Server by improving your understanding of performance tuning techniques.

    MySQL Performance Tuning Class

    In this 4-day class, you'll learn practical, safe, highly efficient ways to optimize performance for the MySQL Server. You can take this class as:

    Training-on-Demand: Start training within 24 hours of registering and follow the instructor-led lecture material through streaming video at your own pace. Schedule lab time to perform the hands-on exercises at your convenience.
    Live-Virtual Class: Follow the live instructor-led class from your own desk - no travel required. There is already a range of events on the schedule to suit different time zones, with delivery in languages including English and German.
    In-Class Event: Travel to a training center to follow this class.

    For more information on this class, to see the schedule, or to register interest in additional events, go to http://oracle.com/education/mysql

    Troubleshooting MySQL Performance with Sveta Smirnova

    During this one-day, live-virtual event, you get a unique opportunity to hear Sveta Smirnova, author of MySQL Troubleshooting, share her in-depth experience of identifying and solving performance problems with a MySQL database. And you can benefit from this opportunity without incurring any travel costs!

    Dimitri's Blog

    If MySQL performance is a topic that interests you, you should be following Dimitri Kravtchuk's blog.

    For more information on any aspect of the Authentic MySQL Curriculum, go to http://oracle.com/education/mysql.

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so much benefit, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling improve performance, and by what percentage? Would it be good to have a default at the application-config level, or at the DBML level, to specify that all select queries be compiled? And the same questions apply to the Entity Framework's CompiledQuery class.

    However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM):

    "Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs."

    Also from 10 Tips to Improve your LINQ to SQL Application Performance:

    "If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query."

    However, I feel that many developers are not well enough informed about the benefits of Compile. I think tools like FxCop and ReSharper should check queries and suggest when compiling is recommended.

    Related articles for LINQ to SQL:
    MSDN: How to: Store and Reuse Queries (LINQ to SQL)
    10 Tips to Improve your LINQ to SQL Application Performance

    Related articles for Entity Framework:
    MSDN: CompiledQuery Class
    Exploring the Performance of the ADO.NET Entity Framework - Part 1
    Exploring the Performance of the ADO.NET Entity Framework - Part 2
    ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
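
    For concreteness, a minimal sketch of the pattern under discussion (the data context and entity names here are hypothetical):

        using System;
        using System.Data.Linq;
        using System.Linq;

        static class Queries
        {
            // Compiled once into a reusable delegate: the expression tree is
            // translated to SQL on first use instead of on every call.
            public static readonly Func<MyDataContext, string, IQueryable<Customer>>
                CustomersByCity = CompiledQuery.Compile(
                    (MyDataContext db, string city) =>
                        db.Customers.Where(c => c.City == city));
        }

        // Usage: the first call pays the compile cost, later calls reuse it.
        // var customers = Queries.CustomersByCity(db, "Seattle").ToList();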

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance.

    Usage scenario: 50-150 clients, a high rate (up to 100s/second) of small messages (10s of bytes each) to and from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) The OS is Windows Server 2003; the hardware is 2 x 4-core X5355.

    Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side.

    Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network- or hardware-layer problems.

    Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article "Winsock: Get Closer to the Wire with High-Performance Sockets in .NET", and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
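
    For reference, a minimal sketch of the usual alternative to thread-per-client blocking reads - asynchronous receives, available in .NET 2.0 - looks something like this (message framing and buffer pooling omitted):

        using System;
        using System.Net.Sockets;

        static class ClientReader
        {
            // Post an async receive instead of dedicating a blocked thread to
            // each client; completions run on I/O thread-pool threads.
            public static void StartReceive(Socket client)
            {
                byte[] buffer = new byte[4096];
                client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
                    delegate(IAsyncResult ar)
                    {
                        int read = client.EndReceive(ar);
                        if (read > 0)
                        {
                            // parse 'read' bytes from 'buffer' and enqueue the
                            // messages here, then post the next receive
                            StartReceive(client);
                        }
                    }, null);
            }
        }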

  • .NET Web Service (asmx) Timeout Problem

    - by Barry Fandango
    I'm connecting to a vendor-supplied ASMX web service and sending a set of data over the wire. My first attempt hit the 1-minute timeout that Visual Studio puts in the app.config file by default when you add a service reference to a project. I increased it to 10 minutes: another timeout. 1 hour: another timeout.

        Error: System.TimeoutException: The request channel timed out while waiting for a reply after 00:59:59.6874880. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://servername/servicename.asmx' has exceeded the allotted timeout of 01:00:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.WebException: The operation has timed out
           at System.Net.HttpWebRequest.GetResponse()
           [... lengthy stack trace follows]

    I contacted the vendor. They confirmed the call may take over an hour (don't ask; they are the bane of my existence). I increased the timeout to 10 hours to be on the safe side. However, the web service call continues to time out at 1 hour. The relevant app.config section now looks like this:

        <basicHttpBinding>
          <binding name="BindingName" closeTimeout="10:00:00" openTimeout="10:00:00"
                   receiveTimeout="10:00:00" sendTimeout="10:00:00" allowCookies="false"
                   bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                   maxBufferSize="2147483647" maxBufferPoolSize="524288"
                   maxReceivedMessageSize="2147483647" messageEncoding="Text"
                   textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2147483647"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <security mode="None">
              <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
              <message clientCredentialType="UserName" algorithmSuite="Default" />
            </security>
          </binding>
        </basicHttpBinding>

    Pretty absurd, but regardless, the timeout still kicks in at 1 hour. Unfortunately every change takes at least an additional hour to test. Is there some internal limit that I'm bumping into - another timeout setting to be changed somewhere? All changes to these settings up to one hour had the expected effect. Thanks for any help you can provide!

  • .NET Webservice Client: Auto-retry upon call failure

    - by Yon
    Hi. We have a .NET client calling a Java web service over SSL. Sometimes the call fails due to poor connectivity (the .NET client is a UI that is used from the weirdest locations). We would like to implement an automatic retry mechanism that retries a failed call X times before giving up. This should happen only for specific types of connectivity exceptions (and not for exceptions generated by the web service itself). We tried to find out how to do it at the Binding/Channel level, but failed... any ideas? Thanks, yonadav
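
    Absent a binding-level hook, a minimal client-side wrapper is one option. This sketch assumes a WCF-generated proxy (the question mentions Binding/Channel), so transport failures surface as TimeoutException or CommunicationException; 'client.GetData' in the usage line is hypothetical:

        using System;
        using System.ServiceModel;
        using System.Threading;

        static class Retry
        {
            // Retries only transport-level failures. FaultException derives
            // from CommunicationException but represents an error returned by
            // the service itself, so it is rethrown immediately.
            public static T Call<T>(Func<T> call, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return call();
                    }
                    catch (Exception ex)
                    {
                        bool transient = ex is TimeoutException ||
                                         (ex is CommunicationException && !(ex is FaultException));
                        if (!transient || attempt == maxAttempts) throw;
                        Thread.Sleep(TimeSpan.FromSeconds(attempt)); // simple linear backoff
                    }
                }
            }
        }

        // Usage: var result = Retry.Call(() => client.GetData(id), 3);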

  • Performance Testing a .NET Smart Client Application (.NET ClickOnce technology)

    - by jn29098
    Has anyone ever had to run performance tests on a ClickOnce application? I have engaged with a vendor who had trouble setting up their toolset with our software because it is smart-client based; they are understandably geared more toward purely browser-based applications. I wonder if anyone has had to tackle this before, and if so, would you recommend any vendors who use industry-standard tools such as LoadRunner (which I assume can handle a smart client)? Thanks

  • Jython webapp performance

    - by DrPep
    I'm currently building a Jython web app but am concerned about Jython application performance. I take some comfort in the fact that I can write any compute-intensive tasks in a separate Java jar and invoke them from Jython. Has anyone had problems doing this, or does anyone foresee issues with such a setup?
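
    For what it's worth, the Java-jar escape hatch is straightforward in Jython; a minimal sketch (the jar path and class names are hypothetical):

        # Jython imports compiled Java classes directly once the jar is on
        # sys.path, so hot paths can run as plain Java bytecode.
        import sys
        sys.path.append('/opt/app/lib/fastmath.jar')   # hypothetical jar

        from com.example.fastmath import Primes        # hypothetical Java class

        # The heavy computation below runs inside the JVM as compiled Java,
        # not as interpreted Jython:
        print Primes.countBelow(10000000)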

  • OTRS slow performance main queue listing

    - by mrml
    We have the problem of a very slow queue listing. If there are more than 15 tickets in a single queue, it takes up to 4-5 seconds to build the view. This problem has occurred since we started using OTRS 3.1. We are running OTRS 3.1.4 with the KIX4OTRS extension on a virtualized Ubuntu 10.04 LTS. So far we have tried: the known performance tweaks provided in the manuals, creating extra database indexes, and installation on physical machines with Ubuntu 12.04 / 12.04.1 (no positive effects). Any ideas?

  • Wpf: Performance Issue

    - by viky
    I am working on a WPF application with a TreeView in which each node represents a different datatype. These datatypes have properties defined and use data templates to show their properties. My application reads from XML and creates the tree accordingly. My problem is that when I load it, it is too slow. I want to know about the tricks that will help me improve the performance of my (or any) WPF application.
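
    One commonly suggested tweak for slow WPF item trees - offered here only as a hedged sketch, not a guaranteed fix for this app - is UI virtualization, so item containers are created only for visible nodes:

        using System.Windows.Controls;

        static class TreePerf
        {
            // 'myTreeView' is a placeholder for your TreeView instance.
            public static void EnableVirtualization(TreeView myTreeView)
            {
                // Create containers only for visible nodes, and recycle them
                // while scrolling instead of rebuilding the visual tree.
                VirtualizingStackPanel.SetIsVirtualizing(myTreeView, true);
                VirtualizingStackPanel.SetVirtualizationMode(myTreeView, VirtualizationMode.Recycling);
            }
        }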

  • Star Schema vs Snowflake Schema performance

    - by Megawolt
    Hi. I'm beginning to develop a social sharing website, so I'm curious about database design schemas. In data mining, a star schema is the best one, but what about for a social sharing website? By the nature of such sites, there will (I hope :)) be many users at the same time. Which schema is better for performance under heavy use?

  • High performance web (-services) applications

    - by User Friendly
    Hi, I'd like to become a guru in high-performance web and web-service applications. What technologies/patterns/skills do you recommend looking at? Basically, I have good skills in ASP.NET/.NET-based web development, but I'd like to know how big things are built (on any platform, not just the .NET technology stack). Thank you.

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
    (Background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c)

    To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .NET (and which is explained in the link above): that .NET is optimized for int performance rather than short or byte.

        DateTime t;
        long a, b, c;

        t = DateTime.Now;
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        a = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (short index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        b = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (byte index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        c = DateTime.Now.Ticks - t.Ticks;

        Console.WriteLine(a.ToString());
        Console.WriteLine(b.ToString());
        Console.WriteLine(c.ToString());

    This gives roughly consistent results in the area of:

        ~950000
        ~2000000
        ~1700000

    which is in line with what I would expect to see. However, when I try repeating the loops for each data type, like this:

        t = DateTime.Now;
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        a = DateTime.Now.Ticks - t.Ticks;

    the numbers are more like:

        ~4500000
        ~3100000
        ~300000

    which I find puzzling. Can anyone offer an explanation? NOTE: In the interest of comparing like for like, I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.
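
    As an aside, a sketch of the same measurement using Stopwatch may be worth trying: DateTime.Now has coarse (roughly 10-15 ms) resolution, and Console.WriteLine I/O tends to dominate loops this small, either of which could swamp any difference between loop variable types:

        using System;
        using System.Diagnostics;

        // Stopwatch uses the high-resolution performance counter, so short
        // loops are timed far more precisely than with DateTime.Now ticks.
        Stopwatch sw = Stopwatch.StartNew();
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        sw.Stop();
        Console.WriteLine(sw.ElapsedTicks);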

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real-life performance measurements? "Real life" meaning you are not just looping over the code in question; it is a short snippet within a large application running in a typical user scenario.

    My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers, often at a multiple of the average time. (The behavior is understandable considering the factors contributing to the first-execution penalty.)

    My goal is to model aggregate values that reasonably reflect this and can be calculated from aggregate values (as for the Gaussian, where mu and sigma are calculated from N, the sum of values, and the sum of squares). In other terms, the number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately, and its average will be biased strongly even by a very small percentage of outliers.

    I am looking for ideas, especially if this has been attempted or analysed before. I've checked various distribution models, and I think I could work something out, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)

    Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately. Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - I haven't found anything, but with some assumptions about the distribution it should be possible in principle.

    Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian deviation plus a first-run penalty is OK. That doesn't work anymore for the distributions found above.

    [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
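
    For reference, the aggregate-only Gaussian fit alluded to above is the standard moment estimate. With N samples, S_1 the sum of values and S_2 the sum of squares:

        \mu = \frac{S_1}{N}, \qquad \sigma^2 = \frac{S_2}{N} - \mu^2

    Both accumulators are O(1) in memory, which is exactly the constraint stated; the open question is finding a comparably cheap set of accumulators for a long-tailed (e.g. mixture or log-normal) model.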

  • Enable MEX in a Web.Config

    - by Conor
    Hi folks. How do I enable/create a MEX endpoint in the web.config below so I can view the service from my browser? I have tried a few variations found by googling, but VS always complains about them ("not a valid child element", etc.).

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
            <services>
              <service name="MyApp.MyService" behaviorConfiguration="WebServiceBehavior">
                <endpoint address="" binding="webHttpBinding" contract="MyApp.IMyService"
                          behaviorConfiguration="JsonBehavior">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
              </service>
            </services>
            <behaviors>
              <endpointBehaviors>
                <behavior name="JsonBehavior">
                  <webHttp/>
                </behavior>
              </endpointBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Cheers, Conor
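
    A sketch of the usual additions (untested against this exact config, so treat it as a starting point): the config above references a service behavior named WebServiceBehavior but never defines one, so the standard shape is to define it with serviceMetadata and add a mex endpoint to the service:

        <!-- Inside the <service> element: a metadata exchange endpoint. -->
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />

        <!-- And the missing service behavior, with HTTP metadata publishing on. -->
        <behaviors>
          <serviceBehaviors>
            <behavior name="WebServiceBehavior">
              <serviceMetadata httpGetEnabled="true" />
            </behavior>
          </serviceBehaviors>
          <!-- keep the existing endpointBehaviors section alongside this -->
        </behaviors>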
