Search Results

Search found 14924 results on 597 pages for 'selector performance'.


  • How to set MinWorkingSet and MaxWorkingSet in a 64-bit .NET process?

    - by Gravitas
    How do I set MinWorkingSet and MaxWorkingSet for a 64-bit .NET process? P.S. I can set MinWorkingSet and MaxWorkingSet for a 32-bit process as follows:

        [DllImport("KERNEL32.DLL", EntryPoint = "SetProcessWorkingSetSize", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        internal static extern bool SetProcessWorkingSetSize(IntPtr pProcess, int dwMinimumWorkingSetSize, int dwMaximumWorkingSetSize);

        [DllImport("KERNEL32.DLL", EntryPoint = "GetCurrentProcess", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        internal static extern IntPtr MyGetCurrentProcess();

        // In main():
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, int.MaxValue, int.MaxValue);

    Update: Unfortunately, even if we make this call, garbage collection trims the working set back down anyway, bypassing MinWorkingSet (see "Automatic GC.Collect()" in the diagram below; diagram not shown). Question: Is there a way to lock the working set (the green line) to 1 GB, to avoid the spike in page faults (the red lines) that occurs when allocating new memory into the process? P.S. Every time a page fault occurs, it blocks the thread for 250 µs, which hits application performance badly.
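
    A note on porting the declaration above to 64-bit: the native SetProcessWorkingSetSize takes SIZE_T parameters, so marshaling them as IntPtr (4 bytes in a 32-bit process, 8 in a 64-bit one) works for both bitnesses. A minimal sketch of that signature, assuming only that values above 2 GB are needed; it does not address the GC trimming problem:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        internal static class WorkingSet64
        {
            // SIZE_T maps to IntPtr: 4 bytes in a 32-bit process, 8 in a 64-bit one.
            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool SetProcessWorkingSetSize(
                IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize);

            public static void Set(long minBytes, long maxBytes)
            {
                IntPtr handle = Process.GetCurrentProcess().Handle;
                if (!SetProcessWorkingSetSize(handle, (IntPtr)minBytes, (IntPtr)maxBytes))
                    throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
            }
        }

        // Usage: WorkingSet64.Set(1L << 30, 2L << 30); // min 1 GB, max 2 GB

    Alternatively, System.Diagnostics.Process already exposes MinWorkingSet and MaxWorkingSet properties typed as IntPtr, which wrap the same Win32 call.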

    Read the article

  • Problem running iPhone application on iPhone from Xcode (and in Instruments)

    - by Ben
    Hi, I have a problem running one application on the iPhone from Xcode (or Instruments). When I try to run the app I get the error message "Failed to upload XXX.app" in the bottom left corner of Xcode. The strange thing is that it actually uploads the app to the iPhone but doesn't start it (after this I can start the app by hand on the iPhone). So without being able to start the app from Xcode or Instruments I have no chance of debugging/performance testing. Any advice on what might be going wrong here? The iPhone console shows me this:

        Thu Oct 1 14:25:18 unknown mobile_installationd[1976] <Error>: 00808e00 install_embedded_profile: Skipping the installation of the embedded profile
        Thu Oct 1 14:25:23 unknown SpringBoard[25] <Warning>: Reloading and rendering all application icons.

    Other applications work fine. I've tried this on two iPhones (both 3.1) with the same result. I am running Xcode 3.2 on Snow Leopard. Regards

    Read the article

  • ASP.Net Authentication with MVC2--how to integrate with DB?

    - by alchemical
    I'm trying to understand the authentication sample project that a new MVC2 project opens with in VS2010. It essentially lets you register, log in, etc. I looked briefly through the code that implements this, and it looked fairly complicated (10 tables, 40 sprocs, 10 views, 1 model, 1 controller, etc.). Is it best to utilize this provided framework for authentication? If so, how would I integrate it with my own database models (which have user and role tables, etc.)? Also, if I use their framework, are there any performance issues at higher traffic volumes (like SO, for example)? Do I become responsible for maintaining the authentication DB as well in this case?
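
    For comparison, one common route is to skip the provider schema entirely and use forms authentication against your own user table; a minimal sketch, where MyUserRepository and its lookup are hypothetical placeholders for your own data access:

        using System.Web.Mvc;
        using System.Web.Security;

        public static class MyUserRepository
        {
            // Stub: replace with a lookup against your own Users table.
            public static bool ValidateUser(string userName, string password)
            {
                return false; // TODO: query your database here
            }
        }

        public class AccountController : Controller
        {
            [HttpPost]
            public ActionResult LogOn(string userName, string password)
            {
                if (MyUserRepository.ValidateUser(userName, password))
                {
                    // Issues the standard forms-auth cookie; no Membership tables needed.
                    FormsAuthentication.SetAuthCookie(userName, false);
                    return RedirectToAction("Index", "Home");
                }
                ModelState.AddModelError("", "Invalid user name or password.");
                return View();
            }
        }

    With this approach you keep your existing user/role tables and own their maintenance, at the cost of re-implementing features the Membership provider gives you for free (lockout, password reset, etc.).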

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. [workflow diagram not shown] I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName     | PropValue
        -----------+--------------+------------------
        1          | EmpID        | 4
        1          | EmpName      | James Jones
        1          | Reason       | Vacation
        1          | BeginDate    | 2/24/2010
        1          | EndDate      | 2/30/2010
        1          | Destination  | Spectate Swamp
        2          | ...          | ...

    I think this is a pretty good, extensible design; we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that; I'm not completely sure. I'm parsing the XML like this (would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of this was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow, plug it into a workflow engine like Windows Workflow Foundation, and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: [SOA diagram not shown] And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA diagram above). [Visio diagram not shown]
    The requirements for this system are (note: I don't control these, they were given to me by management):

        - Main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system).
        - Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company voice mail system, but that is not a "hard" requirement.
        - Performance requirements: must handle 250,000 transactions per second; should be able to handle up to 20,000 employees (right now we have 3); 99.99% uptime ("four nines") expected.
        - Must be secure against outside hacking, but users cannot be required to enter a username/password.
        - Platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc.
        - Time to complete: 6 to 8 weeks.

    My questions are:

        - Is this a good design for the system so far?
        - Am I using all of the recommended best practices for these technologies?
        - How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
        - Am I missing any important components?
        - Will this be extensible enough to support any scenario via end-user configuration?
        - Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
        - Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
        - How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

    Many thanks for your advice,
    Aaron
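
    On the parsing step: since the configuration is already XML, an XML parser avoids the brittleness of the Regex approach. A minimal sketch using System.Xml.Linq, assuming the file layout shown above (untested):

        using System;
        using System.Xml.Linq;

        class VacationPropertiesConfig
        {
            public string[] Names;
            public string[] Types;
            public bool[] Required;

            public static VacationPropertiesConfig Load(string path)
            {
                // Parse the document instead of pattern-matching the raw text.
                XElement root = XElement.Load(path); // <VacationProperties>
                return new VacationPropertiesConfig
                {
                    Names = ((string)root.Element("PropertyNames")).Split(','),
                    Types = ((string)root.Element("PropertyTypes")).Split(','),
                    Required = Array.ConvertAll(
                        ((string)root.Element("PropertiesRequired")).Split(','), bool.Parse)
                };
            }
        }

    The same goes for the persistence step: parameterized commands (SqlParameter) would avoid the SQL-injection risk of concatenating the property values directly into the INSERT statements.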

    Read the article

  • how can I code a recursive query in an Entity Framework model?

    - by Greg
    Hi, I have a model which includes NODES and RELATIONSHIPS (that tie the nodes together via a parent_node/child_node arrangement). Q1: Is there any way in EF / LINQ-to-Entities to perform a query on nodes (e.g. context.Nodes...) to find, say, "all parents" or "all children" in the graph? Q2: If there isn't in LINQ-to-Entities, is there any other way to do this besides writing a method that manually goes through and does it? Q3: If manual is the only way to do it, should I be concerned about the number of database hits going out as the method keeps recursing through the data? Or, more specifically, is there any EF caching-type feature that might assist here in keeping the method performant from a "number of database hits" point of view? Thanks
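
    For reference, the usual client-side shape of Q2 is a loop that issues one query per level; a sketch, where Node, Id, ParentId, and MyEntities are hypothetical stand-ins for the actual model (EF has no recursive query operator):

        using System.Collections.Generic;
        using System.Linq;

        static class NodeGraphQueries
        {
            // Walks up the graph, one database round trip per level.
            public static IEnumerable<Node> GetAncestors(MyEntities context, int nodeId)
            {
                Node current = context.Nodes.FirstOrDefault(n => n.Id == nodeId);
                while (current != null && current.ParentId != null)
                {
                    int parentId = current.ParentId.Value; // copy before using in the lambda
                    current = context.Nodes.FirstOrDefault(n => n.Id == parentId);
                    if (current != null)
                        yield return current;
                }
            }
        }

    On Q3: each loop iteration above is a separate database hit; the common way to collapse them into one is a recursive CTE in a stored procedure exposed to EF as a function import.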

    Read the article

  • Software Engineering undergraduate project ideas

    - by Nasser Hajloo
    There were similar posts at:

        - Computer science undergraduate project ideas
        - Ideas for Software Engineering Thesis Project
        - Senior computer engineering project ideas?
        - Final Year Project (Software Engineering) Idea

    I read all of them, and my answer didn't fit any of those. Actually, I'm looking for ideas which:

        1. Help me extend the functionality of open source software (like creating a useful add-in)
        2. Let me create a scientific paper (ideas worth publishing)
        3. Or create a unique and useful application from scratch (like performance tools, profilers, analyzers, and other similar tools)

    I know C#, ASP.NET, and SQL. So with all these conditions, what do you think is best to do? Let me know your ideas, whatever they are. Any idea is appreciated.

    Read the article

  • ASP.NET MVC 2 disable cache for browser back button in partial views

    - by brainnovative
    I am using Html.RenderAction<CartController>(c => c.Show()); on my master page to display the cart on all pages. The problem is when I add an item to the cart and then hit the browser back button: it shows the old cart (from cache) until I hit the refresh button or navigate to another page. I've tried this and it works perfectly, but it disables the cache globally for the whole page and for all pages in my site (since this action method is used on the master page). I need to keep caching enabled for several other partial views (action methods) for performance reasons. I wouldn't like to use client-side script with AJAX to refresh the cart (and login view) on page load - but that's the only solution I can think of right now. Does anyone know better?
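
    For what it's worth, the no-cache headers that make the back button re-fetch can at least be scoped with an action filter rather than applied globally; a sketch (note the headers still apply to the whole response that renders the cart, not to the partial alone):

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class NoBrowserCacheAttribute : ActionFilterAttribute
        {
            // Sends no-cache headers so the browser re-requests the page on Back.
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache);
                cache.SetNoStore();
                cache.SetExpires(DateTime.UtcNow.AddYears(-1));
                base.OnResultExecuting(filterContext);
            }
        }

        // Usage: decorate only the page actions whose views render the cart.
        // [NoBrowserCache]
        // public ActionResult Index() { ... }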

    Read the article

  • Problem setting up Master-Master Replication in MySQL

    - by Andrew
    I am attempting to set up Master-Master replication on two MySQL database servers. I have followed the steps in this guide, but it fails in the middle of Step 4 with SHOW MASTER STATUS; it simply returns an empty set. I get the same 3 errors in both servers' logs.

    MySQL errors on SQL1:

        [ERROR] Failed to open the relay log './sql1-relay-bin.000001' (relay_log_pos 4)
        [ERROR] Could not find target log during relay log initialization
        [ERROR] Failed to initialize the master info structure

    MySQL errors on SQL2:

        [ERROR] Failed to open the relay log './sql2-relay-bin.000001' (relay_log_pos 4)
        [ERROR] Could not find target log during relay log initialization
        [ERROR] Failed to initialize the master info structure

    The errors make no sense because I'm not referencing those files in any of my configurations. I'm using Ubuntu Server 10.04 x64, and my configuration files are copied below. I don't know where to go from here or how to troubleshoot this. Please help. Thanks.

    /etc/mysql/my.cnf on SQL1:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = <SQL1's IP>
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1

        log_error = /var/log/mysql/error.log

        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        #       other settings you may need to change.
        server-id = 1
        replicate-same-server-id = 0
        auto-increment-increment = 2
        auto-increment-offset = 1
        master-host = <SQL2's IP>
        master-user = slave_user
        master-password = "slave_password"
        master-connect-retry = 60
        replicate-do-db = db1
        log-bin = /var/log/mysql/mysql-bin.log
        binlog-do-db = db1
        binlog-ignore-db = mysql
        relay-log = /var/lib/mysql/slave-relay.log
        relay-log-index = /var/lib/mysql/slave-relay-log.index
        expire_logs_days = 10
        max_binlog_size = 500M
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    /etc/mysql/my.cnf on SQL2 is identical apart from these four lines:

        bind-address = <SQL2's IP>
        server-id = 2
        auto-increment-offset = 2
        master-host = <SQL1's IP>

    Read the article

  • CUDA Driver API vs. CUDA runtime

    - by Morten Christiansen
    When writing CUDA applications, you can either work at the driver level or at the runtime level, as illustrated in this image (the libraries are CUFFT and CUBLAS for advanced math). [image not shown] I assume the tradeoff between the two is increased performance for the low-level API, but at the cost of increased code complexity. What are the concrete differences, and are there any significant things you cannot do with the high-level API? I am using CUDA.net for interop with C#, and it is built as a copy of the driver API. This encourages writing a lot of rather complex code in C#, while the C++ equivalent would be simpler using the runtime API. Is there anything to win by doing it this way? The one benefit I can see is that it is easier to integrate intelligent error handling with the rest of the C# code.

    Read the article

  • Is SiteCore slow and buggy?

    - by Larsenal
    I've seen plenty of negative comments regarding performance and general bugginess. However, to be fair, most of these look like they were from the v5.3 timeframe. Have they fixed all of those issues in v6.0? Is it an excellent product? Some examples of the complaints: Maybe it’s just a case of user error, but this guy says, “some pages take as much as 20s to render…” (source). Here, in the comments, one fellow remarks, “Sitecore backend is incredible slow. Sitecore developement is really pain, it takes from 2 minutes to start sitecore and many many seconds to do small backend operations. They claim to have a quick client, but that is a BIG LIE. All developers in my company really hate sitecore for being so slow.” (source). Another search yielded, “Sitecore’s users listed three issues as number one: licensing, the server as a resource hog, and the site’s slow responsiveness.” (source)

    Read the article

  • ASP.NET AJAX ToolkitScriptManager Issue With Combining Scripts

    - by Kumar
    I have an ASP.NET 3.5 web application in which I am using the ToolkitScriptManager as below:

        <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" EnablePageMethods="true"
            ScriptMode="Release" LoadScriptsBeforeUI="false" runat="server" CombineScripts="false">
            <CompositeScript>
                <Scripts>
                    <asp:ScriptReference Path="~/JavaScript/jquery-1.4.1.min.js" />
                    <asp:ScriptReference Path="~/JavaScript/Validators.js" />
                </Scripts>
            </CompositeScript>
        </ajaxToolkit:ToolkitScriptManager>

    This works fine, but from a performance standpoint it is not good, as the pages make a lot of requests to the WebResource.axd and ScriptResource.axd files. When I changed the CombineScripts property to true, my ASP.NET AJAX control extenders stopped working. What is the reason for this weird behavior, and is there a fix for it?

    Read the article

  • JSR-299 CDI / Weld vs. Google Guice

    - by deamon
    Weld, the JSR-299 Contexts and Dependency Injection reference implementation, considers itself a kind of successor to Spring and Guice:

        "CDI was influenced by a number of existing Java frameworks, including Seam, Guice and Spring. However, CDI has its own, very distinct, character: more typesafe than Seam, more stateful and less XML-centric than Spring, more web and enterprise-application capable than Guice. But it couldn't have been any of these without inspiration from the frameworks mentioned and lots of collaboration and hard work by the JSR-299 Expert Group (EG)."
        - http://docs.jboss.org/weld/reference/latest/en-US/html/1.html

    What makes Weld more capable for enterprise applications compared to Guice? Are there any advantages or disadvantages compared to Guice? What do you think about Guice AOP compared to Weld interceptors? What about performance?

    Read the article

  • Compiled Linq & String.Contains

    - by sharru
    I'm using LINQ-to-SQL, and I use compiled LINQ for better performance. I have a Users table with an INT field called "LookingFor" that can have the following values: 1, 2, 3, 12, 123, 13, 23. I wrote a query to return users based on the "LookingFor" column; I want to return all users whose value contains the "lookingFor" parameter (not only those equal to it). For example, if user.LookingFor = 12 and the query parameter is 1, this user should be selected.

        private static Func<NeDataContext, int, IQueryable<string>> MainSearchQuery =
            CompiledQuery.Compile((NeDataContext db, int lookingFor) =>
                from u in db.Users
                where (lookingFor == -1 ? true : u.LookingFor.ToString().Contains(lookingFor.ToString()))
                select u.username);

    This WORKS with non-compiled LINQ but throws an error when compiled. How do I fix it to work with compiled LINQ? I get this error: "Only arguments that can be evaluated on the client are supported for the String.Contains method."
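
    One possible workaround (an untested sketch): do the int-to-string conversion outside the compiled query and pass the pattern in as a string, so the Contains argument is a plain parameter rather than an expression that needs client-side evaluation:

        private static Func<NeDataContext, string, IQueryable<string>> MainSearchQuery =
            CompiledQuery.Compile((NeDataContext db, string lookingFor) =>
                from u in db.Users
                where (lookingFor == null || u.LookingFor.ToString().Contains(lookingFor))
                select u.username);

        // Usage: null means "no filter" (replacing the -1 sentinel):
        // var names = MainSearchQuery(db, lookingFor == -1 ? null : lookingFor.ToString());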

    Read the article

  • Search algorithm (with a sort algorithm already implemented)

    - by msr
    Hello, I'm writing a Java application and I'm facing some doubts concerning performance. I have a PriorityQueue which guarantees that the element removed is the one with the greatest priority. That PriorityQueue holds instances of class Event (which implements the Comparable interface). Each Event is associated with an Entity. The size of the PriorityQueue could be huge, and very frequently I have to remove the events associated with a given entity. Right now I'm using an iterator to run over the whole PriorityQueue. However, I'm finding it heavy, and I wonder if there are better alternatives to search for and remove the events associated with an entity "xpto". Any suggestions? Thanks!

    Read the article

  • ListView items not clickable with HorizontalScrollView inside

    - by Felix
    I have quite a complicated ListView. Each item looks something like this:

        > LinearLayout (vertical)
        > LinearLayout (horizontal)
        > include (horizontal LinearLayout with two TextViews)
        > include (ditto)
        > include (ditto)
        > TextView
        > HorizontalScrollView (this guy is my problem)
        > LinearLayout (horizontal)

    In my activity, when an item is created (getView() is called) I add dynamic TextViews to the LinearLayout inside the HorizontalScrollView (besides filling out the other, simpler stuff). Amazingly, performance is pretty good. My problem is that when I added the HorizontalScrollView, my list items became unclickable. They don't get the orange background when clicked and they don't fire the OnItemClickListener I have set up (to do a simple Log.d call). How can I make my list items clickable again?

    Read the article

  • Method 'SingleOrDefault' not supported by Linq to Entities

    - by user300992
    I read other posts on a similar problem with using SingleOrDefault in LINQ-to-Entities; some suggested using First(), and others suggested using an extension method to implement Single(). This code throws the exception:

        Movie movie = (from a in movies
                       where a.MovieID == "12345"
                       select a).SingleOrDefault();

    If I convert the object query to a List using .ToList(), SingleOrDefault() actually works perfectly without throwing any error. My questions are: Is it not good to convert to a List? Is it going to be a performance issue for more complicated queries? What does it get translated to in SQL?

        Movie movie = (from a in movies.ToList()
                       where a.MovieID == "12345"
                       select a).SingleOrDefault();
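
    For context on the ToList() question: movies.ToList() executes the query immediately and pulls every row to the client, after which the where and SingleOrDefault() run as LINQ-to-Objects in memory - that is why the error disappears, and also why it won't scale. The commonly suggested alternative keeps the filter in the generated SQL by using First()/FirstOrDefault(), which the first EF release does translate (a sketch, assuming MovieID is a string):

        // Translates to roughly: SELECT TOP (1) ... WHERE MovieID = '12345'
        Movie movie = (from a in movies
                       where a.MovieID == "12345"
                       select a).FirstOrDefault();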

    Read the article

  • C# - Recursive / Reflection Property Values

    - by tyndall
    What is the best way to go about this in C#?

        string propPath = "ShippingInfo.Address.Street";

    I'll have a property path like the one above, read from a mapping file. I need to be able to ask the Order object what the value of the expression below would be:

        this.ShippingInfo.Address.Street

    Balancing performance with elegance. All object graph relationships should be one-to-one. Part 2: how hard would it be to add the capability for it to grab the first item if part of the path is a List<T> or something like it?
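
    A minimal sketch of the usual reflection approach: split the path and walk it with GetProperty/GetValue (the class and property names here just mirror the example above). The Part 2 behavior is the as-IList branch:

        using System;
        using System.Collections;
        using System.Reflection;

        static class PropertyPathEvaluator
        {
            // Walks a path like "ShippingInfo.Address.Street" from a root object.
            public static object GetValue(object root, string propPath)
            {
                object current = root;
                foreach (string name in propPath.Split('.'))
                {
                    if (current == null) return null;
                    PropertyInfo prop = current.GetType().GetProperty(name);
                    if (prop == null)
                        throw new ArgumentException("No property '" + name + "' on " + current.GetType().Name);
                    current = prop.GetValue(current, null);

                    // Part 2: if the value is list-like, grab the first item.
                    IList list = current as IList;
                    if (list != null)
                        current = list.Count > 0 ? list[0] : null;
                }
                return current;
            }
        }

    For the performance half of the trade-off, caching the PropertyInfo objects per path string (or compiling the path to a delegate with expression trees) avoids repeating the reflection lookups on every call.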

    Read the article

  • Erlang: 2 database on 2 webserver ??

    - by xRobot
    I have created a blogging system with PHP + PostgreSQL. Now I want to add a web chat (in real time, for millions of users simultaneously) where every message is saved to the database. I am thinking of using Erlang + Mnesia on a different web server for this. The messages table will look like this: message_id, user_id, message, date. user_id should relate to the users table in the PostgreSQL database on the other web server. How can I do that without losing performance? If you have any other creative solutions, please tell me ;).

    Read the article

  • Silverlight local storage

    - by IMHO
    As you may know, Silverlight has support for local storage. We are looking at creating a Silverlight application that will work in offline mode. This application may require quite a bit of data to be cached on the client side. The obvious solution - local storage with some sort of XML-based structure - won't work, as our PoC showed, due to performance issues. We are looking at several 3rd-party solutions that implement light database engines on top of Silverlight local storage. If you have solved this problem in the past or have any ideas, I would appreciate some pointers.
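
    For reference, the baseline a PoC like this would test is isolated storage plus XML serialization; a minimal sketch of that pattern (file name and types are arbitrary here). Note the default isolated storage quota is small (1 MB, raisable via IsolatedStorageFile.IncreaseQuotaTo), which compounds the performance problem for larger caches:

        using System.IO;
        using System.IO.IsolatedStorage;
        using System.Xml.Serialization;

        public static class OfflineCache
        {
            // Serializes an object graph to a file in the app's isolated store.
            public static void Save<T>(T data, string fileName)
            {
                using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
                using (IsolatedStorageFileStream stream = store.CreateFile(fileName))
                {
                    new XmlSerializer(typeof(T)).Serialize(stream, data);
                }
            }

            public static T Load<T>(string fileName)
            {
                using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
                using (IsolatedStorageFileStream stream = store.OpenFile(fileName, FileMode.Open))
                {
                    return (T)new XmlSerializer(typeof(T)).Deserialize(stream);
                }
            }
        }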

    Read the article

  • Delphi 2010 Professional and remote database access

    - by Kico Lobo
    Hi, when looking into which version of Delphi 2010 to buy, we found the following limitation on the Professional one: "Delphi 2010 Professional is designed for developers building high-performance desktop GUI and touch-screen applications with (or without) embedded and local database persistence." What does this really mean? Does it mean we'll only face this restriction if we choose to use the native VCL components for database access? And what if we choose to use ADO components instead? In that case, how can Delphi stop us from accessing remote database servers? Has anyone here ever tried this? Going even further: if we choose to use a database like Firebird, which is just one file, and used a network-mapped drive, could we be facing the same limitation? Supposing we opt for ADO, what would be the main consequences?

    Read the article

  • c# Network Programming - HTTPWebRequest Scraping

    - by masterguru
    Hi, I am building a web scraping application. It should scrape a complex web site with concurrent HttpWebRequests from a single host to a single target web server. The application should run on Windows Server 2008. A single HttpWebRequest for data could take from 1 to 4 minutes to complete (because of long-running DB operations). I should have at least 100 parallel requests to the target web server, but I have noticed that when I use more than 2-3 long-running requests I get big performance issues (request timeouts/hanging). How many concurrent requests can I have in this scenario from a single host to a single target web server? Can I use thread pools in the application to run the parallel HttpWebRequests? Will I have any issues with the default outbound HTTP connection/request limits? What about request timeouts when I reach the outbound connection limits? What would be the best setup for my scenario? Any help would be appreciated. Thanks
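
    On the "2-3 requests" symptom: by default, client applications are limited to two concurrent HTTP connections per host (ServicePointManager.DefaultConnectionLimit), and the default HttpWebRequest.Timeout is 100 seconds - both of which this workload would hit. A sketch of the usual adjustments (the limit and timeout values are illustrative):

        using System;
        using System.Net;

        static class ScraperSetup
        {
            public static void Configure()
            {
                // The default is 2 concurrent connections per host for client apps.
                // Set once at startup, before the first request is issued.
                ServicePointManager.DefaultConnectionLimit = 100;
            }

            public static HttpWebRequest CreateLongRunningRequest(Uri uri)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
                request.Timeout = 5 * 60 * 1000; // 5 minutes in ms; the default is 100 s
                return request;
            }
        }

    The same limit can also be raised in app.config via <system.net><connectionManagement><add address="*" maxconnection="100" /></connectionManagement></system.net>.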

    Read the article

  • Collision Detection - Java - Rectangle

    - by Trizicus
    I would like to know if this is a good idea that conforms to best practices and does not lead to obscenely confusing code or major performance hits: making my own collision detection class that extends the Rectangle class, and then, when instantiating that object, doing something such as Collision col = new Rectangle(); - should I do that, or is it something that should be avoided? I am aware that I 'can', but should I? I want to extend the Rectangle class because of its contains() and intersects() methods; should I be doing that, or should I be doing something else for 2D collision detection in Java?

    Read the article

  • OutOfMemoryException - out of ideas II

    - by Captain Comic
    Hello, this question is related to my previous question. The story: I have an application which consumes a lot of memory, judging by the VM Size column in Task Manager. I am trying to find out what consumes this amount of memory. You can see in the picture below that the VM size is 2.46 GB. [screenshot not shown] OK, now let's look at the .NET performance counters: committed and reserved bytes add up to only 1.2 GB. [screenshot not shown] Now let's look at WinDbg SOS debugging and run the !eeheap -gc command: the heap size used by the GC is only 340 MB. Where is the rest of the used memory? I need to discover why the VM size in Task Manager is 2.46 GB.

    Read the article

  • Why not port Linux kernel to Common Lisp?

    - by rplevy
    Conventional wisdom states that OS kernels must be written in C in order to achieve the necessary levels of performance. This has been the justification for not using more expressive high level languages. However, for a few years now implementations of Common Lisp such as SBCL have proven to be just as performant as C. What then are the arguments against redoing the kernel in a powerfully expressive language, namely Common Lisp? I don't think anyone (at least anyone who knows what they are talking about) could argue against the fact that the benefits in transparency and readability would be tremendous, not to mention all the things that can't be done in C that can be done in Lisp, but there may be implementation details that would make this a bad idea.

    Read the article

  • SPWeb ProcessBatchData

    - by BeraCim
    Hi all: I'm currently using SPWeb's ProcessBatchData to update a lot of list items/folders in the SharePoint database. For example, in one batch update I have ~1200 objects to update. I also run it numerous times during different phases of my app. I have found that it is generally faster and more efficient for updating large numbers of items. But lately the performance of the app has decreased as more processes (and also more batch updates) get added to it. I was wondering whether ProcessBatchData could be a suspect in slowing down the app - e.g. temporarily locking up the DB, draining resources, lagging the network, etc. Thanks.
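
    For context, a condensed sketch of the pattern in question; the CAML batch format is shown with hypothetical field values, and chunking one huge batch into several smaller ProcessBatchData calls is a common way to shorten how long each call ties up the content database:

        using System.Text;
        using Microsoft.SharePoint;

        static class BatchUpdater
        {
            // Builds one CAML batch that creates items, then submits it in one call.
            public static string AddItems(SPWeb web, SPList list, string[] titles)
            {
                StringBuilder sb = new StringBuilder();
                sb.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
                sb.Append("<ows:Batch OnError=\"Continue\">");
                for (int i = 0; i < titles.Length; i++)
                {
                    sb.AppendFormat("<Method ID=\"{0}\">", i + 1);
                    sb.AppendFormat("<SetList>{0}</SetList>", list.ID);
                    sb.Append("<SetVar Name=\"Cmd\">Save</SetVar>");
                    sb.Append("<SetVar Name=\"ID\">New</SetVar>");
                    sb.AppendFormat(
                        "<SetVar Name=\"urn:schemas-microsoft-com:office:office#Title\">{0}</SetVar>",
                        titles[i]);
                    sb.Append("</Method>");
                }
                sb.Append("</ows:Batch>");
                return web.ProcessBatchData(sb.ToString()); // returns per-method result XML
            }
        }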

    Read the article
