Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

  • Anyone NOT using a Web Framework? Why?

    - by tom
    I'm well aware of the many reasons to use a web framework. I'm just wondering whether anyone out there is using absolutely no web framework whatsoever to develop their web projects. I would really love to know the reason(s) why you're not using a web framework. For the sake of this discussion, your programming language of choice does not matter. Some possibilities for discussion:

    - You don't hide behind an ORM.
    - You don't rely on any sort of templating system.
    - You think MVC is a really nice TLA but lacks an essential vowel or two.
    - No need for any additional JavaScript framework tomfoolery.
    - You just write as much code as possible in your native programming language(s).

    Summary of reasons thus far:

    - Language learning opportunities.
    - Specific performance reasons (write-intensive transaction processing).
    - Seeking more nuanced control over your data and applications (less abstraction).
    - You're building your own framework! Prove to yourself that you can succeed (or fail) just like the big framework-building gurus.
    - Integration issues with unpopular/legacy technologies (exotic databases or protocols come to mind).
    - Big company, lots of code, no talent nor buy-in present to move to a web framework.
    - Some frameworks really lock you in and cannot perpetually grow along with your needs. These few black sheep don't make it easy to jump outside of the framework, write some custom code, and easily jump back in. When you finally escape the asylum, you'll never look back.

  • Developing a 2D Game for Windows Phone 8

    - by Vaccano
    I would like to develop a 2D game for Windows Phone 8. I am a professional application developer by day, and this seems like a fun hobby, but I have been disappointed trying to get going. It seems that 2D games (far and away the majority of games) do not have any real support: the Windows Phone makers did not include support for Direct2D, so unless you are planning to make a fully 3D app, you are out of luck. If you just want to make a nice 2D app, these are your choices:

    1. Write your game using XAML and C# (performance issues?).
    2. Write your game using Direct3D, but only draw on one plane.
    3. Use the DirectX Tool Kit found on CodePlex, which lets you use the dying XNA framework's API for development.

    Number 3 seems the best for my game, but I hate to waste my time learning the XNA API when Microsoft has clearly stated that it is not going to be supported going forward. Number 2 would work, but 3D development is really hard, and I would rather not have to do all that to get the 2D effect. (Assuming Direct2D is easier; I have yet to look into that.) Number 1 seems the easiest, but I worry that my app will not run well if it is based off of XAML rendering rather than DirectX. What is the suggested method from Microsoft? And who decided that 2D games were going to get shortchanged?
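
    For what it's worth, option 3 is smaller than it sounds: the DirectX Tool Kit's SpriteBatch mirrors XNA's. A rough C++ sketch (it assumes the WP8 Direct3D template already gives you a device context and a loaded ID3D11ShaderResourceView; the member names are illustrative, not from any template):

        // Minimal DirectX Tool Kit sprite drawing; m_spriteBatch is created
        // once against the device context, then Begin/Draw/End per frame.
        #include "SpriteBatch.h"
        #include <memory>

        std::unique_ptr<DirectX::SpriteBatch> m_spriteBatch;

        void CreateResources(ID3D11DeviceContext* context)
        {
            m_spriteBatch.reset(new DirectX::SpriteBatch(context));
        }

        void Render(ID3D11ShaderResourceView* texture)
        {
            m_spriteBatch->Begin();
            m_spriteBatch->Draw(texture, DirectX::XMFLOAT2(100.f, 100.f));
            m_spriteBatch->End();
        }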

  • Field Members vs Method Variables?

    - by Braveyard
    Recently I've been thinking about the performance difference between class field members and method variables. What exactly I mean is shown in the example below. Let's say we have a DataContext object for LINQ to SQL:

        class DataLayer
        {
            ProductDataContext context = new ProductDataContext();

            public IQueryable<Product> GetData()
            {
                return context.Products.Where(t => t.ProductId == 2);
            }
        }

    In the example above, context will be stored on the heap, and the GetData method's variables will be removed from the stack after the method is executed. So let's examine the following example to make a distinction:

        class DataLayer
        {
            public IQueryable<Product> GetData()
            {
                ProductDataContext context = new ProductDataContext();
                return context.Products.Where(t => t.ProductId == 2);
            }
        }

    (*1) So okay, the first thing we know is that if we define the ProductDataContext instance as a field, we can reach it everywhere in the class, which means we don't have to create the same object instance all the time. But let's say we are talking about ASP.NET: once the user presses the submit button, the post data is sent to the server, the events are executed, and the posted data is stored in a database via the method above, so it is probable that the same user sends different data one request after another. If I understand correctly, after the page is executed, the finalizers come into play and clear things from memory (from the heap), which means we lose our instance variables from memory as well, and after another post, the DataContext has to be created once again for the new page cycle. So it seems the only benefit of declaring it as a field visible to the whole class is point (*1) above. Or is there something else? Thanks in advance... (If I said something incorrect, please correct me.)
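
    For the ASP.NET scenario, the usual advice (a sketch, not gospel; it reuses the same hypothetical ProductDataContext) is to treat the context as a cheap, short-lived unit of work and dispose it deterministically instead of leaving cleanup to finalizers:

        class DataLayer
        {
            public List<Product> GetData()
            {
                // One context per unit of work; 'using' disposes it immediately.
                using (var context = new ProductDataContext())
                {
                    // Materialize before the context is disposed; returning a raw
                    // IQueryable here would be enumerated after disposal.
                    return context.Products.Where(t => t.ProductId == 2).ToList();
                }
            }
        }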

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ would get all customers from the database ("SELECT * FROM Customers"), move them to a Customers array, and then make a search in that array using .NET methods. That would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
                     && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                                    && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
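
    The suspicion about translation is right: an IQueryable is an expression tree that is translated to SQL only when enumerated, and a Contains over a primitive-valued subquery normally becomes a SQL IN, so composition like this usually does work and the actual error message is worth isolating. A sketch against the post's hypothetical DB context (Title is an assumed column):

        // Deferred execution: no SQL runs until the query is enumerated.
        var popularBookIds =
            from sale in DB.Sales
            where sale.SalesAmount >= 50
                  && sale.DateOfSale >= DateTime.Now.AddDays(-10)
            select sale.BookID;                      // still IQueryable<int>, nothing has run

        var popularBooks =
            from book in DB.Books
            where popularBookIds.Contains(book.ID)   // composes into one SQL query (IN subquery)
            select book;

        foreach (var book in popularBooks)           // the SQL executes here
            Console.WriteLine(book.Title);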

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any free index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set, deallocate a suffix with clear, and let nextClearBit() find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, plus be limited to int range). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are being heavily disputed. Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65550) and a database sequence for the other ones (which means that when the user frees a Route Target, it will not become available again): would anyone shed some light? Maybe some kind soul already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
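
    A rough, untested sketch of a hybrid pool in Java (the names are mine, and the freed-high-values queue would need to live in the database to survive restarts): keep the disputed low range in the existing BitSet, and recycle freed high values through a queue, falling back to a plain counter while the queue is empty:

        import java.util.ArrayDeque;
        import java.util.BitSet;
        import java.util.Deque;

        public class RouteTargetPool {
            private static final long LOW_RANGE = 65536L;           // BitSet-managed suffixes
            private final BitSet low = new BitSet((int) LOW_RANGE);
            private final Deque<Long> freedHigh = new ArrayDeque<Long>();
            private long nextHigh = LOW_RANGE;                      // grows toward 2^32

            public synchronized long allocate() {
                int candidate = low.nextClearBit(1);                // suffix 0 is unused
                if (candidate < LOW_RANGE) {                        // low values always preferred
                    low.set(candidate);
                    return candidate;
                }
                if (!freedHigh.isEmpty()) {
                    return freedHigh.pop();                         // reuse a freed high value
                }
                if (nextHigh <= 0xFFFFFFFFL) {
                    return nextHigh++;
                }
                throw new IllegalStateException("Route Target range exhausted");
            }

            public synchronized void free(long rt) {
                if (rt < LOW_RANGE) {
                    low.clear((int) rt);
                } else {
                    freedHigh.push(rt);
                }
            }
        }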

  • NHibernate + GridView + TargetInvocationException

    - by Scott
    For our grid views, we're setting the data sources as a list of results from an NHibernate query. We're using lazy loading, so the objects are actually proxied... most of the time. In some instances the list will consist of a mix of types, Student and Composition_Aop_Proxy_jklasjdkl31231, where the proxy implements the same members as the Student class. We've still got the session open, so the lazy loading would resolve fine if the GridView didn't throw an error about the different types in the grid. Our current workaround is to clone the object, which results in fetching all of the data that can be lazily loaded, even though most of it won't be accessed... ever. This, however, converts the proxy into an actual object, and the grid view is happy. The performance implications kind of scare me as we're getting closer to rolling the code out as is. I've tried evicting the object after a save, which should ensure that everything is a proxy, but this doesn't seem like a good idea either. Does anyone have any suggestions/workarounds?

  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design type book that focuses heavily on building your objects/software in such a way that it would be:

    - Low coupling
    - High cohesion
    - Easily maintainable / extendable
    - Easy to test
    - Easy to navigate / debug

    The above characteristics are the most important ones, but maybe it would also include (though not necessarily) designing for:

    - Performance - don't want to design a system and at the end find out it's dog slow :)
    - Scalability - again, don't want to design something and at the end find out it won't scale.

    I'd also prefer (but again, not necessarily):

    - Something newer - architectural principles seem to gradually evolve/improve over time, and I'd like something with current thinking.
    - .NET as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it. Doesn't really matter if it's in VB.NET or C#.

    Some of the topics it would cover are how to minimize dependencies and using interfaces throughout your solution rather than concrete classes. Maybe it would contrast/compare some of the newest design principles like DDD, the Repository pattern, etc. I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based upon the topics I talked about above first. Is there such a book?

  • Is there an efficient way in LINQ to use a contains match if and only if there is no exact match?

    - by Peter
    I have an application where I am taking a large number of 'product names' input by a user and retrieving some information about each product. The problem is, the user may input a partial name or even a wrong name, so I want to return the closest matches for further selection. Essentially, if product name A exactly matches a record, return that; otherwise return any contains matches; otherwise return null. I have done this with three separate statements, and I was wondering if there was a more efficient way to do this. I am using LINQ to EF, but I materialize the products to a list first for performance reasons. productNames is a list of product names (input by the user); products is a list of product 'records'.

        var directMatches = from s in productNames
                            join p in products on s.ToLower() equals p.name.ToLower() into result
                            from r in result.DefaultIfEmpty()
                            select new { Key = s, Product = r };

        var containsMatches = from d in directMatches
                              from p in products
                              where d.Product == null && p.name.ToLower().Contains(d.Key)
                              select new { d.Key, Product = p };

        var matches = from d in directMatches
                      join c in containsMatches on d.Key equals c.Key into result
                      from r in result.DefaultIfEmpty()
                      select new { d.Key, Product = d.Product ?? (r != null ? r.Product : null) };
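
    One way to collapse the three statements (a sketch over the same in-memory lists; it keeps a single best match per name, so extend it with a Where clause instead of FirstOrDefault if you need every contains match):

        var matches =
            from s in productNames
            let key = s.ToLower()
            // Exact match wins; the contains match is only consulted when there is no exact hit.
            let exact = products.FirstOrDefault(p => p.name.ToLower() == key)
            select new
            {
                Key = s,
                Product = exact ?? products.FirstOrDefault(p => p.name.ToLower().Contains(key))
            };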

  • OpenGL shader: make a color "disappear"

    - by JFoulkes
    Hi, I'm new to OpenGL and shaders. I'm trying to do some augmented reality on the iPhone and messing about with shaders to alter a feed from the camera. What I'm trying to achieve is the appearance that an object in a picture has disappeared, by setting its color to match the surrounding color. I have a yellow rectangle and in it is a small red circle. I want to give the impression that the red circle has disappeared by setting its color to yellow. It won't always be solid colors, but I'm just trying to get the basics down first. Currently I have a simple shader which will make a red color lighter, but this isn't ideal because it doesn't get close to the surrounding color, and I want this to work for differently colored objects and differently colored surroundings. I'm not even 100% sure that shaders are what I need to be looking at, or even OpenGL; I'm using it because of the performance it gives on the iPhone. I'm basically asking:

    - Has anyone done or seen anything similar?
    - Am I barking up the wrong tree using OpenGL ES and GLSL?
    - Is this even possible?

    Cheers.
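
    A minimal fragment-shader sketch (GLSL ES; u_texture and v_texCoord are assumed to be wired up on the app side). This is chroma keying: it only fakes disappearance when the surroundings are one flat color, so a real version would sample neighbouring pixels instead of using a constant fill:

        precision mediump float;

        uniform sampler2D u_texture;   // the camera frame
        varying vec2 v_texCoord;

        const vec3 keyColor   = vec3(1.0, 0.0, 0.0);  // color to hide (red)
        const vec3 fillColor  = vec3(1.0, 1.0, 0.0);  // surrounding color (yellow)
        const float threshold = 0.4;                  // how close counts as "the" color

        void main() {
            vec4 color = texture2D(u_texture, v_texCoord);
            // Replace any pixel near the key color with the fill color.
            if (distance(color.rgb, keyColor) < threshold) {
                color.rgb = fillColor;
            }
            gl_FragColor = color;
        }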

  • How to use db4o IObjectContainer in a web application? (Container lifetime?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following:

    1. Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime.
    2. Create one IObjectContainer per request.
    3. Start a server, and get a client IObjectContainer for each database interaction.

    What are the implications of these options, in terms of performance and concurrency? Since the database is locked when an IObjectContainer is opened, I am pretty sure that option 2 would get me some problems with concurrency - would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from a modified object), and then store it using the same IObjectContainer. Is this true?

  • java.awt.Robot.keyPress for continuous keystrokes

    - by Deb
    So, here's my problem. I have a Java program which will send keystroke messages to a game (built in Unity), based on how the user interacts with an Android phone. (My Java program is a listener for the Android interaction over wi-fi.) In order to do this, I am using java.awt.Robot to send key presses to the game window. I have the following code block in my listener program:

        if (interacting) {
            Robot robot = new Robot();
            robot.keyPress(VK_A);
            robot.delay(20); // to simulate the normal keyboard rate
        }

    Now, the variable interacting will be true as long as the user presses down on the touch screen of the phone, and what I intend to achieve is a continuous chain of keystroke messages being delivered to the game (through the listener). However, this is severely affecting performance for some reason. I am noticing that the game becomes slow (rapidly dropping frame rates), and even the computer becomes slow in general. What's going wrong? Should I use a robot.keyRelease(VK_A) after each keyPress? But my game has a different action mapped to the release of a key, and I do not want rapid key presses and releases; what I really want is to simulate continuous keystrokes, in exactly the way it would behave if the user were pressing down the A key on their keyboard manually. Please help.
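
    One thing to rule out first (a sketch, assuming the block above runs inside a tight loop): java.awt.Robot is expensive to construct, so build it once and reuse it rather than allocating a new one on every pass:

        import java.awt.AWTException;
        import java.awt.Robot;
        import java.awt.event.KeyEvent;

        public class KeyHolder {
            private final Robot robot;               // constructed once, reused forever
            private volatile boolean interacting;    // flipped by the wi-fi listener thread

            public KeyHolder() throws AWTException {
                robot = new Robot();
            }

            public void setInteracting(boolean value) {
                interacting = value;
            }

            public void loop() {
                while (true) {
                    if (interacting) {
                        robot.keyPress(KeyEvent.VK_A);
                        robot.delay(20);             // approximate keyboard repeat rate
                    } else {
                        robot.delay(20);             // idle; avoids a busy spin
                    }
                }
            }
        }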

  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host.

    I would like to know if there is a way to tell the virtual memory manager that RAM pages are not dirty anymore when data is deleted. Should I use mmap and munmap when a block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages are to be considered.
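
    The madvise call is the usual answer here. A sketch (Linux-specific; for file-backed shared mappings, check the MADV_DONTNEED semantics before relying on it, since the behaviour differs from anonymous memory):

        /* Tell the kernel a block's pages are no longer needed, so dirty pages
         * can be dropped instead of written back (anonymous-mapping semantics). */
        #include <sys/mman.h>
        #include <stdio.h>

        #define BLOCK_SIZE (64UL * 1024 * 1024)

        /* Called when a 64 MB block has been processed and can be discarded. */
        static void release_block(void *block_start)
        {
            if (madvise(block_start, BLOCK_SIZE, MADV_DONTNEED) != 0)
                perror("madvise");
        }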

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance, I'd rather have it actually be a normal array. Here's the idea I had, but I don't know if it's a terrible idea:

        const int XWIDTH = 10, YWIDTH = 10;

        int main() {
            int * tempInts = new int[XWIDTH * YWIDTH];
            int ** ints = new int*[XWIDTH];
            for (int i = 0; i < XWIDTH; i++) {
                ints[i] = &tempInts[i * YWIDTH];
            }

            // do things with ints

            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point into one array I made all at once. The reason for the delete[] ints[0]; is because I'm actually doing this in a class, and it would save [trivial amounts of] memory to not save the original pointer. Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
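
    An alternative sketch (my illustration, not from the post): hide the x*YWIDTH+y arithmetic behind an accessor, so there is no second pointer array to allocate, delete, or chase through:

        #include <cstddef>
        #include <vector>

        class Grid {
        public:
            Grid(int xwidth, int ywidth)
                : ywidth_(ywidth),
                  cells_(static_cast<std::size_t>(xwidth) * ywidth) {}

            // grid(x, y) reads almost like grid[x][y], but the storage stays
            // one contiguous block, which is the cache-friendly part.
            int& operator()(int x, int y)       { return cells_[x * ywidth_ + y]; }
            int  operator()(int x, int y) const { return cells_[x * ywidth_ + y]; }

        private:
            int ywidth_;                 // row stride
            std::vector<int> cells_;     // owns the memory; no manual delete[]
        };

        // Usage: Grid g(XWIDTH, YWIDTH); g(3, 4) = 7;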

  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // Generate next value n
          // Process n, causing some side effects
          iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // Generate next value n
          Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, this is much less memory-efficient, because I am generating a large list that I don't really need to store. This leaves me with 3 choices:

    1. Write a tail-recursive function, bite the bullet, and refactor the value-processing code.
    2. Use a lazy list. This is not a memory-sensitive app (although it is performance-sensitive).
    3. Come up with a new approach.

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
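
    For the third choice, an Iterator gives exactly that: it is as lazy as a Stream but does not memoize, so processed values become garbage immediately. A sketch (Scala 2.8+; nextValue stands in for the post's "generate next value n"):

        // Lazily produce values without retaining them.
        def generateValues(initial: MyComplexType): Iterator[MyComplexType] =
          Iterator.iterate(nextValue(initial))(nextValue)

        generateValues(initialValue).take(2000000).foreach { n =>
          // process n, causing some side effects
        }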

  • Saving tags into a database table in CakePHP

    - by Cameron
    I have the following setup for my CakePHP app:

        Posts
            id
            title
            content

        Topics
            id
            title

        Topic_Posts
            id
            topic_id
            post_id

    So basically I have a table of topics (tags) that are all unique and have an id, and they can be attached to a post using the Topic_Posts join table. When a user creates a new post, they will fill in the topics by typing them into a textarea, separated by commas; these are saved into the Topics table if they do not already exist, and the references are saved into the Topic_Posts table. I have the models set up like so:

    Post model:

        class Post extends AppModel {
            public $name = 'Post';
            public $hasAndBelongsToMany = array(
                'Topic' => array('with' => 'TopicPost')
            );
        }

    Topic model:

        class Topic extends AppModel {
            public $hasMany = array('TopicPost');
        }

    TopicPost model:

        class TopicPost extends AppModel {
            public $belongsTo = array('Topic', 'Post');
        }

    And for the new post method I have this so far:

        public function add() {
            if ($this->request->is('post')) {
                //$this->Post->create();
                if ($this->Post->saveAll($this->request->data)) {
                    // Redirect the user to the newly created post (pass the slug for performance)
                    $this->redirect(array('controller' => 'posts', 'action' => 'view', 'id' => $this->Post->id));
                } else {
                    $this->Session->setFlash('Server broke!');
                }
            }
        }

    As you can see, I have used saveAll, but how do I go about dealing with the Topic data?
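
    A hedged sketch of the missing piece (the textarea field name is an assumption): split the comma-separated string, find or create each Topic, then let the HABTM save pick the ids up from data['Topic']['Topic'] before calling saveAll:

        // Assumes the textarea arrives as $this->request->data['Post']['topics'].
        $topicIds = array();
        $titles = array_map('trim', explode(',', $this->request->data['Post']['topics']));
        foreach ($titles as $title) {
            if ($title === '') {
                continue;
            }
            // Reuse an existing topic if one matches, otherwise create it.
            $existingId = $this->Post->Topic->field('id', array('Topic.title' => $title));
            if ($existingId) {
                $topicIds[] = $existingId;
            } else {
                $this->Post->Topic->create();
                $this->Post->Topic->save(array('Topic' => array('title' => $title)));
                $topicIds[] = $this->Post->Topic->id;
            }
        }
        // HABTM saves expect the related ids under Topic => Topic.
        $this->request->data['Topic']['Topic'] = $topicIds;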

  • To implement a remote desktop sharing solution

    - by Cameigons
    Hi, I'm in the planning/modeling phase of developing a remote desktop sharing solution, which must be web-browser based. In other words, a user will be able to see and interact with someone's remote desktop using his web browser. Everything the user who wants to share his desktop will need, besides his browser, is to install an add-in, which he's going to be prompted about when necessary. The add-in is required since (afaik) no browser technology allows desktop control from an app running within the browser alone. The add-in installation process must be as simple and transparent as possible to the user (similar to Adobe ConnectNow, in case anyone's acquainted with it). The user can share his desktop with lots of people at the same time, but concede desktop control to only one of them at a time (it makes no sense to do otherwise). Project requirements:

    - All technology employed must be open-source license compatible.
    - Both front ends are going to be in Flash (browser).
    - Must work on Linux, Windows XP (and later) and Mac OS X.
    - Must work at least with IE7 (and later) and Firefox 3.0 (and later).
    - At the very least, once the sharer's stream hits the server from where it'll be broadcast, from there on it must be broadcast in FLV (so I'm deciding whether to do the encoding on the client's machine (the one sharing the desktop) or send it in some other format to the server and encode it there).
    - Performance and scalability are important: it must be able to handle hundreds of users (one desktop sharer, the rest viewers).

    We'll definitely be using Red5. My doubts concern mostly implementing the desktop publisher side (add-in and streamer):

    1. Are you aware of other projects that I could look into for ideas? (I'm aware of bigbluebutton.org and code.google.com/p/openmeetings.)
    2. Should I base myself on VNC?
    3. Bearing in mind the need to have it working cross-platform, what language should I go with? (My team is very used to Java and I have some knowledge of C/C++, but anything goes really.)
    4. Any other advice is appreciated.

  • Bash PATH: How long is too long?

    - by ajwood
    Hi, I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish: inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run). I think it will be beneficial to separate the built packages enough that upgrading to a newer version of the quarantine won't require rebuilding the whole thing; I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts. One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be (either in number of characters, or in the number of directories it contains)? Might a path be so long that it affects performance? Thanks very much, Andrew. P.S. Any other wisdom that might help my design would be greatly appreciated :)
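
    A quick probe, for what it's worth (illustrative shell, not from the post): POSIX sets no separate cap on PATH itself, but on Linux each environment string has to fit within MAX_ARG_STRLEN (32 pages, i.e. 131072 bytes with 4 KB pages) at exec time, and lookup cost grows roughly linearly with the number of directories searched before a hit:

        # How big is PATH right now, in characters and in directories?
        echo "PATH length: ${#PATH} characters"
        echo "PATH entries: $(echo "$PATH" | tr ':' '\n' | wc -l)"

        # Total space the kernel allows for argv + environ at exec time.
        getconf ARG_MAX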

  • Invoking an overloaded method where all arguments implement the same interface

    - by double07
    Hello, my starting point is the following:

    - I have a method, transform, which I overloaded to behave differently depending on the type of arguments that are passed in (see transform(A a1, A a2) and transform(A a1, B b) in my example below).
    - All these arguments implement the same interface, X.

    I would like to apply that transform method to various objects all implementing the X interface. What I came up with was to implement transform(X x1, X x2), which checks the instance of each object before applying the relevant variant of my transform. Though it works, the code seems ugly, and I am also concerned about the performance overhead of evaluating these various instanceof checks and casts. Is that transform the best I can do in Java, or is there a more elegant and/or efficient way of achieving the same behavior? Below is a trivial, working example printing out BA. I am looking for examples of how to improve that code. In my real code, I naturally have more implementations of transform, and none are trivial like below.

        public class A implements X { }

        public class B implements X { }

        interface X { }

        public A transform(A a1, A a2) {
            System.out.print("A");
            return a2;
        }

        public A transform(A a1, B b) {
            System.out.print("B");
            return a1;
        }

        // Isn't there something better than the code below???
        public X transform(X x1, X x2) {
            if ((x1 instanceof A) && (x2 instanceof A)) {
                return transform((A) x1, (A) x2);
            } else if ((x1 instanceof A) && (x2 instanceof B)) {
                return transform((A) x1, (B) x2);
            } else {
                throw new RuntimeException("Transform not implemented for "
                        + x1.getClass() + "," + x2.getClass());
            }
        }

        @Test
        public void trivial() {
            X x1 = new A();
            X x2 = new B();
            X result = transform(x1, x2);
            transform(x1, result);
        }
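
    The classic answer is double dispatch (a sketch, not a drop-in: like the post's code, it only covers A as the left-hand argument; a B left-hand side would need a transformWithB as well). Two virtual calls replace the instanceof ladder:

        interface X {
            X transform(X other);       // first dispatch: picks other's concrete type
            X transformWithA(A a);      // second dispatch: 'this' knows its own type
        }

        class A implements X {
            public X transform(X other) { return other.transformWithA(this); }

            public X transformWithA(A a) {      // corresponds to transform(A, A)
                System.out.print("A");
                return this;
            }
        }

        class B implements X {
            public X transform(X other) { return other.transformWithA(this); }

            public X transformWithA(A a) {      // corresponds to transform(A, B)
                System.out.print("B");
                return a;
            }
        }

        // Usage mirroring the post's trivial() test; prints "BA":
        //   X x1 = new A(); X x2 = new B();
        //   X result = x1.transform(x2);
        //   x1.transform(result);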

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users?

    EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time, MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways:

    - My tables are MyISAM. I heard they use table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry?
    - What if I take MySQL and move it to a dedicated machine with a lot of RAM?
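
    For the first option, the engine switch itself is one statement per table (a sketch with hypothetical schema/table names; try it on a copy first, and give InnoDB an explicit buffer pool size in my.cnf, e.g. innodb_buffer_pool_size = 2G, since its memory behaviour differs from MyISAM's key cache):

        -- Check which tables are still MyISAM.
        SELECT table_name, engine
        FROM information_schema.tables
        WHERE table_schema = 'drupal';

        -- Convert a table in place (repeat per table; 'node' is illustrative).
        ALTER TABLE node ENGINE = InnoDB;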

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, it always seems to become a problem to do the port forwarding so the outside world can gain access to Tomcat; most of the time the owner doesn't know the router password, etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:

    1. Set up an SSH tunnel from each client office to a central server. Basically, the remote devices would connect to that central server on a port, and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try to configure the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain how exactly they work.

    We have access to a centralized server, which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?
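
    Option 1 can be prototyped with plain OpenSSH (hypothetical hosts and ports; the central box needs GatewayPorts yes in its sshd_config for outside devices to reach the forwarded port):

        # Reverse tunnel from the office server to the central server:
        # devices hit central.example.com:8443, which forwards to office Tomcat.
        ssh -N -R 0.0.0.0:8443:localhost:8443 tunnel@central.example.com

        # autossh can supervise the tunnel and restart it when it drops.
        autossh -M 0 -N -R 0.0.0.0:8443:localhost:8443 tunnel@central.example.com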

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:

        t = timeit.repeat('g.get()', setup='g = my_generator()')

    So I dug into the timeit module and found that the setup and statement are evaluated in their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace. I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call

        pool.apply_async(runmc, arg)

    fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:

        self.predicate = lambda x, y: x > y

    so the whole object (understandably) can't be pickled. And whereas

        def foo(x, y):
            return x > y

        pickle.dumps(foo)

    is fine, the sequence

        bar = lambda x, y: x > y

    yields True from callable(bar) and looks like a normal function from type(bar), but pickling it fails with Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>. I've given only code fragments because I can easily fix these cases by merely pulling them out into module- or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements, I'll happily do so; I fear that I'm missing an essential concept, though. Why is there such a strong reliance on the global namespace? Or: what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!
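
    The standard workaround for the timeit case (a sketch; my_generator here is a stand-in for the real generator) is to import the needed names into timeit's private namespace from the setup string; Python 3.5+ also accepts a globals= argument:

        import timeit

        def my_generator():
            class G:
                def __init__(self):
                    self.n = 0
                def get(self):
                    self.n += 1
                    return self.n
            return G()

        # The setup string pulls my_generator into timeit's private namespace.
        t = timeit.repeat('g.get()',
                          setup='from __main__ import my_generator; g = my_generator()')
        print(min(t))

        # On Python 3.5+ this also works, without the import dance:
        # t = timeit.repeat('g.get()', setup='g = my_generator()', globals=globals())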

  • Is locking on the requested object a bad idea?

    - by Quick Joe Smith
    Most advice on thread safety involves some variation of the following pattern:

        public class Thing
        {
            private static readonly object padlock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                    }
                    return this.stuff;
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                    }
                    return this.andNonsense;
                }
            }

            // Rest of class...
        }

    In cases where the get operations are expensive and unrelated, a single locking object is unsuitable, because a call to Stuff would block all calls to AndNonsense, degrading performance. And rather than create a lock object for each call, wouldn't it be better to acquire the lock on the member itself (assuming it is not something that implements SyncRoot or some such for that purpose)? For example:

        public string Stuff
        {
            get
            {
                lock (this.stuff)
                {
                    // Pretend that this is a very expensive operation.
                    if (this.stuff == null)
                        this.stuff = "Still threadsafe and good?";
                }
                return this.stuff;
            }
        }

    Strangely, I have never seen this approach recommended or warned against. Am I missing something obvious?
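
    The usual alternative is one dedicated lock per independent field (a sketch of that pattern, not the only option; Lazy<T> covers the lazy-init case on .NET 4+). Note too that lock (this.stuff) throws an ArgumentNullException while stuff is still null, and the monitor changes identity once the field is reassigned:

        public class Thing
        {
            // One private lock per independent lazy field, so Stuff and
            // AndNonsense never block each other.
            private readonly object stuffLock = new object();
            private readonly object andNonsenseLock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (this.stuffLock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                        return this.stuff;
                    }
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (this.andNonsenseLock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                        return this.andNonsense;
                    }
                }
            }
        }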

  • Books for Computer Networking

    - by Altimet Gaandu
    Hi, I am a student of computer engineering at Vasula University, Somalia. We have a subject called Advanced Computer Networks, and the following is the list of recommended books.

    Text books:

    1. B. A. Forouzan, "TCP/IP Protocol Suite", Tata McGraw-Hill edition, Third Edition.
    2. N. Olifer, V. Olifer, "Computer Networks: Principles, Technologies and Protocols for Network Design", Wiley India Edition, First Edition.

    References:

    1. W. Richard Stevens, "TCP/IP", Volumes 1, 2, 3, Addison-Wesley.
    2. D. E. Comer, "TCP/IP", Volumes I and II, Pearson Education.
    3. W. R. Stevens, "Unix Network Programming", Vol. 1, Pearson Education.
    4. J. Walrand, P. Varaiya, "High Performance Communication Networks", Morgan Kaufmann.
    5. A. S. Tanenbaum, "Computer Networks", Pearson Education, Fourth Edition.

    But we have been unable to find these either in the market or on the internet (read: torrents). Please provide download links to any of these books and oblige. Thanks.

  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
    Often in R, there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task? For instance: the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's docstring. Likewise, the data-input arguments to plot can be supplied by an object returned from the function hexbin, again from a library outside of the base installation (where plot resides). What would be great, obviously, is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

    Edit (trying to re-state my example above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set the axis tick frequency via a function outside of the base package, as in the minor.tick function from the Hmisc package - but you wouldn't know that just from looking at the plot method signature. Is there a meta-function in R for this?

    So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time-consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way?

    Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed at the same statistic or view (e.g., boxplot in the base package, boxplot.matrix in gplots, and bplots in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.

  • Generating jQuery 'rules' from business model to UI in ASP.NET MVC

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business-logic 'level', and rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focussed jQuery methods that work identically at the client (JS) and server (C#) levels. I can see benefits here re performance. Also, the rule definitions could be created in a single place (in C#) and the jQuery generated off of that, thus allowing single edits to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of similar out there?? cheers jimi
