Search Results

Search found 2130 results on 86 pages for 'serve u'.

Page 72/86

  • Do you like languages that let you put the "then" before the "if"?

    - by Matt Hamilton
    I was reading through some C# code of mine today and found this line: if (ProgenyList.ItemContainerGenerator.Status != System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated) return; Notice that you can tell without scrolling that it's an "if" statement that works with ItemContainerGenerator.Status, but you can't easily tell that if the "if" clause evaluates to "true" the method will return at that point. Realistically I should have moved the "return" statement to a line by itself, but it got me thinking about languages that allow the "then" part of the statement first. If C# permitted it, the line could look like this: return if (ProgenyList.ItemContainerGenerator.Status != System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated); This might be a bit "argumentative", but I'm wondering what people think about this kind of construct. It might serve to make lines like the one above more readable, but it also might be disastrous. Imagine this code: return 3 if (x > y); Logically we can only return if x > y, because there's no "else", but part of me looks at that and thinks, "are we still returning if x <= y? If so, what are we returning?" What do you think of the "then before the if" construct? Does it exist in your language of choice? Do you use it often? Would C# benefit from it?
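
    For comparison, here is the guard clause from the post split onto its own line (a minimal sketch; the using alias is only added to shorten the namespace and is not part of the original code):

        using GeneratorStatus = System.Windows.Controls.Primitives.GeneratorStatus;

        if (ProgenyList.ItemContainerGenerator.Status != GeneratorStatus.ContainersGenerated)
        {
            return;
        }

    With the return on its own line, both the condition and the early exit are visible without horizontal scrolling.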

    Read the article

  • How to disable server-side caching on IIS 7.5 (ASP.NET MVC 3)

    - by troebr
    I'm struggling with my IIS setup regarding caching; here's a brief description of my problem: I'm making a site for mobile and non-mobile, sharing the same controllers, i.e. mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (Windows 2008 R2 / IIS 7.5) there is apparently an aggressive server-side caching policy: if I access the website from a desktop machine, I get the correct pages (desktop version). If I then use my mobile phone to access the site, I still get the desktop version (which implies a server-side cache - my phone is not on the same network). Conversely, if I restart the server and access the site from my phone first, I get the mobile version on my desktop (only for the pages I already visited, of course). I have tried two solutions so far: disabling OutputCache in my Web.config: <httpModules> [..] <remove name="OutputCache" /> </httpModules> and unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem on my other server (IIS 6.0), although caching is enabled there, which leads me to think it is related to the caching additions in IIS 7. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS insights!
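
    For reference, a sketch of the two places output caching is usually switched off for a site running in the IIS 7.x integrated pipeline (the module name and element placement are worth double-checking against the server's applicationHost.config; this is an assumption rather than a verified fix for the mobile/desktop mix-up):

        <system.webServer>
          <!-- turn off IIS output caching and kernel caching for this site -->
          <caching enabled="false" enableKernelCache="false" />
          <modules>
            <!-- in integrated mode the managed module is registered here, not under httpModules -->
            <remove name="OutputCache" />
          </modules>
        </system.webServer>

    The <httpModules> section quoted in the question only applies to the classic pipeline; in integrated mode the equivalent removal lives under <system.webServer><modules>.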

    Read the article

  • Refactoring or Rewriting Monolithic PHP Spaghetti Codebase

    - by nategood
    I've inherited a really poorly designed PHP spaghetti code project. It's been gaining a good bit of traffic recently and is starting to have performance issues on top of the poor monolithic code base. It's maxing out a chunky 16 GB dedicated machine when it really shouldn't be. I'm planning on doing some performance tweaks right off the bat to help the performance issue, but this still won't really help the horrible code base. The team is small but expecting to grow very soon. I've read Joel's article on the troubles of doing a complete rewrite and see the concerns. But how bad does the code base have to be before you consider a rewrite? There is PHP handling logic interjected into what one would usually consider a "view". Even worse, in some places SQL statements are in these same files! The only real separation of presentation and logic is a few PHP scripts that serve as function libraries. These scripts do most of the ORM stuff... if you can even call it that. Trying to slowly refactor this seems like a nightmare. Open to your thoughts and opinions... however not interested in hearing, "Run away, Run away!".
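
    To make the incremental option concrete, a small sketch of the kind of extraction that chips away at the problem without a full rewrite (the table, column and function names here are hypothetical):

        <?php
        // before: the "view" file ran SQL inline
        // $rows = mysql_query("SELECT * FROM products WHERE featured = 1");

        // after: the view only calls into the shared function library
        function get_featured_products(PDO $db) {
            // all SQL for this feature lives in one place, so it can be tested and reused
            $stmt = $db->query('SELECT * FROM products WHERE featured = 1');
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }

    Each view converted this way shrinks the spaghetti a little while the site keeps running.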

    Read the article

  • How to deploy a single webapp with multiple web-modules that may be removed or added individually

    - by Daniel Bleisteiner
    We currently run two separate webapps (WARs) deployed in one single EAR containing additional JARs and settings. To improve our deployment I want to split one of these webapps into different modules that may be built and packaged individually. But I currently have no clue how to package these modules so that I'm able to add or remove them as desired - at best during runtime. The webapp is getting more and more complex and I'd like to separate some of the functionality into modules. These modules should be packaged as single archives. As long as they contain only classes and resources loaded through code I know how to do this (simple JARs). But what about JSPs? Normally a WAR file contains JSPs or HTML files. In my case they are JSF pages utilizing JBoss Seam and RichFaces. These modules will add classes, resources, JSF pages and other includes to the running web application. Is it somehow possible to deploy them as individual archives to serve the same running webapp? We are using Maven for our build and packaging and deploy into JBoss v4.
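
    One packaging approach worth a look, since Maven is already in place (a sketch; the artifact names are hypothetical, and note that this merges modules at build time rather than adding or removing them at runtime): each module is built as its own WAR containing only its JSF pages and classes, and the main WAR pulls them in as overlays.

        <!-- main-webapp/pom.xml -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>orders-module</artifactId>
          <version>1.0</version>
          <type>war</type>
        </dependency>

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <overlays>
              <overlay>
                <groupId>com.example</groupId>
                <artifactId>orders-module</artifactId>
              </overlay>
            </overlays>
          </configuration>
        </plugin>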

    Read the article

  • Model class for NSDictionary information with Lazy Loading

    - by samfu_1
    My application utilizes approx. 50+ .plists that are used as NSDictionaries. Several of my view controllers need access to the properties of the dictionaries, so instead of writing duplicate code to retrieve the .plist, convert the values to a dictionary, etc., each time I need the info, I thought a model class to hold the data and supply information would be appropriate. The application isn't very large, but it does handle a good deal of data. I'm not as skilled in writing model classes that conform to the MVC paradigm, and I'm looking for some strategies for this implementation that also support lazy loading. This model class should serve to supply data to any view controller that needs it and perform operations on the data (such as adding entries to dictionaries) when requested by the controller. Functions currently planned: returning the count of any dictionary; adding one or more dictionaries together. Currently, I have this method for supporting the count lookup for any dictionary. Would this be an example of lazy loading?
        -(NSInteger)countForDictionary: (NSString *)nameOfDictionary {
            NSBundle *bundle = [NSBundle mainBundle];
            NSString *plistPath = [bundle pathForResource: nameOfDictionary ofType: @"plist"];
            // load plist into dictionary
            NSMutableDictionary *dictionary = [[NSMutableDictionary alloc] initWithContentsOfFile: plistPath];
            NSInteger count = [dictionary count];
            [dictionary release];
            return count;
        }
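
    For contrast, a minimal sketch of a lazily cached lookup, assuming the model object has an NSMutableDictionary property named cache (that property, and the method names, are assumptions): each plist is loaded only the first time it is asked for and reused afterwards.

        - (NSDictionary *)dictionaryNamed:(NSString *)name {
            if (self.cache == nil) {
                self.cache = [NSMutableDictionary dictionary];   // assumed retained property
            }
            NSDictionary *dictionary = [self.cache objectForKey:name];
            if (dictionary == nil) {
                // loaded on first request only, then kept in the cache
                NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"plist"];
                dictionary = [NSDictionary dictionaryWithContentsOfFile:path];
                if (dictionary != nil) {
                    [self.cache setObject:dictionary forKey:name];
                }
            }
            return dictionary;
        }

        - (NSInteger)countForDictionary:(NSString *)nameOfDictionary {
            return [[self dictionaryNamed:nameOfDictionary] count];
        }

    The method in the question reloads the plist from disk on every call, so it is closer to on-demand loading than to lazy loading with a cache.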

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try to highlight areas worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:
    Network Utilization: Combine external CSS (red warning); Combine external JavaScript (red warning); Enable gzip compression (red warning); Leverage browser caching (red warning); Leverage proxy caching (amber warning); Minimise cookie size (amber warning); Parallelize downloads across hostnames (amber warning); Serve static content from a cookieless domain (amber warning).
    Web Page Performance: Remove unused CSS rules (amber warning); Use normal CSS property names instead of vendor-prefixed ones (amber warning).
    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • MySQL random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you only read the title. And if you don't know the solution, don't answer - I won't have to downvote you. I'm entertaining the idea of getting random rows directly from MySQL. What I found was SELECT * FROM tablename WHERE somefield='something' ORDER BY RAND() LIMIT 5 but even I can see how slow that would be. Is the only way to do this something like SELECT * FROM tablename WHERE somefield='something' LIMIT RAND(autoincrementvalue-5), 1 five times? Or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want random indexes. I hate the idea of them...) @commenters - please first look, then think, then look again, think again and then post. I won't point fingers, but I dislike stupid comments. And why do I think random indexes are a nasty hack? They don't give you random results: they give you x results from a random index in a predefined order. It's like a gapless id, only in the wrong order, and if you fetch one row at a time to get true randomness you fall back to my method, but with an additional junk field. Finally, that field exists only to serve as a helper for something that can be done without it at almost the same performance (but with better randomness), so it is a nasty hack ;) I solved it, look @ my answer... if you think it's incorrect please tell me :)
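
    For comparison, one common workaround is to pick a random offset instead of sorting the whole result set with ORDER BY RAND() (a sketch, assuming MySQL 5.0.7+ so that LIMIT accepts a placeholder in a prepared statement; it still scans up to the offset, but skips the full sort):

        SELECT COUNT(*) INTO @cnt FROM tablename WHERE somefield = 'something';
        SET @off = CAST(FLOOR(RAND() * @cnt) AS UNSIGNED);
        PREPARE stmt FROM 'SELECT * FROM tablename WHERE somefield = ''something'' LIMIT ?, 1';
        EXECUTE stmt USING @off;
        DEALLOCATE PREPARE stmt;

    Re-rolling @off and re-running the EXECUTE gives additional rows, accepting that duplicates are possible.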

    Read the article

  • Running Different Modules on Tomcat/Server

    - by umesh awasthi
    Hi all, I have started working on my own project idea, but right at the start I am stuck on a structural decision - maybe lack of knowledge is the primary factor. I am wondering how to make two different modules co-operate on the server. Here is an explanation of what I am trying to ask. As per my design I need two modules for my application: 1. A back end where I can do all content handling as well as other admin tasks - a console capable of handling everything from creating and importing content onwards. 2. A user end which is only an interface for the end user to use the application. To visualize it, it's kind of an e-commerce application: one part is back-office management and the other is the user end of the web shop. These two modules will be very much interlinked, but I don't want them to mix: I want to develop them independently, since the admin part is the core and it will also serve the user interface. My question is how I can develop the two modules independently, but on the other hand have them co-operate to accomplish the task as a whole. Really sorry in advance if I am not making any sense. Thanks in advance
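
    One way to keep the two halves separate while sharing the core logic (a sketch, assuming a Maven-style multi-module build; the module names are made up): both front ends are packaged as their own web applications, and everything they share lives in a common JAR they both depend on.

        <!-- parent pom.xml -->
        <modules>
          <module>shop-core</module>   <!-- shared domain model and services, packaged as a JAR -->
          <module>shop-admin</module>  <!-- back-office console WAR, depends on shop-core -->
          <module>shop-web</module>    <!-- customer-facing WAR, depends on shop-core -->
        </modules>

    The two WARs can then be deployed to the same Tomcat under different context paths and still evolve independently.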

    Read the article

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could:
        svn cp guac superguac
        svn ci -m "Created superguac by copying guac"
        (edit superguac)
        svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        svn ci -m "Fixed a typo in guac"
        svn merge -r3:4 guac superguac
    and thus the typo fix would be applied to superguac. Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, and I carefully only edit a single file in the commit I want to use in the merge:
        hg cp guac superguac
        hg ci -m "Created superguac by copying guac"
        (edit superguac)
        hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        hg ci -m "Fixed a typo in guac"
    I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?
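
    For what it's worth, a hedged sketch of one way to carry that single change across, assuming the typo fix is revision 3 and that redirecting the patch at the copied file is acceptable:

        hg diff -c 3 guac > typofix.patch      # just the change made to guac in revision 3
        patch superguac < typofix.patch        # apply the same hunk to the copied file
        hg ci -m "Applied the guac typo fix to superguac"

    This sidesteps Mercurial's merge machinery entirely, so it loses the "merged from" history that svn merge would have recorded.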

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services; denormalize the database and use Redis on each webserver, with Booksleeve as the Redis client; use a combination of SQL Server and Redis; use SQL Server 2012 together with Hadoop; or use Hadoop without SQL Server. What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table against which all queries can be performed, with all other information from the other tables available by a simple key lookup.

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more, but they are not important in this case): starttime, the start time from which you can contact the provider; endtime, the final hour at which you can contact him; and region_id, the region where the provider resides - in the USA: California, Texas, etc.; in the UK: England, Scotland, etc. starttime and endtime are time without time zone columns but, "indirectly", their values carry the time zone of the region in which the provider resides. For example:
        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00
    Often I need to get the list of providers whose time range contains the current server time (taking the time zone conversion into account). The problem is that the time zones aren't "constant", i.e. they may change during summer time. However, this change is very specific to the region and is not always carried out at the same moment: EGT <-> EGST, ART <-> ARST, etc. The questions are: 1. Is it necessary to use a web service to update the regions' time zones every so often? Does anyone know of a web service that would serve? 2. Is there a better approach to this problem? Thanks in advance. UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I find these records:
        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)
    If I execute the query in January - server time (UTC offset) = 0 hours, Texas providers (UTC offset) = +1 hour, server time = 02:00:00 - I should get the following result: idproviders = 1. If I execute the query in June - server time (UTC offset) = 0 hours, Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone offset has), server time = 02:00:00 - I should get the following results: idproviders = 1 and 2.
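
    One approach that avoids a web service entirely: store the IANA zone name for each region (e.g. 'America/Chicago') and let the database's own time zone data handle the DST switches. A sketch, assuming PostgreSQL (suggested by the time without time zone type) and a hypothetical regions table with a tz_name column:

        SELECT p.idproviders
        FROM providers p
        JOIN regions r ON r.id = p.region_id
        WHERE (now() AT TIME ZONE r.tz_name)::time BETWEEN p.starttime AND p.endtime;

    Keeping the operating system or database tzdata packages updated then replaces the periodic web-service lookup.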

    Read the article

  • Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages

    - by lostincode
    Can someone please explain to me what is going on with Python in Ubuntu 9.04? I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with Ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages. I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/? The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs. So my questions are: How do I get virtualenv to point to one of the dist-packages? Which dist-packages should I point it to, /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/? What is the point of /usr/lib/python2.6/site-packages? There's nothing in there! Is it first come, first served on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and an older one (from the Ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes? Why the hell is this so confusing? Is there something I'm missing here? Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages? Will this affect pip as well? Thanks to anyone who can clear this up!
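
    A quick way to see which directories a given interpreter or virtualenv actually searches (a sketch; the exact output will differ per machine):

        $ /usr/bin/python2.6 -c "import sys; print '\n'.join(sys.path)"
        $ virtualenv --no-site-packages venv
        $ venv/bin/python -c "import sys; print '\n'.join(sys.path)"   # should not list the system dist-packages

    Comparing the two listings shows whether --no-site-packages actually took effect and which of the dist-packages/site-packages directories wins for a given import.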

    Read the article

  • Telerik RadGrid: grid client-side pagination

    - by ram
    I have a web service which returns me some data. I am massaging this data and using it as the datasource for my RadGrid (Telerik). The datasource is quite large, and I would like to paginate it. I found a couple of problems when I paginate it on the server side: I have to bind the grid again for pagination, which essentially means I have to make a call to the web service again to get the data. This is an expensive call for me. I would rather forgo the benefits of pagination and display all the results on the same page, except that it would be a bit clumsy. During the postback RadGrid1.Items.Count happens to be the number of items getting paginated (25 in my case), which is expected, as not all the items in the datasource get bound. This of course is not an issue. The real issue is that we have some checkboxes which get checked based on some business condition. We add this to our business object/DB later. So if the user has not navigated all the pages, these "checked" items do not get added, as pagination limits the "Items" in the grid to those which get bound for that particular page index. My thoughts: I would rather have some sort of client-side pagination, where we can hide/show contents, than go to the server and do a databind every time. Though it will return all the results, the UI will not be clumsy and the grid would have "all the items" during postback. Is there a way to do it? If it were a regular ASP.NET GridView, can someone point me to a good article which would serve my purpose? Ram PS: who else thinks RadGrid is crazy? (unfortunately I did not make this choice)

    Read the article

  • How to catch a non-existent requested URL in a Java servlet?

    - by Frank
    My objects are stored online in two different places: 1. On my nmjava.com site, where I can put them in a directory called "Dir_My_App/Dir_ABC/". 2. In the Google App Engine datastore. When my Java app runs it checks both places for the objects. I designed the app so that it tries to get an object from a URL; it doesn't care whether it's an object in a directory or an object returned by a servlet. My_Object Get_Object(String Site_Url,String Object_Path) { ... get object by the name of Object_Path from the Site_Url ... } Now the request URL for my web site nmjava.com might look like this: http://nmjava.com/Dir_My_App/Dir_ABC/My_Obj_123 [ in a directory ], or in the case of the Google App Engine servlet: http://nm-java.appspot.com/Check_License/Dir_My_App/Dir_ABC/My_Obj_123 [ non-existent ]. The "Object_Path" was generated by my app automatically. It can now get the object from my site by the above method like this: My_Object Get_Object("http://nmjava.com","/Dir_My_App/Dir_ABC/My_Obj_123"); In Google App Engine, my servlet is running and ready to serve the object if the request comes in correctly, but since I don't want to design my app to know whether the object is in one site's directory or in the other site's datastore, I need to design the servlet to catch the non-existent URL, such as the one above, and be able to make a call: My_Object Get_Object("http://nm-java.appspot.com/Check_License","/Dir_My_App/Dir_ABC/My_Obj_123"); So my question is: when a request comes into the servlet with a non-existent URL, how should it catch it and analyze the URL in order to respond properly? In my case it should know that http://nm-java.appspot.com/Check_License/Dir_My_App/Dir_ABC/My_Obj_123 is asking for the object "My_Obj_123" [ ignore the dirs ] and return the object from the datastore. Right now I'm getting this: Error: Not Found. The requested URL /Check_License/Dir_My_App/Dir_ABC/My_Obj_123 was not found on this server. Where in my servlet, and how, do I detect the request for this non-existent URL?
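
    A minimal sketch of how the servlet can see the rest of the requested path, assuming it is mapped to /Check_License/* in web.xml (the mapping and the class name here are assumptions):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CheckLicenseServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // for /Check_License/Dir_My_App/Dir_ABC/My_Obj_123 this is "/Dir_My_App/Dir_ABC/My_Obj_123"
                String path = req.getPathInfo();
                String objectName = path.substring(path.lastIndexOf('/') + 1);   // "My_Obj_123"
                // look objectName up in the datastore and stream it back, or send 404 if it is unknown
            }
        }

    With a /Check_License/* mapping, the container routes every path under that prefix to this servlet instead of returning its own "Not Found" page.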

    Read the article

  • The Current State Of Serving a PHP 5.x App on the Apache, LightTPD & Nginx Web Servers?

    - by Gregory Kornblum
    Being stuck in an MS-stack architecture/development position for the last year and a half has kept me from staying on top of the recent evolution of open source web servers as much as I would have liked. However, I am now building an open source stack based application/system architecture, and sadly I do not have the time to give each of the above-mentioned web servers a thorough test of my own before deciding. So I figured I'd get input from the best development community site and, more specifically, the people who make it so. This is a site that is a resource for information regarding a specific domain and target audience, with features to help users not only find the information but also interact with one another in various ways for various reasons. I chose the open source stack for the wealth of resources it has, along with much better offerings than the MS stack (i.e. WordPress vs BlogEngine.NET). I feel Java sits in the middle of these stacks in this regard, although I am not ruling out the possibility of using it in certain areas unrelated to the actual web app itself, such as background processes. I have already settled on PHP (using the CodeIgniter framework and APC), MySQL (InnoDB) and Memcached on CentOS. I am definitely serving static content with Nginx. However, the three servers mentioned have no consensus on which is best for dynamic content in regards to performance. It seems lighttpd still has the memory-leak issue, which rules it out if that's the case; Nginx seems not yet mature enough for this role; and of course Apache tries to be everything for everybody. I am still going to compile the one chosen with as many performance tweaks as possible, such as static linking and the like. I believe I can get Apache to match the other two in regards to serving dynamic content through this process, and not have it serve anything static. However, during my research it seems the others are still worth considering. So with all things considered I would love to hear what everyone here has to say on the matter. Thanks!
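
    For context, a minimal sketch of the split being described - Nginx serving the static files itself and handing PHP requests to a FastCGI backend (assuming PHP-FPM listening on 127.0.0.1:9000; the paths are placeholders):

        server {
            listen 80;
            root /var/www/site;

            location /static/ {
                expires max;                       # static files served straight from Nginx
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;       # dynamic content handled by the PHP pool
            }
        }

    The same FastCGI pool can sit behind Apache or lighttpd instead, which keeps any dynamic-content benchmark an apples-to-apples comparison.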

    Read the article

  • How to create a new ID for the newly added node?

    - by marknt15
    Hi, I can normally get the ID of the default tree nodes, but my problem is that on create jsTree will add a new node which doesn't have an ID. My question is: how can I add an ID to the newly created tree node? What I'm thinking of doing is adding the ID attribute to the newly created tree node, but how? I need to get the ID of all of the nodes because it serves as a reference to each node's respective div storage. HTML code: <div class="demo" id="demo_1"> <ul> <li id="phtml_1" class="file"><a href="#"><ins>&nbsp;</ins>Root node 1</a></li> <li id="phtml_2" class="file"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li> </ul> </div> JS code: $("#demo_1").tree({ ui : { theme_name : "apple" }, callback : { onrename : function (NODE, TREE_OBJ) { alert(TREE_OBJ.get_text(NODE)); alert($(NODE).attr('id')); } } }); Cheers, Mark
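
    A hedged sketch of the idea, assuming this jsTree version exposes an oncreate callback that receives the new NODE the way onrename does (the callback name and its exact signature are assumptions and should be checked against the plugin's documentation; the id scheme is made up):

        $("#demo_1").tree({
            ui : { theme_name : "apple" },
            callback : {
                oncreate : function (NODE, REF_NODE, TYPE, TREE_OBJ) {
                    // give the new node a client-side id so it can be matched to its div storage
                    $(NODE).attr('id', 'new_' + new Date().getTime());
                },
                onrename : function (NODE, TREE_OBJ) {
                    alert(TREE_OBJ.get_text(NODE));
                    alert($(NODE).attr('id'));
                }
            }
        });

    Once the node is persisted on the server, the temporary id can be swapped for the real one returned by the backend.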

    Read the article

  • EFv1 mapping 1 to many Relationship to POCOs

    - by Scott
    I'm trying to work through a problem where I'm mapping EF entities to POCOs which serve as DTOs. I have two tables within my database, say Products and Categories. A Product belongs to one category and one category may contain many Products. My EF entities are named efProduct and efCategory. Within each entity there is the proper navigation property between efProduct and efCategory. My POCO objects are simple: public class Product { public string Name { get; set; } public int ID { get; set; } public double Price { get; set; } public Category ProductType { get; set; } } public class Category { public int ID { get; set; } public string Name { get; set; } public List<Product> products { get; set; } } To get a list of products I am able to do something like public IQueryable<Product> GetProducts() { return from p in ctx.Products select new Product { ID = p.ID, Name = p.Name, Price = p.Price, ProductType = p.Category }; } However there is a type mismatch error because p.Category is of type efCategory. How can I resolve this? That is, how can I convert p.Category to type Category? I know EF added support for POCOs in .NET 4, but I'm forced to use .NET 3.5 SP1.
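
    One way around the mismatch is to project the nested DTO as well (a sketch; LINQ to Entities can usually materialize nested initializers of non-entity types like this, but it is worth verifying against EF v1 and this particular model):

        public IQueryable<Product> GetProducts()
        {
            return from p in ctx.Products
                   select new Product
                   {
                       ID = p.ID,
                       Name = p.Name,
                       Price = p.Price,
                       ProductType = new Category      // map efCategory to the Category DTO by hand
                       {
                           ID = p.Category.ID,
                           Name = p.Category.Name
                       }
                   };
        }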

    Read the article

  • Multiple webroot folders with Jetty

    - by Lóránt Pintér
    I'm using Jetty (version 6.1.22) to service a Java web application. I would like to make Jetty look in two different folders for web resources. Take this layout:
        +- project1
        |  +- src
        |     +- main
        |        +- webapp
        |           +- first.jsp
        +- project2
           +- src
              +- main
                 +- webapp
                    +- second.jsp
    I would like to make Jetty serve both URLs: http://localhost/web/first.jsp and http://localhost/web/second.jsp. I tried starting Jetty like this:
        Server server = new Server();
        SocketConnector connector = new SocketConnector();
        connector.setPort(80);
        server.setConnectors(new Connector[] { connector });

        WebAppContext contextWeb1 = new WebAppContext();
        contextWeb1.setContextPath("/web");
        contextWeb1.setWar("project1/src/main/webapp");
        server.addHandler(contextWeb1);

        WebAppContext contextWeb2 = new WebAppContext();
        contextWeb2.setContextPath("/web");
        contextWeb2.setWar("project2/src/main/webapp");
        server.addHandler(contextWeb2);

        server.start();
    But it only serves first.jsp, and it returns 404 for second.jsp. How can I get this to work?
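
    One thing that may help here is giving a single WebAppContext more than one resource directory, so both folders are served under the same /web context (a sketch, assuming Jetty 6's org.mortbay.resource.ResourceCollection; worth testing against 6.1.22):

        // requires org.mortbay.resource.ResourceCollection (Jetty 6)
        WebAppContext context = new WebAppContext();
        context.setContextPath("/web");
        context.setBaseResource(new ResourceCollection(new String[] {
            "project1/src/main/webapp",   // first.jsp lives here
            "project2/src/main/webapp"    // second.jsp lives here
        }));
        server.setHandler(context);

    Registering two contexts on the same context path, as in the question, generally leaves only one of them answering requests, which matches the 404 seen for second.jsp.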

    Read the article

  • Switch front-ends of a website after X amount of hits

    - by Derek Adair
    Sorry about the title - not sure what to call this one. A client of mine would like to redirect users to different front-ends of his eCommerce site based on a hit counter (possibly a timer?). Important: the content is moderately different in the two sites - enough to consider them two different websites - and knowing this client he will likely add more drastic content changes and other front-ends later, so for this question treat the front-ends as separate sites. The site has a rather large back-end, with affiliate networking, multiple payment gateways, order tracking, and several other features in the works. It is essential that these two front-ends have identical back-end functionality. I know that if it was just a simple CSS swap this would be as simple as an if statement that ran off some kind of counter stored in a DB... but the different HTML markup is throwing me for a loop. Q: How can I serve two different front-ends (HTML/CSS) based on a hit counter? Also, I don't have any clue what to tag this one as...
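
    A minimal sketch of the counter-driven switch, assuming PHP with PDO and a hypothetical counters table (the site's actual stack isn't stated in the question); both template sets render on top of the same back-end code, so only the markup and CSS differ:

        <?php
        // bump the hit counter and pick a front-end based on its value
        $db->exec("UPDATE counters SET hits = hits + 1 WHERE name = 'frontend_switch'");
        $hits = (int) $db->query("SELECT hits FROM counters WHERE name = 'frontend_switch'")->fetchColumn();

        $frontend = ($hits < 10000) ? 'templates/site_a' : 'templates/site_b';
        require "$frontend/layout.php";   // same controllers and models, different HTML/CSS

    Swapping the threshold for a timestamp comparison gives the timer variant mentioned in the question.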

    Read the article

  • Using deprecated binders and C++0x lambdas

    - by Sumant
    C++0x has deprecated the use of old binders such as bind1st and bind2nd in favor of the generic std::bind. C++0x lambdas bind nicely with std::bind, but they don't bind with the classic bind1st and bind2nd because by default lambdas don't have nested typedefs such as argument_type, first_argument_type, second_argument_type, and result_type. So I thought std::function could serve as a standard way to bind lambdas to the old binders because it exposes the necessary typedefs. However, std::function is hard to use in this context because it forces you to spell out the function type while instantiating it. auto bound = std::bind1st(std::function<int (int, int)>([](int i, int j){ return i < j; }), 10); // hard to use auto bound = std::bind1st(std::make_function([](int i, int j){ return i < j; }), 10); // nice to have but does not compile. I could not find a convenient object generator for std::function. Something like std::make_function would be nice to have. Does such a thing exist? If not, is there any other better way of binding lambdas to the classic binders?
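
    For reference, the replacement the deprecation points at: fixing the lambda's first argument directly with std::bind and a placeholder, which needs none of the argument_type typedefs (a minimal sketch):

        #include <functional>

        int main() {
            using namespace std::placeholders;

            // equivalent of bind1st: fix the first argument to 10, leave the second open
            auto bound = std::bind([](int i, int j) { return i < j; }, 10, _1);

            bool b = bound(20);   // true, since 10 < 20
            return b ? 0 : 1;
        }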

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long story short: How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions? The whole story: I once built a number crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors that allows bidirectional communication (for collecting the query results). An asynchronous approach could help here, in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails me in forming a picture clear enough that I can turn it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to make to a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me, only one that doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?

    Read the article

  • As an Agile Java developer, what should I be looking for when hiring a C++ developer?

    - by agoudzwaard
    I come from an effective team of Agile Java developers. We've had a lot of success in hiring more people like ourselves - people passionate about technology with experience primarily in the Agile Java/J2EE space. We're looking to hire our first C++ developer to serve as an on-shore resource for maintaining and adding to the C++ portion of our code base. Up until now the entirety of our C++ development has been done out of an off-shore location. We consider our interview process to be fairly thorough: A phone screen centered on Object-Oriented Programming and Java A non-trivial at-home code project using Java An in-person interview covering technical and behavioral competency We look for a demonstration of Agile best practices (expressive code, test-driven development, continuous integration) throughout the entire process, however there is a common conception that Agility is primarily practiced by Java developers. If we retrofit our interview process for C++, should we still expect Agile qualities when interviewing for a C++ role? I'm asking on behalf of a team that has worked with Java too long to know what a good C++ developer looks like. Specifically we're looking to answer the following questions: Can we expect a demonstrated understanding of OO design and Separation of Concerns? In the code project we want the candidate to write unit tests. Would a good C++ developer be surprised by this expectation? Are there any "extra" competencies we can look for? For example with Java developers we always look for a familiarity with Dependency Injection.

    Read the article

  • Can I create an application using SlimDX relying on the DirectX DLLs already bundled with Vista/Win7

    - by norheim.se
    I'm working on a .NET application that requires the use of accelerated graphics, currently DirectX 9.0c. The software is quite graphics intensive and must, in addition, be launchable from a CD or by ClickOnce without the user requiring administrator's permissions. I currently use SlimDX, which works great featurewise, but the users are getting rather annoyed by having to install the DirectX redistributable. Especially since this does require elevated permissions. It is rather hard to explain to them why the version of DirectX already bundled with their OS is not sufficient. After all - DirectX 9.0c has been around since 2004, and I'm not using any new fancy features. The ability to deliver an application that "just works" in Vista or Windows 7, without any particular additional prerequisites would be a huge advantage. Therefore: Is there any way I can build an application using DirectX 9.0c based on SlimDX, that relies only on the libraries provided in the standard Windows Vista/Win7 installation? That is - without requiring the additional DirectX redistributable to be installed? If not - is there any other managed (and preferably not abandoned) DirectX wrapper that can serve this purpose? Thanks in advance!

    Read the article

  • How to Set up Virtual Static Subdomain

    - by Chip D
    Given the current rewrite rules at http://www.example.com/:
        Options +FollowSymlinks +Includes
        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^/?(.*) http://www.example.com/$1 [L,R,NE]
        # Remove all "index.html"s.
        RewriteCond %{THE_REQUEST} \ /(.+/)?index\.html(\?.*)?\ [NC]
        RewriteRule ^(.+/)?index\.html$ /%1 [R=301,L]
        # Remove all "index.php"s.
        RewriteCond %{THE_REQUEST} \ /(.+/)?index\.php(\?.*)?\ [NC]
        RewriteRule ^(.+/)?index\.php$ /%1 [R=301,L]
    I'm attempting to serve some site assets (.png|.ico|.jpg|.gif|.css|.js) from a static subdomain like http://static.example.com, which my Apache 1.3 shared host (GoDaddy) has mapped to a subdirectory for file management (http://www.example.com/static/). Currently these assets live in same-site subdirectories such as images/, js/, css/, etc. (1) Something...maybe the "+www" rewrite rule?...is not letting me access the subdomain. Is this likely caused by my rewrite rules above, or do I need to set up DNS changes with the host to enable access to the subdirectory, in addition to changing the rewrite rules? (2) Do I have to move those assets to that subdirectory and change all references sitewide? Would that perform the fastest? (3) Or can .htaccess make this much easier? (4) Should my first rewrite rule above include [L,R=301,NE] instead of [L,R,NE]?
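
    On question (1): the first rule redirects every host that is not exactly www.example.com, which would catch static.example.com as well. A hedged sketch of the usual exclusion (the subdomain's DNS/host mapping still has to resolve for this to matter):

        RewriteCond %{HTTP_HOST} !^(www|static)\.example\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^/?(.*) http://www.example.com/$1 [L,R,NE]

    Asset references would then point at http://static.example.com/... regardless of whether the files physically live in the /static/ subdirectory.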

    Read the article

  • Serving static media in django application

    - by Ed
    I notice that when I reference my JavaScript files and static image files from my templates, they show up in development but not on the production server. In development, I access them as such: <img src="/my_proj/media/css/images/collapsed.png" /> but in production, I have to remove the project directory: <img src="/media/css/images/collapsed.png" /> I'm assuming I'm doing something wrong with regard to serving static media. I'm caught between a number of seemingly different options for serving static media in Django. On one hand, it's been recommended that I use django-staticfiles to serve media. On the other I see references to STATIC_ROOT and STATIC_URL in the documentation (with caveats about use in production). I have small .png files of "plus" and "minus" symbols for use in some of my jQuery scripts. In addition, the scripts themselves need to be referenced. 1) Am I correctly categorizing scripts and site images as static media? 2) What is the best method to access this media (from production)?
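
    A sketch of the settings-driven way to keep the prefix out of templates (assuming Django 1.3+ with the staticfiles app, or the django-staticfiles package which uses the same setting names; the paths are placeholders):

        # settings.py
        import os

        PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

        STATIC_URL = '/media/'                        # what the browser requests, in dev and production
        STATICFILES_DIRS = (
            os.path.join(PROJECT_ROOT, 'media'),      # where the css/, js/, images/ folders actually live
        )

    Templates can then reference {{ STATIC_URL }}css/images/collapsed.png (assuming the static context processor is enabled), so the same template works whether or not the project directory appears in the URL.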

    Read the article
