Search Results

Search found 7104 results on 285 pages for 'dynamic usercontrols'.

  • DonXml does WCF in NYC

    - by gsusx
    Tomorrow is WCF day in New York City!!!!! My good friend and Tellago's CTO Don Demsak will be doing a session on WCF Data and RIA Services at the WCF firestarter event to be hosted at the Microsoft offices in New York City. Don has an encyclopedic knowledge of both technologies and will be sharing lots of best practices learned from applying them in large service-oriented environments. In addition to Don, my crazy Cuban friend Miguel Castro will also be presenting three sessions at the...(read more)

    Read the article

  • Two domains, two servers, one dynamic IP address

    - by giantman
    I have two domains, hi.org and bye.net, one dynamic IP address, and two servers. I want to attach the domain bye.net to server1 and hi.org to server2. I'm using Apache (WAMP 2.0i). The two servers are behind one router with a dynamic IP address.

        #httpd.conf file additions
        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

        #vhost file additions
        NameVirtualHost *:80

        #default
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www/fallback"
        </VirtualHost>

        # Server 1
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www"
            ServerName h**p://bye.net
            ServerAlias bye.net
        </VirtualHost>

        # Server 2
        <VirtualHost *:80>
            ProxyPreserveHost On
            ProxyPass / h**p://192.168.1.119/
            DocumentRoot "g:/wamp/www"
            ServerName h**p://hi.org
            ServerAlias hi.org
        </VirtualHost>

    After doing all this, everything falls back to server1: I don't get the hi.org page, I only get the bye.net page, and I don't even get the default fallback page that should be served when someone enters the IP address rather than a domain name. I use Windows 7 (server 2) and Windows XP (server 1). UPDATE: I needed to remove the DocumentRoot "g:/wamp/www" line :D it was there by mistake! Things are working fine now. But one thing: the URL gets replaced by the local IP address - any way to keep that from happening?

    Read the article

  • C# 4.0 in a Nutshell, Fourth Edition

    - by outcoldman
    Just became a lucky owner of this book: C# IN A NUTSHELL, 4th edition. This is the fourth edition in the series. I saw the previous, third edition of this book; we presented it at one of our events at Yaroslavl State University, but that was a Russian translation published in Russia, and the bad side of that book was that books printed in Russia use really poor paper. I should say that I haven't read this book to the end yet, but I have already been surprised. Why? Because I have heard a lot about Richter's CLR via C# (I already have the English version of the 3rd edition, and it is waiting for my attention), and only a few words about C# IN A NUTSHELL, at least in my circles. I only heard about this book once, on one of the Alt.Net group podcasts, and the words were: Richter is a really good book, and C# IN A NUTSHELL is a good handbook. My opinion is: you should read Richter if you want to develop with .NET. But if you want to develop on .NET with C#, you should read C# IN A NUTSHELL too. Read more...

    Read the article

  • SQL Server 2012 : Changes to system objects in RC0

    - by AaronBertrand
    As with every new major milestone, one of the first things I do is check out what has changed under the covers. Since RC0 was released yesterday, I've been poking around at some of the DMV and other system changes. Here is what I have noticed: New objects in RC0 that weren't in CTP3 Quick summary: We see a bunch of new aggregates for use with geography and geometry. I've stayed away from that area of programming so I'm not going to dig into them. There is a new extended procedure called sp_showmemo_xml....(read more)

    Read the article

  • Tellago Devlabs: A RESTful API for BizTalk Server Business Rules

    - by gsusx
    Tellago DevLabs keeps growing as the primary example of our commitment to open source! Today, we are very happy to announce the availability of the BizTalk Business Rules Data Service API which extends our existing BizTalk Data Services solution with an OData API for the BizTalk Server Business Rules engine. Tellago’s Vishal Mody led the implementation of this version of the API with some input from other members of our technical staff. The motivation The fundamental motivation behind the BRE Data...(read more)

    Read the article

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example, something like this would be perfect:

        <VirtualHost *:80>
            AssignUserId $domain webspaces
            ServerName $subdomain.$domain.$tld
            ServerAdmin admin@$domain.$tld
            DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain"
            <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain">
                ....
            </Directory>
            php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain"
        </VirtualHost>

    $subdomain, $domain, and $tld would be extracted from the HTTP_HOST variable using a regex at request time. No more loads of configuration, no more Apache reloading every x minutes, no more stupid logic. Notice that I use mpm-itk (the AssignUserId directive) so each virtual host runs as a different user; I do not intend to change this part. So far I have tried:
    - mod_vhost_alias, but this allows dynamic configuration of only the document root.
    - mod_macro, but this still requires the arguments of the vhost to be declared explicitly for each vhost.
    - I have read about mod_vhs and other modules which store configuration in an SQL or LDAP server, which is not acceptable as there is no need for configuration! Those 3 necessary arguments can be generated at runtime.
    - I have seen some Perl suggestions like this, but as the author states, $s->add_config would add a directive after every request, thus leading to a memory leak, and $r->add_config seems not to be a feasible solution.

    Read the article

  • Learning to implement dynamically typed language compiler

    - by TriArc
    I'm interested in learning how to create a compiler for a dynamically typed language. Most compiler books, college courses and articles/tutorials I've come across are specifically for statically typed languages. I've thought of a few ways to do it, but I'd like to know how it's usually done. I know type inferencing is a pretty common strategy, but what about others? Where can I find out more about how to create a dynamically typed language? Edit 1: I meant dynamically typed. Sorry about the confusion. I've written toy compilers for statically typed languages and written some interpreters for dynamically typed languages. Now, I'm interested in learning more about creating compilers for a dynamically typed language. I'm specifically experimenting with LLVM and since I need to specify the type of every method and argument, I'm thinking of ways to implement a dynamically typed language on something like LLVM.
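
    For reference, a minimal C# sketch (invented names, not tied to LLVM) of the common "uniform representation" approach used when targeting a statically typed backend: every value is boxed behind one static type carrying a runtime tag, and operations dispatch on that tag at run time.

        using System;

        enum Tag { Int, Str }

        // The single static type the generated code traffics in; the "typing" moves to run time.
        sealed class Value
        {
            public Tag Kind;
            public long Int;     // payload when Kind == Tag.Int
            public string Str;   // payload when Kind == Tag.Str

            public static Value OfInt(long i) => new Value { Kind = Tag.Int, Int = i };
            public static Value OfStr(string s) => new Value { Kind = Tag.Str, Str = s };
        }

        static class Runtime
        {
            // Generated code never adds raw machine values; it calls runtime helpers like this.
            public static Value Add(Value a, Value b)
            {
                if (a.Kind == Tag.Int && b.Kind == Tag.Int) return Value.OfInt(a.Int + b.Int);
                if (a.Kind == Tag.Str && b.Kind == Tag.Str) return Value.OfStr(a.Str + b.Str);
                throw new InvalidOperationException("type error: cannot add " + a.Kind + " and " + b.Kind);
            }
        }

        static class Demo
        {
            static void Main()
            {
                // On a backend like LLVM, every function would likewise take and return the uniform Value type.
                Value three = Runtime.Add(Value.OfInt(1), Value.OfInt(2));
                Console.WriteLine(three.Int);   // prints 3
            }
        }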

    Read the article

  • unit testing variable state explicit tests in dynamically typed languages

    - by kris welsh
    I have heard that a desirable quality of unit tests is that they test each scenario independently. I realised whilst writing tests today that when you compare a variable with another value in a statement like:

        assertEquals("foo", otherObject.stringFoo);

    you are really testing three things: the variable you are testing exists and is within scope; the variable you are testing is the expected type; and the variable you are testing has the value you expect. Which raises the question of whether you should test for each of these explicitly, so that a test failure occurs on the specific line that checks for that particular problem:

        assertTrue(stringFoo);
        assertTrue(stringFoo.typeOf() == "String");
        assertEquals("foo", otherObject.stringFoo);

    For example, if the variable were an integer instead of a string, the test case failure would be on line 2, which would give you more feedback on what went wrong. Should you test for this kind of thing explicitly, or am I overthinking this?
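
    For comparison, a minimal NUnit-style C# sketch of the three-step version (OtherObject is a hypothetical class; its member is typed as object so the type check is meaningful, mirroring the dynamically typed case):

        using NUnit.Framework;

        public class OtherObject
        {
            // Typed as object so the runtime type is not known statically.
            public object StringFoo = "foo";
        }

        [TestFixture]
        public class OtherObjectTests
        {
            [Test]
            public void StringFoo_Exists_IsAString_AndHasExpectedValue()
            {
                var otherObject = new OtherObject();

                Assert.IsNotNull(otherObject.StringFoo);             // 1. it exists / is set
                Assert.IsInstanceOf<string>(otherObject.StringFoo);  // 2. it is the expected type
                Assert.AreEqual("foo", otherObject.StringFoo);       // 3. it has the expected value
            }
        }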

    Read the article

  • Winamp question: Generating 'dynamic playlists' from file playlists -OR- mass-tagging by file playli

    - by Daddy Warbox
    I'm trying to think of a way to do this. I sort my songs into a variety of playlists corresponding to different 'moods' I might have as I listen to them, and some songs fit more than one kind of mood (e.g. a jazz song might be 'stylish' and 'emotional', or something to that effect). I also give them star ratings for a general sort of opinion about them. I want to be able to filter and sort my media library by the moods I want or don't want, as well as by star rating. Anyone have a good way to do something like this? I can't seem to use Winamp's dynamic playlists to generate lists from existing filesystem playlists (e.g. songs in a given .m3u file). Hand-tagging files with Winamp's tag editor is a royal pain. It's trouble enough just giving a star rating and sorting into playlists as is. If there is a way to mass-tag songs within each playlist with mood words to allow me to create dynamic playlists, I'd be fine (for now). It'd be nice if I could do this via some kind of hotkey for each song, too. I'm looking to see if I can use a macro program or something to do that, though. Thanks in advance. P.S.: Alternatively, would something like Foobar have functions like this? Note: Italics are recent edits.

    Read the article

  • C++ difference between "char *" and "char * = new char[]"

    - by nashmaniac
    So, if I want to declare an array of characters I can go one of these ways:

        char a[2];
        char * a;
        char * a = new char[2];

    Ignoring the first declaration, the other two use pointers. As far as I know, the array in the third declaration is stored on the heap and is freed using the delete operator. Does the second declaration also hold the array on the heap? Does that mean that something stored on the heap and not freed can be used anywhere in a file, like a variable with file linkage? I tried both the second and third declarations in one function and then used the variable in another, but it didn't work - why? Are there any other differences between the second and third declarations?

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is not able to communicate with those clients. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down when it starts or how many entries it affects, but slowly you start seeing 1 static ARP entry, then 5, then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only thing different between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this and it seems that this may happen if any PXE-type services are running; from what I can tell, there is nothing running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries start to turn into static ARP entries. Right now my solution is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?

    Read the article

  • C++ simple arrays and pointers question

    - by nashmaniac
    So here's the confusion. Let's say I declare an array of characters:

        char name[3] = "Sam";

    and then I declare another array, but this time using a pointer:

        char * name = "Sam";

    What's the difference between the two? I mean, they work the same way in a program. Also, how does the latter store the size of the stuff that someone puts in it - in this case 3 characters? And how is it different from:

        char * name = new char[3];

    If those three are different, where should each be used - I mean, in what circumstances?

    Read the article

  • What should developers know about Windows executable binary file compression?

    - by Peter Turner
    I'd never heard of this before, so shame on me, but programs like UPX can compress my files by 80%, which is totally sweet, but I have no idea what the disadvantages are in doing this. Or even what the compressor does. The website linked above doesn't say anything about dynamically linking DLLs, but it mentions compressing DESCENT 2 and Netscape 4.06. Also, it doesn't say what the tradeoffs are, only the benefits. If there weren't tradeoffs, why wouldn't my linker compress the file? If I have an environment where I have one executable and 20-30 DLLs, some of which are dynamically loaded and unloaded fairly arbitrarily, but not in loops (hopefully), do I take a big hit in processing time decompressing these DLLs when they're used?

    Read the article

  • How can I convert a 2D bitmap (Used for terrain) to a 2D polygon mesh for collision?

    - by Megadanxzero
    So I'm making an artillery-type game, sort of similar to Worms with all the usual stuff like destructible terrain etc... and while I could use per-pixel collision, that doesn't give me collision normals or anything like that. Converting it all to a mesh would also mean I could use an existing physics library, which would be better than anything I can make by myself. I've seen people mention doing this by using Marching Squares to get contours in the bitmap, but I can't find anything which mentions how to turn these into a mesh (unless it refers to a 3D mesh with contour lines defining different heights, which is NOT what I want). At the moment I can get a basic Marching Squares contour which looks something like this (where the grid-like lines in the background would be the Marching Squares 'cells'): That needs to be interpolated to get a smoother, more accurate result, but that's the general idea. I had a couple of ideas for how to turn this into a mesh, but many of them wouldn't work in certain cases, and the one which I thought would work perfectly has turned out to be very slow and I've not even finished it yet! Ideally I'd like whatever I end up using to be fast enough to do every frame for cases such as rapidly-firing weapons, or digging tools. I'm thinking there must be some kind of existing algorithm/technique for turning something like this into a mesh, but I can't seem to find anything. I've looked at some things like Delaunay Triangulation, but as far as I can tell that won't correctly handle concave shapes like the above example, and also wouldn't account for holes within the terrain. I'll go through the technique I came up with for comparison and I guess I'll see if anyone has a better idea. First of all, interpolate the Marching Squares contour lines, creating vertices from the line ends, and getting vertices where lines cross cell edges (important). Then, for each cell containing vertices, create polygons by using 2 vertices, and a cell corner as the 3rd vertex (probably the closest corner). Do this for each cell and I think you should have a mesh which accurately represents the original bitmap (though there will only be polygons at the edges of the bitmap, and large filled-in areas in between will be empty). The only problem with this is that it involves looping through every pixel once for the initial Marching Squares, then looping through every cell ((image height + 1) x (image width + 1)) at least twice, which ends up being really slow for any decently sized image...
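
    For reference, a small C# sketch of the cell-classification step that Marching Squares starts from (names invented; the 16-entry lookup that turns each case index into contour segments is omitted), just to make the per-cell part of the discussion concrete:

        static class MarchingSquaresSketch
        {
            // solid[x, y] is true where the terrain bitmap is filled in.
            // Returns a 4-bit case index per cell; a 16-entry lookup table (not shown here)
            // maps each index to the contour segment(s) crossing that cell.
            public static int[,] ClassifyCells(bool[,] solid)
            {
                int w = solid.GetLength(0), h = solid.GetLength(1);
                var cases = new int[w - 1, h - 1];

                for (int x = 0; x < w - 1; x++)
                {
                    for (int y = 0; y < h - 1; y++)
                    {
                        int index = 0;
                        if (solid[x, y]) index |= 1;          // top-left corner
                        if (solid[x + 1, y]) index |= 2;      // top-right corner
                        if (solid[x + 1, y + 1]) index |= 4;  // bottom-right corner
                        if (solid[x, y + 1]) index |= 8;      // bottom-left corner

                        // 0 = empty cell, 15 = fully solid cell; anything else is crossed by the contour.
                        cases[x, y] = index;
                    }
                }
                return cases;
            }
        }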

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it's a superior concept to namespaces, let me recapitulate what namespaces are and why they've been so good to us over the years… Namespaces are used for a few different things: Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows Explorer, for example, has tried in a few different ways to flatten the file system hierarchy for the user. 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it's not desirable in any way today. 2 may not always be reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note, however, that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think we need to import "Microsoft.SqlServer.Management.Smo.Agent" and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you'll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS:

        var http = require("http");

    This is of course importing an HTTP stack module into the code. There is no noise here. Let's break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module's code. Whatever scoping mechanism is provided by the language would be fine here.
    Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the front line. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names, and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client side, on the left of that assignment. What if I have two libraries with the name “http”? Well, you can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don't really want that, do you? RequireJS' module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single-responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that's bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this, for example. What do you think? This is the half-baked result of some morning shower reflections, and I'd love to read your thoughts about it. What am I missing?

    Read the article

  • Need to make animation whereby the character shatters into a bunch of pieces

    - by theprojectabot
    I would like to take a 3d character model, cut out a bunch of shapes (or a bunch of triangles in the shape of the pieces I want) and then have the pieces separate from each other at the beginning of the animation and fall apart with gravity so it looks like the model is falling apart in shattered pieces. Is there a way to run a script on a mesh, cut out these pieces, instantiate all of them as separate models and then run gravity on them during the simulation?

    Read the article

  • SQL Strings vs. Conditional SQL Statements

    - by Yatrix
    Is there an advantage to piecing SQL strings together versus using conditional SQL statements in SQL Server itself? I have only about 10 months of SQL experience, so I could be speaking out of pure ignorance here. Where I work, I see people building entire queries in strings and concatenating strings together depending on conditions. For example:

        Set @sql = 'Select column1, column2 from Table 1 '
        If SomeCondition
            Set @sql = @sql + 'where column3 = ' + @param1
        else
            Set @sql = @sql + 'where column4 = ' + @param2

    That's a really simple example, but what I'm seeing here is multiple joins and huge queries built from strings and then executed. Some of them even write out what's basically a function to execute, including Declare statements, variables, etc. Is there an advantage to doing it this way when you could do it with just conditions in the SQL itself? To me, it seems a lot harder to debug, change and even write versus adding CASEs, IF-ELSEs or additional WHERE parameters to branch the query.
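
    For comparison, a minimal C# sketch (hypothetical table and column names, connection assumed open, using System.Data.SqlClient) of the non-concatenated form driven from application code, where the branch lives in the WHERE clause and the values travel as parameters:

        using System;
        using System.Data.SqlClient;

        class QueryExample
        {
            static void Run(SqlConnection connection, bool someCondition, string param1, string param2)
            {
                // One static statement; nothing is concatenated into the SQL text.
                using (var cmd = new SqlCommand(
                    "SELECT column1, column2 FROM Table1 " +
                    "WHERE (@useFirst = 1 AND column3 = @param1) " +
                    "   OR (@useFirst = 0 AND column4 = @param2)", connection))
                {
                    cmd.Parameters.AddWithValue("@useFirst", someCondition ? 1 : 0);
                    cmd.Parameters.AddWithValue("@param1", param1);
                    cmd.Parameters.AddWithValue("@param2", param2);

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}, {1}", reader[0], reader[1]);
                    }
                }
            }
        }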

    Read the article

  • Dynamically vs Statically typed languages studies

    - by Winston Ewert
    Do there exist studies on the effectiveness of statically vs. dynamically typed languages? In particular: measurements of programmer productivity, and defect rate. Also including the effects of whether or not unit testing is employed. I've seen lots of discussion of the merits of either side, but I'm wondering whether anyone has done a study on it. Edit: Sadly, only one of the papers shown is actually a study, and it does nothing but conclude that the language matters. This leads me to ponder: what if I proposed doing such a study with volunteers from this site?

    Read the article

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows & Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates dynamic DNS entries. We are noticing major inconsistencies with the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch from the wired to the wireless adapter, which makes sense, as this sequence occurs: the user enables the wired adapter and registers a proper DNS entry; the user enables the wireless adapter and registers a second proper DNS entry; the user switches off wireless manually and the second entry improperly remains until scavenging. Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process. For example, the user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the help desk ticketing app. We have implemented multiple solutions and band-aids for different symptoms of the problem, such as: using DNS reservations for Macintosh PCs; using DNS scavenging to remove old records; switching from a Cisco DHCP server to the Windows DHCP server. But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments & suggestions are much appreciated, /P

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation run on them. Records will have a combination of these functions run: Sum, Aggregate, Sum over the last 90 seconds, or Ignore. A data record looks like this: Date;Data;ID. Question: Assuming that ID is an int of some kind, and that int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");
                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);
                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
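
    One possible shape for the "launch map" itself - a sketch with invented names, not a complete design: a dictionary from record ID to the set of delegates to run, looked up once per record and invoked (or queued to the thread pool) for each entry.

        using System;
        using System.Collections.Generic;

        class Record
        {
            public DateTime Date;
            public int Data;
            public int Id;
        }

        static class LaunchMapSketch
        {
            // Hypothetical operations; the real ones would be Sum, Aggregate, Sum-over-90s, etc.
            static int Sum(int value)   { return value; }
            static int Count(int value) { return 1; }
            static int Print(int value) { Console.WriteLine(value); return value; }

            // The "launch map": each record ID maps to the delegates that should run for it.
            static readonly Dictionary<int, Func<int, int>[]> Map = new Dictionary<int, Func<int, int>[]>
            {
                { 1, new Func<int, int>[] { Sum, Print } },
                { 2, new Func<int, int>[] { Count } },
                { 3, new Func<int, int>[] { } },            // ignore
            };

            static void Process(IEnumerable<Record> stream)
            {
                foreach (var record in stream)
                {
                    Func<int, int>[] ops;
                    if (Map.TryGetValue(record.Id, out ops))
                    {
                        foreach (var op in ops)
                            op(record.Data);                 // could also be queued to the thread pool
                    }
                }
            }
        }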

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using? And does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in indexing, and I know there is some memory waste there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)

    Read the article

  • PropertyGrid: Merging multiple dynamic properties when editing multiple objects

    - by Andrei Stanescu
    Hi, let's say I have a class A and a class B. I would like to edit multiple instances of A and B simultaneously using the .NET PropertyGrid. The desired behavior would be to have the intersection of their properties displayed. If A and B have static (written in the source code) properties, everything works fine: selecting A and B instances will only display the intersection of properties. However, if A and B also have dynamic properties (returned as a PropertyDescriptorCollection through the GetProperties() method), the behavior is wrong. When selecting multiple objects I only see the static properties and none of the dynamic ones. When I select only one instance I can see all properties (static and dynamic). Anybody have any ideas? I couldn't find anything on the internet.

    Read the article

  • Algorithm and data structure learning resources for dynamic programming

    - by Pranav
    I'm learning dynamic programming now, and while I know the theory well, designing DP algorithms for new problems is still difficult. This is what I would really like now: a book or a website which poses a problem that can be solved by dynamic programming, with a solution and an explanation available that I can look at if I can't solve the problem even after banging my head against it for a few hours. Is there some resource that provides this sort of thing for several categories of algorithms - like graph algorithms, dynamic programming, etc.? P.S. I considered TopCoder, but the solutions there are not really appropriate for learning to implement efficient solutions.

    Read the article

  • How to make schema and code dynamic?

    - by Jonarch
    I want to make my database schema and application code as dynamic as possible to handle "unknown" use cases and changes. I'm developing in PHP and MySQL. Twice now I have had to change my entire schema, including table and column names, and this means the developers have to go back to the application code and modify all the SQL queries and table/column names. So, to prevent this: just as we make things like page content and the title bar dynamic with a %variable%, can we do the same for the schema, and maybe even for the PHP functions and classes somehow? It takes weeks to redo all the changes like this, whereas if it were dynamic it could be done in under a day.

    Read the article
