Search Results

Search found 1793 results on 72 pages for 'effort'.

Page 58/72 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • SOAP web services in haskell?

    - by Dave
    I have to write a bunch of small web services. They must be defined by a WSDL and work via SOAP-RPC, in order to work with an existing workflow engine and service registry framework. I can, however, serve them on a service stack/platform of my choice. I'm presently writing them in Java, and it's not too bad. But I'm thinking my life might be easier if I was able to write these services in Haskell. Searching on Google, it looks like, once upon a time, someone else had the same idea and started a project called "HAIFA". However, it looks like HAIFA hasn't been maintained for some years, and I couldn't find any other frameworks supporting serving up services written in Haskell as SOAP web services. Does anyone know of any other frameworks that will allow me to easily write SOAP-based web services using Haskell? If not, has anyone done this manually (i.e., use XML libraries from hackage to process the incoming soap-rpc requests, and create soap-rpc compliant replies)? Was it difficult to do? Any gotchas? Was it worth the effort? Thanks in advance!

    Read the article

  • Replacing all GUIDs in a file with new GUIDs from the command line

    - by Josh Petrie
    I have a file containing a large number of occurrences of the string Guid="GUID HERE" (where GUID HERE is a unique GUID at each occurrence) and I want to replace every existing GUID with a new unique GUID. This is on a Windows development machine, so I can generate unique GUIDs with uuidgen.exe (which produces a GUID on stdout every time it is run). I have sed and such available (but no awk, oddly enough).

    I am basically trying to figure out if it is possible (and if so, how) to use the output of a command-line program as the replacement text in a sed substitution expression, so that I can make this replacement with a minimum of effort on my part. I don't need to use sed -- if there's another way to do it, such as some crazy vim-fu or some other program, that would work as well -- but I'd prefer solutions that utilize a minimal set of *nix programs since I'm not really on *nix machines.

    To be clear, if I have a file like this:

        etc etc Guid="A"
        etc etc Guid="B"

    I would like it to become this:

        etc etc Guid="C"
        etc etc Guid="D"

    where A, B, C, D are actual GUIDs, of course.

    (For example, I have seen xargs used for things similar to this, but it's not available on the machines I need this to run on, either. I could install it if it's really the only way, although I'd rather not.)
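    One hedged way to do this with just sed and uuidgen.exe is sketched below: first stamp every existing GUID with a placeholder, then substitute the placeholders one at a time so each gets its own fresh GUID. This assumes GNU sed (for -i and the 0,/.../ "first match only" address form), a POSIX-ish shell such as the ones that ship with Cygwin or Git for Windows, and GUIDs in the bare 8-4-4-4-12 form; the file name is made up for the example.

        # a minimal sketch, not the asker's solution: two-pass replacement with GNU sed
        f=product.wxs    # hypothetical file name

        # pass 1: mark every existing GUID with a placeholder
        sed -i 's/Guid="[0-9A-Fa-f-]\{36\}"/Guid="@NEWGUID@"/g' "$f"

        # pass 2: replace the placeholders one at a time, each with a fresh GUID
        while grep -q '@NEWGUID@' "$f"; do
            new=$(uuidgen.exe | tr -d '\r')          # one new GUID per pass; strip a stray CR on Windows
            sed -i "0,/@NEWGUID@/s//$new/" "$f"      # GNU sed: substitute the first occurrence only
        done

    The same two-pass idea works with any tool that can do a "replace first match only" substitution, so it is not tied to sed specifically.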

    Read the article

  • Developing ASP.Net User Control to be imported to SharePoint MOSS 2007

    - by Don Kirkham
    Apologies if this has been answered, but I could not find a similar question: I am developing a webpart for MOSS 2007. I am using WSPBuilder to build a visual webpart (ascx) and everything works fine, but the development/debug cycle is just painfully slow, so I'd like to know if it is possible (without being too painful) to develop the user control faster in a .NET Web Application project with all of the nice F5 debugging, then import the final product into my SharePoint visual webpart. The user control interacts with a LOB system (SQL) and does not reference the SharePoint API at all. (The reason I am building this as a webpart is that I don't need another web app to run this one page, so putting it into a webpart on a new webpart page on my existing site is the best solution IMO.)

    I would obviously need to import (reference?) my data access classes into my "temp" web app, but I think that would not be too much trouble. I realize this will be extra effort to set up, but I'm thinking the payoff will be reduced development time on the actual user control using a little web application, versus the compile/build WSP/deploy WSP/reset IIS/test/make a change/repeat cycle that MOSS requires. (I guess SP2010/VS2010 has spoiled me with the native SharePoint tools available.)

    Read the article

  • Serializing Configurations for a Dependency Injection / Inversion of Control

    - by Joshua Starner
    I've been researching Dependency Injection and Inversion of Control practices lately in an effort to improve the architecture of our application framework and I can't seem to find a good answer to this question. It's very likely that I have my terminology confused, mixed up, or that I'm just naive to the concept right now, so any links or clarification would be appreciated.

    Many examples of DI and IoC containers don't illustrate how the container will connect things together when you have a "library" of possible "plugins", or how to "serialize" a given configuration. (From what I've read about MEF, having multiple declarations of [Export] for the same type will not work if your object only requires 1 [Import].) Maybe that's a different pattern or I'm blinded by my current way of thinking.

    Here's some code for an example reference:

        public abstract class Engine { }

        public class FastEngine : Engine { }
        public class MediumEngine : Engine { }
        public class SlowEngine : Engine { }

        public class Car
        {
            public Car(Engine e)
            {
                engine = e;
            }
            private Engine engine;
        }

    This post talks about "Fine-grained context" where 2 instances of the same object need different implementations of the "Engine" class: http://stackoverflow.com/questions/2176833/ioc-resolve-vs-constructor-injection

    Is there a good framework that helps you configure or serialize a configuration to achieve something like this without hard coding it or hand-rolling the code to do this?

        public class Application
        {
            public void Go()
            {
                Car c1 = new Car(new FastEngine());
                Car c2 = new Car(new SlowEngine());
            }
        }

    Sample XML:

        <XML>
          <Cars>
            <Car name="c1" engine="FastEngine" />
            <Car name="c2" engine="SlowEngine" />
          </Cars>
        </XML>
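    For what it's worth, here is a minimal, hand-rolled sketch of the kind of wiring such a configuration implies, not the API of any particular container: it reads the sample XML above (assumed to be saved as cars.xml) and builds the named Car instances via reflection. The file name and the assumption that the engine types live, un-namespaced, in the executing assembly are illustrative only.

        // a minimal sketch, not a specific container's API
        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Xml.Linq;

        public static class CarConfigLoader
        {
            public static Dictionary<string, Car> Load(string path)
            {
                var cars = new Dictionary<string, Car>();
                foreach (var el in XDocument.Load(path).Descendants("Car"))
                {
                    string name = (string)el.Attribute("name");
                    string engineName = (string)el.Attribute("engine");   // e.g. "FastEngine"
                    // assumes the engine types have no namespace and live in this assembly;
                    // otherwise prefix engineName accordingly
                    Type engineType = Assembly.GetExecutingAssembly().GetType(engineName);
                    var engine = (Engine)Activator.CreateInstance(engineType);
                    cars[name] = new Car(engine);
                }
                return cars;
            }
        }

    Containers generally offer the same effect through named registrations plus an XML or code-based configuration; the sketch is only meant to show the shape of the name-to-implementation mapping being asked about.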

    Read the article

  • Programmatically configure MATLAB

    - by Richie Cotton
    Since MathWorks release a new version of MATLAB every six months, it's a bit of a hassle having to set up the latest version each time. What I'd like is an automatic way of configuring MATLAB, to save wasting time on administrative hassle. The sorts of things I usually do when I get a new version are:

    - Add commonly used directories to the path.
    - Create some toolbar shortcuts.
    - Change some GUI preferences.

    The first task is easy to accomplish programmatically with addpath and savepath. The next two are not so simple.

    Details of shortcuts are stored in the file shortcuts.xml, in the folder given by prefdir. My best idea so far is to use one of the XML toolboxes in the MATLAB Central File Exchange to read in this file, add some shortcut details and write them back to file. This seems like quite a lot of effort, and that usually means I've missed an existing utility function. Is there an easier way of (programmatically) adding shortcuts?

    Changing the GUI preferences seems even trickier. preferences just opens the GUI preference editor (equivalent to File - Preferences); setpref doesn't seem to cover GUI options. The GUI preferences are stored in matlab.prf (again in prefdir), this time in traditional name=value config style. I could try overwriting values in this directly, but it isn't always clear what each line does, or how much the names differ between releases, or how broken MATLAB will be if this file contains dodgy values. I realise that this is a long shot, but are the contents of matlab.prf documented anywhere? Or is there a better way of configuring the GUI?

    For extra credit, how do you set up your copy of MATLAB? Are there any other tweaks I've missed that it is possible to alter via a script?
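    For completeness, the "easy" first item looks roughly like the sketch below when scripted; the folder names are placeholders for your own commonly used directories, and this does nothing for the shortcuts or GUI preferences parts of the question.

        % a minimal sketch covering only the path setup
        dirs = {'C:\work\matlab\utils', 'C:\work\matlab\projects'};   % placeholder folders
        for k = 1:numel(dirs)
            addpath(genpath(dirs{k}));   % add each folder and its subfolders
        end
        savepath;                        % persist the change for future sessions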

    Read the article

  • Idiomatic PHP web page creation

    - by GreenMatt
    My PHP experience is rather limited. I've just inherited some stuff that looks odd to me, and I'd like to know if this is a standard way to do things. The page which shows up in the browser location (e.g. www.example.com/example_page) has something like:

        <?
        $title = "Page Title";
        $meta = "Some metadata";
        require("pageheader.inc");
        ?>
        <!-- Content -->

    Then pageheader.inc has stuff like:

        <?
        @$title = ($title) ? $title : "";
        @$meta = ($meta) ? $meta : "";
        ?>
        <html>
        <head>
        <title><?=$title?></title>
        </head>
        <!-- and so forth -->

    Maybe others find this style useful, but it confuses me. I suppose this could be a step toward a rudimentary content management system, but the way it works here I'd think it adds to the processing the server has to do without reducing the load on the web developer enough to make it worth the effort.

    So, is this a normal way to create pages with PHP? Or should I pull all this in favor of a better approach? Also, I know that "<?" (vs. "<?php") is undesirable; I'm just reproducing what is in the code.

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet.

    I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
    - Combine external CSS (Red warning)
    - Combine external JavaScript (Red warning)
    - Enable gzip compression (Red warning)
    - Leverage browser caching (Red warning)
    - Leverage proxy caching (Amber warning)
    - Minimise cookie size (Amber warning)
    - Parallelize downloads across hostnames (Amber warning)
    - Serve static content from a cookieless domain (Amber warning)

    Web Page Performance
    - Remove unused CSS rules (Amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day?

    I've tried searching for this but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • CSS Clearing Floats

    - by Frank
    I'm making more of an effort to separate my HTML structure from presentation, but sometimes when I look at the complexity of the hacks or workarounds needed to make things work cross-browser, I'm amazed at the huge collective waste of productive hours that is put into this.

    As I understand it, floats were never created for creating layouts, but because many layouts need a footer, that's how they're often being used. To clear the floats, you can add an empty div that clears both sides (div class="clear"). That is simple and works cross-browser, but it adds "non-semantic" HTML rather than solving the presentation problem within the CSS. I realize this, but after looking at all of the solutions with their benefits and drawbacks, it seems to make more sense to go with the empty div (predictable behavior across browsers), rather than create separate stylesheets, including various CSS hacks and workarounds, etc., which would also need to change as CSS evolves.

    Is it OK to do this as long as you do understand what you're doing and why you're doing it? Or is it better to find the CSS workarounds and hacks and separate structure from presentation at all costs, even when the CSS presentation tools provided are not evolved to the point where they can handle such basic layout issues?
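    For reference, the two approaches being weighed look roughly like this; the class names are illustrative, and the :after rule is one common "clearfix" variant rather than the only CSS-only option.

        /* approach 1: the extra, "non-semantic" element in the markup,
           i.e. <div class="clear"></div> placed after the floated columns */
        .clear { clear: both; }

        /* approach 2: a CSS-only alternative (one common clearfix variant);
           add class="group" to the element that contains the floats.
           Very old IE may need its own layout trigger, e.g. zoom: 1. */
        .group:after {
            content: "";
            display: block;
            clear: both;
        }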

    Read the article

  • ecommerce platform or from scratch? customer specific catalogs and purchase orders

    - by rafi
    I have a possible freelance job in front of me for a distributor who wants product ordering set up, but the orders are all P.O.s basically - no actual credit card or PayPal transaction. The customer is simply billed and the order archived. Customers will need to log in to this site, and each customer will have their own custom catalog of a few dozen products which have been set up via a control panel this distributor uses. So there will be a master catalog of over 1,000 products (perhaps browsable but not to be ordered from on the site), but each customer will only be able to order from the products specified for their accounts.

    I know I can build this from scratch, but I figured it's worth looking into what ecommerce platforms would give me a nice head start. Obviously shopping cart, order history and catalog management are concepts that I can reuse, but are any of the ecommerce systems out there also capable of handling custom catalogs (maybe as multi-stores?) or transactions billed to accounts without a credit card? The more I could reuse the better. I've messed with osCommerce (way back) and a little Zen Cart more recently. I've also worked on a number of totally custom e-commerce sites. But my knowledge of the open source e-commerce tools is pretty limited, and I'm trying to keep the effort as simple as I possibly can on this. I'm pretty flexible on the language of the platform, by the way. Thanks in advance.

    Read the article

  • Are there any code critique sites or similar resources?

    - by Ukko
    I have noticed when people post example code illustrating some issue that they are having often they will gather a number of comments addressing the quality of the code they presented and not the actual problem asked. This is very helpful--if not well directed. Often, this is wasted effort since the asker is often not receptive and the code is often chopped down to something small to post leaving lots of rough edges. In the old days you would see people asking questions like this on comp.lang.lisp and other parts of the comp.lang hierarchy. But that bit of the net kind of sank into the sewers of neglect. Is there a comparable one-stop-shop today? I am partially asking for selfish reasons, I know how to write good idiomatic C, Lisp, O'Caml, and Java code. But I learned C++ pre-template and STL, those rusty skills are not really applicable to today's C++. I have picked up languages like Scala in a vacuum and get by, but am I really doing it correctly? There are so many ways you can abuse a language, I am currently working against a codebase of Fortran written in C, and I recognize and loathe the "that guy" who wrote it. I don't want to be someone else's "that guy" if I can help it. Just because it works does not mean that one did not totally miss the boat on how it should have been done. Do you seek out this type of critique? If so how, where and why? What types of benefits do you derive from it? How about abuse and trolls?

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about, so it is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like #include <cstdlib>, everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

        #include <stdlib.h>

        namespace std {
            using ::abort;
            // etc....
        }

    Which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace). Which is of course a lot of effort with little to no benefit.

    The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main()
        {
            ::printf("hello world\n");
        }

    Read the article

  • Using MATLAB's plotting features as an interactive part of a Fortran program

    - by ldigas
    Although many of you will have a decent idea of what I'm aiming at, just from reading the title -- allow me a simple introduction still. I have a Fortran program - it consists of a program, some internal subroutines, 7 modules with its own procedures, and ... uhmm, that's it. Without going into much detail, for I don't think it's necessary at this point, what would be the easiest way to use MATLAB's plotting features (mainly plot(x,y) with some customizations) as an interactive part of my program ? For now I'm using some of my own custom plotting routines (based on HPGL and Calcomp's routines), but just as part of an exercise on my part, I'd like to see where this could go and how would it work (is it even possible what I'm suggesting?). Also, how much effort would it take on my part ? I know this subject has been rather extensively described in many "tutorials" on the net, but for some reason I have trouble finding the really simple yet illustrative introductory ones. So if anyone can post an example or two, simple ones, I'd be really grateful. Or just take me by the hand and guide me through one working example. platform: IVF 11.something :) on Win XP SP2, Matlab 2008b

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product).

    What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers

    Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel" - it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks; I would much rather use my own code and iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)
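    To make concrete what such a rewrite involves, here is an illustrative pair: a typical jQuery snippet and a plain-DOM equivalent. The selector and class names are made up for the example, and the plain version assumes a reasonably modern browser.

        // jQuery version
        $('.row').click(function () {
            $(this).toggleClass('selected');
        });

        // plain-DOM equivalent (modern browsers)
        document.querySelectorAll('.row').forEach(function (el) {
            el.addEventListener('click', function () {
                el.classList.toggle('selected');
            });
        });

    Whether the difference is measurable usually depends on how much time is spent in the DOM work itself rather than in the wrapper, which is exactly what the planned benchmark would show.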

    Read the article

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and Sql Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, and with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .Net framework (mostly in C#) it seems logical to stick with this unless there is some compelling reason to switch frameworks/platform for some aspects. We're thinking of your standard Database-Data Integration layer-Business Objects Layer-Web Services (or REST) layer-Client Application plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    - One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    - Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed?

    (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)
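    To make the maintenance argument concrete, this is roughly what such an hbm.xml mapping looks like; the class, table, column and assembly names are invented for the example. A renamed column means changing one column attribute here instead of touching every hand-written SELECT, INSERT and UPDATE, while the length attribute is the kind of setting the post warns against blindly trusting for validation.

        <?xml version="1.0" encoding="utf-8" ?>
        <!-- illustrative mapping only; names are made up -->
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyApp" namespace="MyApp.Domain">
          <class name="Customer" table="Customers">
            <id name="Id">
              <generator class="native" />
            </id>
            <!-- if the database column is renamed, only this attribute changes -->
            <property name="Name" column="CustomerName" length="50" not-null="true" />
          </class>
        </hibernate-mapping>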

    Read the article

  • C# + Querying XML with LINQ

    - by user336786
    Hello, I'm learning to use LINQ. I have seen some videos online that have really impressed me. In an effort to learn LINQ myself, I decided to try to write a query against the NOAA web service. If you put "http://www.weather.gov/forecasts/xml/sample_products/browser_interface/ndfdBrowserClientByDay.php?zipCodeList=20001&format=24+hourly&startDate=2010-06-10&numDays=5" in your browser's address bar, you will see some XML. I have successfully retrieved that XML in a C# program. I am loading the XML into a LINQable entity by doing the following:

        string xml = QueryWeatherService();
        XDocument weather = XDocument.Parse(xml);

    I have a class called DailyForecast defined as follows:

        public class DailyForecast
        {
            public float HighTemperature { get; set; }
            public float LowTemperature { get; set; }
            public float PrecipitationPossibility { get; set; }
            public string WeatherSummary { get; set; }
        }

    I'm trying to write a LINQ query that adheres to the structure of my DailyForecast class. At this time, I've only gotten this far:

        var results = from day in response.Descendants("parameters")
                      select day;

    Not very far, I know. Because of the structure of the XML returned, I'm not sure it is possible to solely use a LINQ query. I think the only way to do this is via a loop that traverses the XML. I'm seeking someone to correct me if I'm wrong. Can someone please tell me if I can get results using purely LINQ that adhere to the structure of the DailyForecast class? If so, how? Thank you!
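    As a hedged illustration that it can be done without an explicit loop, here is one possible shape of the query using LINQ to XML and Enumerable.Zip (.NET 4). The element and attribute names follow the NDFD/DWML layout as commonly documented and should be checked against the actual response; the precipitation handling in particular is simplified (the real feed may return two 12-hour values per day), so treat this as a sketch to adapt, not working code for the live feed.

        // a sketch only; needs using System.Linq; and using System.Xml.Linq;
        var parameters = weather.Descendants("parameters").First();

        var highs   = parameters.Elements("temperature")
                                .First(t => (string)t.Attribute("type") == "maximum")
                                .Elements("value");
        var lows    = parameters.Elements("temperature")
                                .First(t => (string)t.Attribute("type") == "minimum")
                                .Elements("value");
        var precip  = parameters.Element("probability-of-precipitation").Elements("value");
        var summary = parameters.Element("weather").Elements("weather-conditions");

        var forecasts = highs.Zip(lows,    (h, l)   => new { h, l })
                             .Zip(precip,  (hl, p)  => new { hl.h, hl.l, p })
                             .Zip(summary, (hlp, w) => new DailyForecast
                             {
                                 HighTemperature          = (float)hlp.h,
                                 LowTemperature           = (float)hlp.l,
                                 PrecipitationPossibility = (float)hlp.p,
                                 WeatherSummary           = (string)w.Attribute("weather-summary")
                             })
                             .ToList();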

    Read the article

  • Javascript libraries + JQuery plugins contradict? How to debug?

    - by Metafaniel
    This is somewhat of a newbie question... I make an effort every day to learn, so please understand ;) I'm not the very best expert, but I can do a decent job building good-looking and functional websites or web applications. My main tools are PHP5, HTML5, CSS 2 and 3, a database (SQLite, MySQL), and JavaScript with jQuery. I'm not an expert at all in JavaScript. I often find interesting jQuery plugins or tutorials and try to mix them up to get the functionality I need.

    This time I'm mixing maybe too many plugins and JS files from different sources. In fact, my app does what I want except for certain behaviors... There are no errors, everything looks fine, but the misbehavior persists. So maybe I need to specify a class I don't know about, or one plugin contradicts another one and I just can't understand, for example, why a <button type="button">DON'T submit</button> just submits the form...

    Anyway, my point is: do you people know a way to debug these situations??? Is there a generic tool, suggestion, workflow or something to help me understand conflicts or omissions between libraries or plugins (JavaScript libraries, my own scripts and jQuery plugins)??? I hope there is a way! THANKS A LOT FOR YOUR HELP AND COMPREHENSION! =)
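    On the specific button symptom, two usual suspects are worth isolating: a <button> without an explicit type defaults to type="submit" inside a form, and a handler added by some plugin can also submit the form programmatically. A hedged debugging snippet (the id is a placeholder) that rules out the default behaviour and other bubbling handlers while you test:

        // illustrative only, not a fix for any particular plugin
        $('#myButton').click(function (event) {
            event.preventDefault();      // stop any default submit behaviour
            event.stopPropagation();     // keep delegated handlers on ancestors out of the way while testing
            console.log('my handler ran, and only my handler');
        });

    If the form still submits with this in place, something is calling submit directly, which narrows the search to the plugins that touch that form.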

    Read the article

  • JQuery: Current, Well-Formatted, Printable Documentation?

    - by Eli
    Hi All, I'm looking for a current (1.2), well-formatted, printable version of the jQuery documentation. I've checked the alternative resources page and see the PDF versions from CF and Java, but both are out of date. The jQuery site has the API browser with "Printable Version" in the toolbox, but it prints terribly, and I don't really want to print one page or tab at a time. I have a hard time believing that there is no print doc for a tool this popular - all I want is a simple listing with descriptions and examples ON PAPER. Am I missing something? I can buy one of the books if I need to, but not sure which is for the current version. Thanks!

    Update: I can see that somebody voted this down. I know it's a pretty basic question, but it is not asked lightly or frivolously. I have made a pretty solid effort to find this on my own, and am pretty good at finding information when I need it. Perhaps the person who thought the question not worth asking knows where to find the print doc?

    Read the article

  • C: Fifo between threads, writing and reading strings

    - by Yonatan
    Hello once more, dear internet. I'm writing a small program that, among other things, writes a log file of commands received. To do that, I want to use a thread whose only job is to attempt to read from a pipe, while the main thread writes into that pipe whenever it should. Since I don't know the length of each string command, I thought about writing and reading the pointer to the char buf[MAX_MESSAGE_LEN]. Since what I've tried so far doesn't work, I'll post my best effort :P

        char str[] = "hello log thread 123456789 10 11 12 13 14 15 16 17 18 19\n";

        if (pipe(pipe_fd) != 0)
            return -1;

        pthread_t log_thread;
        pthread_create(&log_thread, NULL, log_thread_start, argv[2]);

        success_write = 0;
        do {
            write(pipe_fd[1], (void*)&str, sizeof(char*));
        } while (success_write < sizeof(char*));

    and the thread does this:

        char buffer[MAX_MSGLEN];
        int success_read;
        success_read = 0;
        //while(1) {
        do {
            success_read += read(pipe_fd[0], (void*)&buffer, sizeof(char*));
        } while (success_read < sizeof(char*));
        //}
        printf("%s", buffer);

    (Sorry if this doesn't indent, I can't seem to figure out this editor...) Oh, and pipe_fd[2] is a global parameter. So, any help with this, either by the way I thought of, or another way I could read strings without knowing the length, would be much appreciated.

    On a side note, I'm working on Eclipse IDE C/C++, version 1.2.1, and I can't seem to set up the compiler so it will link the pthread library to my project. I've resorted to writing my own Makefile to make it (pun intended :P) work. Does anyone know what to do? I've looked online, but all I find are solutions that are probably for an older version, because the tabs and option keys are different. Anyways, thanks a bunch, internet! Yonatan
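    For what it's worth, here is a minimal, self-contained sketch of the pointer-through-the-pipe idea (not the asker's code, and the message text is just an example): each write pushes a pointer to a heap-allocated string, and since a pointer-sized write to a pipe is well under PIPE_BUF it is delivered whole, so the partial-write/partial-read loops are not needed here.

        /* a minimal sketch, assuming POSIX + pthreads (link with -lpthread);
           passing raw pointers is fine here because both threads share one
           address space */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        static int pipe_fd[2];

        static void *log_thread_start(void *arg)
        {
            char *msg;
            (void)arg;  /* unused */
            /* each read gets exactly one pointer; a NULL pointer means "stop" */
            while (read(pipe_fd[0], &msg, sizeof msg) == (ssize_t)sizeof msg && msg != NULL) {
                fputs(msg, stderr);   /* stand-in for appending to the log file */
                free(msg);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t log_thread;
            char *msg;

            if (pipe(pipe_fd) != 0)
                return 1;
            pthread_create(&log_thread, NULL, log_thread_start, NULL);

            msg = strdup("hello log thread\n");      /* each message lives on the heap */
            write(pipe_fd[1], &msg, sizeof msg);     /* send the pointer, not the bytes */

            msg = NULL;
            write(pipe_fd[1], &msg, sizeof msg);     /* tell the logger to finish */

            pthread_join(log_thread, NULL);
            return 0;
        }

    An alternative that avoids sharing heap ownership is to write a length first and then that many bytes, but for a single producer and a single consumer the pointer version is the simplest thing that can work.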

    Read the article

  • Multiple generic types in one container

    - by Lirik
    I was looking at the answer to this question regarding multiple generic types in one container, and I can't really get it to work: the properties of the Metadata class are not visible, since the abstract class doesn't have them. Here is a slightly modified version of the code in the original question:

        public abstract class Metadata
        {
        }

        public class Metadata<T> : Metadata
        {
            // ... some other metadata
            public T Function { get; set; }
        }

        List<Metadata> metadataObjects;
        metadataObjects.Add(new Metadata<Func<double, double>>());
        metadataObjects.Add(new Metadata<Func<int, double>>());
        metadataObjects.Add(new Metadata<Func<double, int>>());

        foreach (Metadata md in metadataObjects)
        {
            var tmp = md.Function; // <-- Error: does not contain a definition for Function
        }

    The exact error is:

        error CS1061: 'Metadata' does not contain a definition for 'Function' and no extension method
        'Function' accepting a first argument of type 'Metadata' could be found (are you missing a
        using directive or an assembly reference?)

    I believe it's because the abstract class does not define the property Function, thus the whole effort is completely useless. Is there a way that we can get the properties?
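    For reference, one common workaround (a sketch, not the linked answer's exact code) is to surface the value through a non-generic member on the base class, so the loop can at least see it as object, and then to cast to the closed generic type wherever the full type is known:

        // a minimal sketch of the "non-generic member on the base" workaround
        public abstract class Metadata
        {
            public abstract object FunctionObject { get; }
        }

        public class Metadata<T> : Metadata
        {
            public T Function { get; set; }
            public override object FunctionObject
            {
                get { return Function; }
            }
        }

        // inside the loop from the question:
        // var tmp = md.FunctionObject;                        // visible, typed as object
        // var typed = md as Metadata<Func<double, double>>;
        // if (typed != null)
        // {
        //     double y = typed.Function(2.0);                 // full type available here
        // }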

    Read the article

  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?) , but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year with less than satisfactory answers. But more generally, where are the CMS projects that support NOSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMS are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NOSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NOSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects that have real momentum that are developing on this path? I'm looking for two things: a CMS that has as its backbone a NOSQL database enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for) a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2 Any pointers in that direction would be most welcome.

    Read the article

  • cannot eliminate space between 2 horizontal divs inside containing div

    - by wantTheBest
    Should be easy, right? Just set the outer containing div's padding to zero, and set the two side-by-side divs inside the outer div to have margin:0, but that's having no effect on the space between the 2 horizontal divs. What I need is the red-outlined left div to touch the green-outlined right-side div. Despite my effort using padding and margin, the space between the 2 divs will not go away. I have looked at many answers on SO but so far no one's broken it down to this simple example -- my fiddle shows this issue, at http://jsfiddle.net/Shomer/tLZrm/7/

    And here is the very simple code:

        <div style="border: 4px solid blue; white-space:nowrap; margin:0; padding:0; width:80%">
            <div style="display:inline-block; width:45%; overflow:hidden; margin:0; border: 1px solid red">
                Flimmy-flammy
            </div>
            <div style="display:inline-block; width:50%; overflow:hidden; margin:0px; border: 1px solid green">
                Hambone-Sammy
            </div>
        </div>
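    The gap here usually comes from the literal whitespace between the two inline-block siblings in the markup, which is rendered as a word space, so margin and padding never touch it. Two common fixes, sketched with a made-up class name:

        <!-- fix 1: remove the whitespace between the siblings in the markup -->
        <div class="outer">
            <div class="left">Flimmy-flammy</div><div class="right">Hambone-Sammy</div>
        </div>

        /* fix 2: keep the markup readable and collapse the space with font-size,
           restoring it on the children */
        .outer       { font-size: 0; }
        .outer > div { font-size: 16px; }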

    Read the article

  • Develop multiple very similar projects at once

    - by Raveren
    I am developing a semi-complicated site that is available in several countries at once. Much effort has been put in to make the code bases as similar as possible to one another, and ultimately only the config file and some representational data will differ between them. Each project has its own SVN repository which maps directly to a live test site. That part is handled by the IDE we use to work.

    Now I need to create some sort of system to keep all these projects in sync. The best theoretical solution so far is to create a local hook script that would fire on commit and:

    - Merge the committed files from the project that is being committed into all other projects
    - Optionally upload them to the live site, replacing the previous files

    The first problem is that I don't know how I would do the merging - I guess it would be like applying an SVN patch or something. The second is, if I do not want to upload the changes to the live server, how would I go about syncing the live and local code bases (replace older files?).

    The reason I am posting this question, instead of going through the potentially huge trouble of solving the aforementioned problems myself, is that I believe this is a pretty common situation, someone will already have a solution, and others may benefit from the answers in the future.

    Lastly, I'm on Windows 7, develop in PHP and use TortoiseSVN.

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location which is outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost.

    So, what is the simplest way I can have a virtual folder in my URL map to a physical folder on disk? I sort of want...

        http://myapp.com/static

    to map to a folder which I can configure, e.g. /var/www/static, so I can then have in my code...

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start.

    So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
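    Since the title asks about Tomcat specifically, one hedged option is a context descriptor that points a context path at a directory outside the WAR; the paths below are the ones from the question, and the exact conf layout can differ slightly between Tomcat versions:

        <!-- a minimal sketch: save as $CATALINA_BASE/conf/Catalina/localhost/static.xml;
             the file name ("static") becomes the context path, so /static/user1/picture.jpg
             is served straight from /var/www/static/user1/picture.jpg -->
        <Context docBase="/var/www/static" />

    Tomcat's default servlet then serves the files, so directory listings can be switched off there if browsability becomes a concern later.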

    Read the article

  • compact XSLT code to drop N number of tags if all are null.

    - by infant programmer
    This is my input XML:

        <root>
          <node1/>
          <node2/>
          <node3/>
          <node4/>
          <othertags/>
        </root>

    The output must be:

        <root>
          <othertags/>
        </root>

    If any of the 4 nodes isn't null, then none of the tags must be dropped. Example:

        <root>
          <node1/>
          <node2/>
          <node3/>
          <node4>sample_text</node4>
          <othertags/>
        </root>

    Then the output must be the same as the input XML:

        <root>
          <node1/>
          <node2/>
          <node3/>
          <node4>sample_text</node4>
          <othertags/>
        </root>

    This is the XSL code I have designed:

        <xsl:template match="@*|node()">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
        </xsl:template>

        <xsl:template match="/root/node1[.='' and ../node2/.='' and ../node3/.='' and ../node4/.='']
                            |/root/node2[.='' and ../node1/.='' and ../node3/.='' and ../node4/.='']
                            |/root/node3[.='' and ../node1/.='' and ../node2/.='' and ../node4/.='']
                            |/root/node4[.='' and ../node1/.='' and ../node2/.='' and ../node3/.='']"/>

    As you can see, the code requires more effort and becomes more bulky as the number of nodes increases. Is there any alternative way to overcome this bottleneck?
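    One hedged way to make the check grow more slowly is to write the emptiness test once, against all four siblings, instead of repeating it in every match pattern; a sketch (XSLT 1.0, using the node1..node4 names from the question, alongside the same identity template):

        <!-- drops node1..node4 only when none of the four has any content;
             otherwise copies each of them through unchanged -->
        <xsl:template match="root/node1 | root/node2 | root/node3 | root/node4">
          <xsl:if test="../node1 != '' or ../node2 != '' or ../node3 != '' or ../node4 != ''">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:if>
        </xsl:template>

    Adding a fifth node then means extending the match list and the test once each, rather than adding a whole new predicate-heavy pattern.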

    Read the article
