Search Results

Search found 1650 results on 66 pages for 'independent contractor'.

  • Is the MySQL FOSS License Exception transitive - does it remove the GPL restrictions for downstream

    - by Eric
    I'm looking at building a MySQL client plugin for a proprietary product, which would violate the GPL as discussed in the FAQ at http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins. However, according to the MySQL FOSS License Exception ("FLE"), discussed at http://www.mysql.com/about/legal/licensing/foss-exception/, an open-source product built with the client library may be licensed under any of a long list of alternative licenses. The oursql library (https://launchpad.net/oursql) is BSD-licensed. Is this a valid way around the GPL? By my reading of the FLE, the only clause that refers to downstream uses of derived works is section 2.e: "All works that are aggregated with the Program or the Derivative Work on a medium or volume of storage are not derivative works of the Program, Derivative Work or FOSS Application, and must reasonably be considered independent and separate works." This is the case for our product: it is not a derivative work of oursql, and in fact accesses it only via a plugin-driven interface. So is this a valid loophole?

    Read the article

  • Synchronising scripts / db / files from dev system to web server

    - by Spoonface
    I work as a freelance web dev, and up until now I have been FTPing my scripts / databases / static files to my web server manually, but I'm finding that this is too error prone. So I'm looking for an app to automate uploading new and updated scripts / files / databases / etc. I know a lot of independent devs use WinSCP or Unison, but I don't think those apps can sync databases. Does anyone have any other suggestions? It doesn't need to be anything overly feature-rich, as I'm not working within a team or across multiple operating systems or anything like that. I can purchase any reasonably priced license if necessary. My work is primarily for PHP / MySQL / Apache, developed on a Windows system and then uploaded to a Linux / Apache server. Thanks for your time!
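
    One common roll-your-own pipeline, as a minimal sketch (assuming SSH access to the server; all names illustrative, and on Windows this would run under Cygwin or a similar environment):

        # snapshot the database, sync the files, load the snapshot on the server
        mysqldump -u appuser -p myapp_db > db/myapp_db.sql
        rsync -avz --exclude '.git' ./ deploy@example-server:/var/www/myapp/
        ssh deploy@example-server 'mysql -u appuser -p myapp_db < /var/www/myapp/db/myapp_db.sql'

    Tools like WinSCP can script the file half of this, but the database half still needs a dump/restore step like the mysqldump above.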

    Read the article

  • Catching constraint violations in JPA 2.0.

    - by Dennetik
    Consider the following entity class, used with, for example, EclipseLink 2.0.2, where the link attribute is not the primary key but is unique nonetheless:

        @Entity
        public class Profile {
            @Id
            private Long id;

            @Column(unique = true)
            private String link;

            // Some more attributes and getter and setter methods
        }

    When I insert records with a duplicate value for the link attribute, EclipseLink does not throw an EntityExistsException, but a DatabaseException, with a message explaining that the unique constraint was violated. This doesn't seem very useful, as there is no simple, database-independent way to catch this exception. What would be the advised way to deal with this? A few things I have considered: checking the error code on the DatabaseException - but I fear that error code is the native error code for the database; or checking beforehand for the existence of a Profile with the specific value for link - which would obviously result in an enormous number of superfluous queries.
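
    One possible approach, sketched below under the assumption that the provider links the underlying java.sql.SQLException into the cause chain (EclipseLink's DatabaseException does carry it): check the SQLState rather than the vendor error code, since SQLStates in class 23 denote integrity constraint violations across databases. The helper and DAO names are illustrative.

        import java.sql.SQLException;
        import javax.persistence.EntityManager;

        public class ProfileDao {
            // Walks the cause chain for an SQLException whose SQLState is in
            // class 23 (integrity constraint violation, per the SQL standard).
            static boolean isConstraintViolation(Throwable t) {
                for (; t != null; t = t.getCause()) {
                    if (t instanceof SQLException) {
                        String state = ((SQLException) t).getSQLState();
                        if (state != null && state.startsWith("23")) return true;
                    }
                }
                return false;
            }

            public void save(EntityManager em, Profile profile) {
                try {
                    em.persist(profile);
                    em.flush(); // force the INSERT so the violation surfaces here
                } catch (RuntimeException e) { // DatabaseException or a wrapper
                    if (isConstraintViolation(e)) {
                        // duplicate link - report it as a business-level error
                    } else {
                        throw e;
                    }
                }
            }
        }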

    Read the article

  • Distributed, persistent cache using EHCache

    - by Richard
    I currently have a distributed cache using EHCache via RMI that works just fine. I was wondering if you can combine persistence with the caches to create a distributed, persistent cache. Alongside this, if the cache were persistent, would it load from the file store first and then bootstrap from the cache cluster? Basically, what I want is:

        1. Cache starts
        2. Cache loads persistent objects from the file store
        3. Cache joins the distributed cluster and bootstraps as normal

    The use case behind this is having two identical components running on independent machines, distributing the cache to avoid losing data in the event that one of the components fails. The persistence would guard against losing all data on the rare occasion that both components fail. Would moving to another distribution method (such as Terracotta) support this?
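
    For reference, EHCache's configuration can at least express this combination; a sketch of the cache element (attribute values illustrative, RMI peer discovery assumed to be configured already, and the startup ordering of disk load versus cluster bootstrap is exactly the part that needs testing):

        <cache name="sharedCache"
               maxElementsInMemory="10000"
               eternal="true"
               overflowToDisk="true"
               diskPersistent="true">
          <!-- replicate changes to the other node over RMI -->
          <cacheEventListenerFactory
              class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
              properties="replicateAsynchronously=true"/>
          <!-- on startup, pull the current contents from the cluster -->
          <bootstrapCacheLoaderFactory
              class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
              properties="bootstrapAsynchronously=true"/>
        </cache>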

    Read the article

  • Best way to handle input from a keyboard "wedge"

    - by Mykroft
    I'm writing a C# POS (point of sale) system that takes input from a keyboard-wedge magcard reader. This means that any data it reads off of a mag stripe is entered as if it were typed on the keyboard very quickly. Currently I'm handling this by attaching to the KeyPress event and looking for a series of very fast key presses that contain the card swipe sentinel characters. Is there a better way to deal with this sort of input? Edit: The device does simply present the data as keystrokes and doesn't interface through some other driver. Also, we use a wide range of these types of devices, so ideally a method should work independently of the specific model of wedge being used. However, if there is no other option I'll have to make do.
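
    A minimal sketch of the timing-plus-sentinel approach for comparison (the '%' and '?' sentinels and the 100 ms gap are illustrative assumptions; readers and track formats vary):

        using System;
        using System.Text;
        using System.Windows.Forms;

        public class WedgeCapture
        {
            private const char StartSentinel = '%';
            private const char EndSentinel = '?';
            private static readonly TimeSpan MaxKeyGap = TimeSpan.FromMilliseconds(100);

            private readonly StringBuilder buffer = new StringBuilder();
            private DateTime lastKey = DateTime.MinValue;

            public event Action<string> SwipeCaptured;

            // Wire this up to the form's KeyPress event.
            public void OnKeyPress(object sender, KeyPressEventArgs e)
            {
                DateTime now = DateTime.Now;
                if (now - lastKey > MaxKeyGap || (buffer.Length == 0 && e.KeyChar != StartSentinel))
                    buffer.Length = 0;      // too slow, or not a swipe: reset
                lastKey = now;

                if (buffer.Length > 0 || e.KeyChar == StartSentinel)
                {
                    buffer.Append(e.KeyChar);
                    if (e.KeyChar == EndSentinel)
                    {
                        if (SwipeCaptured != null) SwipeCaptured(buffer.ToString());
                        buffer.Length = 0;
                        e.Handled = true;   // swallow the terminator
                    }
                }
            }
        }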

    Read the article

  • How to generate one large dependency map for the whole project that builds with makefiles?

    - by Stan
    I have a gigantic project that is built using makefiles. Running make at the root of the project takes over 20 minutes when no files have changed (i.e. the time is spent just traversing the project and checking for updated files). I'd like to create a dependency map that will tell me which directories I need to run 'make' in, based on the file(s) changed. I already have a list of updated files from my version control system, and I'd like to skip the 20 minutes of traversal and get straight to the locations that do need to be recompiled. The project mixes several languages and custom tools, so this would ideally be language-independent (i.e. it would process only the makefiles to generate the dependencies). I'll settle for a C/C++-specific solution, too, as the majority of the project is in C++. The project is built on Linux.
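
    As a stopgap until such a map exists, one hedged shortcut is to visit only the directories that directly contain changed files (this sketch assumes git supplies the list; cross-directory dependents still need the map described above):

        # -q just asks whether anything is out of date; -C enters the directory
        for dir in $(git diff --name-only HEAD~1 | xargs -n1 dirname | sort -u); do
            [ -f "$dir/Makefile" ] || continue
            make -q -C "$dir" || make -C "$dir"
        done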

    Read the article

  • Can't find a WordPress image/photo gallery plugin

    - by mgroves
    I've been looking at WordPress plugins for photo galleries (so maybe this is for superuser.com), and I've been very frustrated so far. It seems like what I'd like to do would be a very common use case:

        Admin: be able to upload multiple pictures at a time
        Admin: be able to assign a "gallery" to those pictures as I upload them
        User: be able to go to a page with a (paged) list of all galleries
        User: be able to click on a gallery and view the images (again, probably paged) in that gallery
        User: be able to click on an image to get larger/largest sizes
        User: be able to leave comments on individual pictures (this is a "nice to have")

    The images/galleries could be totally independent of posts/pages, but it would be nice to be able to embed those images/galleries into posts/pages when necessary. Is there anything out there like this that I'm missing? I've tried a handful of plugins and none of them seem to be for a use case anywhere close to what I'm looking for. One of the reasons I'm trying to use WordPress is to reduce time spent coding everything I want.

    Read the article

  • Best implementation for MySQL replication with Rails 3?

    - by vonconrad
    We're looking at potentially setting up replication for our primary MySQL database, and while setting up the replication seems pretty straightforward, the application implementation seems a bit murkier. My first idea would be to set up a master-slave configuration with read/write splitting: all write queries (CREATE, INSERT, UPDATE) would go to the master, and all read queries (SELECT) to the slave. Having read up on it, it seems that there are essentially two options for how to implement this in our app:

        1. Use an independent middleware layer for all MySQL connections, such as MySQL Proxy or DBSlayer. However, the former is in alpha and the latter has limited documentation.
        2. Use a Ruby-based gem/plugin, such as Octopus, to achieve read/write splitting in the framework.

    If we wanted to go with a master-slave setup, what would you recommend moving forward? The other thought I've had was to use a master-master configuration, but I am unsure about the implementation of such a setup. Thoughts?
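
    If the Octopus route looks appealing, its model-level hook is small; a sketch, assuming a config/shards.yml that defines the slave and going by the gem's documented API:

        # Gemfile (Rails 3 era)
        gem 'ar-octopus', :require => 'octopus'

        # app/models/order.rb - SELECTs go to a slave, writes stay on master
        class Order < ActiveRecord::Base
          replicated_model
        end

        # or route a block of queries explicitly:
        Octopus.using(:slave1) do
          Order.count
        end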

    Read the article

  • Seeking References To MSVC 9.0's C++ Standards Compliance

    - by John Dibling
    I "know" (hopefully) that MSVC 9.0 implements C++ 2003 (ISO/IEC 14882:2003). I am looking for a reference to this fact, and I am also looking for any research that has been done into how compliant MSVC 9.0 is with that version of the Standard. I have searched for, and not been able to find, a specific reference from Microsoft that actually says something to the effect that MSVC implements C++ 2003. Some of the out-of-date documentation says things like "this release achieves roughly 98% compliance" (when referring to MSVC .NET 2003's conformance to C++ 1997). But I want a link to a document from MS that says "MSVC 9.0 implements blah," and another link to an independent group that has tested the conformance of MSVC 9.0. Do you know of any such links?

    Read the article

  • Does a .NET Framework installation interfere with existing VB6 runtime or COM installations?

    - by faredoon
    Consider this situation: a critical VB6 desktop application is running on a production box. There is a possibility of installing a .NET application that queries the same DB that the VB6 application queries, a SQL Server 2000 DB. The VB6 application also depends on third-party ActiveX controls (registered .ocx files). The concern is: will the .NET Framework installation replace any files or break the VB6 runtime in any way? In other words, can we safely assume that an installation of the .NET Framework is completely independent of any previous VB6 installations and will not interfere with the running application?

    Read the article

  • NoClassDefFoundError points to the wrong class.

    - by Sora
    I'm validating the installation of a program that consists of a few separate modules. They are not co-dependent. I have apple.jar and orange.jar; they are placed in the same folder and were developed in the same project, but run independently of each other. Running apple.jar goes fine, but orange.jar gives me a NoClassDefFoundError pointing to apple.jar:

        /usr/java/jre1.6.0_14/bin/java -jar validator.jar
        Exception in thread "main" java.lang.NoClassDefFoundError: orange/client/Apple
        Caused by: java.lang.ClassNotFoundException: orange.client.Apple
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
        Could not find the main class: validator/client/StormDataXMLGenerator. Program will exit.

    The manifest file lists Orange as the correct main class:

        Main-Class: orange/client/Orange

    Anybody know why it's giving me the NoClassDefFoundError? Thanks in advance!
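
    One thing worth checking: manifest attributes take dot-separated class names, and a jar that uses classes from a sibling jar must name it in Class-Path (a relative path resolved against the jar's own location), so a MANIFEST.MF along these lines may behave differently from the slash form quoted above:

        Main-Class: orange.client.Orange
        Class-Path: apple.jar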

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern to create independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (reading or writing) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store start and end positions. Reading from a view - which would translate to reading from the underlying stream, adjusted based on the start/end positions - would change the state of the underlying stream instance. So to read from a view instance, I could adjust the Position of the stream and read the chunks I need, but I cannot do that concurrently. Why is it designed in such a way, and what kind of pattern could I implement to create independent views over a single stream instance that would allow reading/writing independently (and concurrently)?
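
    The question doesn't name a platform, but the usual escape hatch is an API with positional reads, where the offset is a parameter rather than shared state. A minimal Java sketch of the idea: FileChannel.read(ByteBuffer, long) leaves the channel's own position untouched, so views built on it can read concurrently.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        // A read-only "view" over [start, end) of a shared FileChannel.
        public class ChannelView {
            private final FileChannel channel;
            private final long start, end;

            public ChannelView(FileChannel channel, long start, long end) {
                this.channel = channel;
                this.start = start;
                this.end = end;
            }

            // Reads at an offset within the view; never moves a shared cursor.
            public int read(ByteBuffer dst, long offsetInView) throws IOException {
                long abs = start + offsetInView;
                if (abs >= end) return -1;                  // past the view
                long room = end - abs;
                if (dst.remaining() > room)                 // clip to the view
                    dst.limit(dst.position() + (int) room);
                return channel.read(dst, abs);              // positional read
            }
        }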

    Read the article

  • How should I structure my git commits?

    - by int3
    I'm trying to contribute to open source software for the first time, but I'm pretty inexperienced with version control systems. In particular, right now I want to make a number of changes to different parts of the code, but I'm not sure if the maintainer will want to integrate all of them into the master repository. However, the changes I'll be making are independent, i.e. they affect different parts of the same file, or parts of different files. How should I go about making the changes? If I make a string of commits on the same branch, will the maintainer be able to pick and choose what he wants from the individual commits? E.g. can he patch in the changes I made in my second commit while ignoring the first one? Or should I make each change in a separate branch?
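
    For concreteness, a sketch of both workflows; a maintainer can take any single commit with cherry-pick, provided that commit doesn't textually depend on the ones before it, which is the main argument for topic branches:

        # one topic branch per independent change (branch names illustrative)
        git checkout -b fix-parser master
        # ...edit, git commit...
        git checkout -b improve-docs master
        # ...edit, git commit...

        # even from one shared branch, a maintainer can still take one commit:
        git cherry-pick <hash-of-second-commit>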

    Read the article

  • Database Modelling - Conceptually different entities but with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities: a MarketPriceDataSet, which has multiple ForwardPriceEntries, and a PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntries. The two child objects have near-identical fields:

        ForwardPriceEntry has:
            MarketPriceDataSetId (foreign key to parent table)
            StartDate
            EndDate
            SimulationItemId
            ForwardPrice

        PoolPriceForecastEntry has:
            PoolPriceForecastDataSetId (foreign key to parent table)
            StartDate
            EndDate
            SimulationItemId
            ForecastPoolPrice

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near-identical tables should be merged into one. The options I've thought of to model this are:

        1. Just keep them as two independent, separate tables.
        2. Put both sets in one table with an additional "type" field, and a parent_id equalling a foreign key to either parent table. This would sacrifice referential integrity checks.
        3. Put both sets in one table with an additional "type" field, and create a complicated sequence of joining tables to maintain referential integrity.

    What do you think I should do, and why?
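
    One middle ground, sketched as a possibility (column names taken from the entities above): keep option 1's two tables and their real foreign keys, and give consumers a single shape through a UNION ALL view:

        CREATE VIEW PriceEntries AS
        SELECT 'FORWARD' AS EntryType,
               MarketPriceDataSetId AS DataSetId,
               StartDate, EndDate, SimulationItemId,
               ForwardPrice AS Price
        FROM   ForwardPriceEntry
        UNION ALL
        SELECT 'POOL_FORECAST',
               PoolPriceForecastDataSetId,
               StartDate, EndDate, SimulationItemId,
               ForecastPoolPrice
        FROM   PoolPriceForecastEntry;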

    Read the article

  • Navigating back to "Home" from an area (MVC2)

    - by SlackerCoder
    I have a few areas in my application that are relatively independent (all navigated to from the master page). As of now, I am simply using the "default" MVC2 template (the one you get when you create a new MVC2 project). So the menu looks like this: HOME AREA1 AREA2 AREA3 AREA4 .... ABOUT. Now, when I first load up the page, I am on "HOME", and I can click on ABOUT without issue. I can navigate to any of the areas as well; however, once I navigate to an area page, I cannot get back to my home or about pages (404 not found). When I navigate to an area and then click on ABOUT, the address bar shows .../AreaX/Home/Home instead of /Home/Home as I would expect. I suspect it has something to do with my routing, but I'm not completely sure. I have added/changed nothing in the default routing (which is probably the issue!). Any thoughts?
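
    For reference, the commonly cited fix in MVC2 is to pin cross-area links to the root by passing an explicit empty area token, since links rendered inside an area otherwise resolve against that area's routes; a sketch for the master page (action and controller names illustrative):

        <%= Html.ActionLink("Home", "Index", "Home", new { area = "" }, null) %>
        <%= Html.ActionLink("About", "About", "Home", new { area = "" }, null) %>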

    Read the article

  • OpenGL: Textured Primitives + High Framerate

    - by James D
    Short version: What's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL? For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending. Because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts. My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's going to be compatible with a wide range of commodity hardware and in a way that's not going to break or itself get deprecated, five years down the line. Thanks!

    Read the article

  • Partial Git deployment strategy?

    - by MatW
    I need to set up a Kohana dev environment that allows me to make full use of shared module / system classes across separate applications, each application typically belonging to a different client. I use Git for source control, but am struggling to come up with a clean deployment method that will allow me to pull only those parts of the dev environment specific to a client / app down into that client's production environment (assuming that the client's production environment will have Git installed).

    Dev environment:

        kohana/
            applications/
                clientapp1/
                clientapp2/
            modules/
            public_html/
                clientapp1/
                clientapp2/
            system/
                3.0.1/
                3.0.5/

    Client 1's production environment:

        /
            applications/
                clientapp1/
            modules/
            public_html/
                clientapp1/
            system/
                3.0.5/

    Naturally, I want to have total control over each client "sub-repo" as if it were an independent repo (in terms of gitignore, etc). I have seen topics that cover Git's sparse checkout feature, but it seems like it may cause a few problems down the line from a maintenance point of view, and I don't like the idea of the entire repo's metadata existing in the client's production environment repo. As you can probably tell, I'm not exactly a Git power user, so any suggestions / wisdom are very welcome!
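
    One hedged alternative to sparse checkout: invert the layout so that each client app is its own repository and the shared code comes in as submodules; a client's box then only ever clones its own history (URLs illustrative):

        # in the client app's repo
        git submodule add git@devbox:repos/kohana-modules.git modules
        git submodule add git@devbox:repos/kohana-system.git system
        git commit -m "track shared code as submodules"

        # on the client's production server
        git clone --recursive git@devbox:repos/clientapp1.git /var/www/clientapp1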

    Read the article

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, but rather the interpretation of the effects of individual predictors. One way to spot collinearity is to take each predictor in turn as a dependent variable with the other predictors as independent variables, determine R2, and, if it's larger than .9 (or .95), consider that predictor redundant. This is one "method"... what about other approaches? Some of them are time consuming, like excluding predictors from the model and watching for b-coefficient changes - they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
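
    The per-predictor R2 check has a packaged equivalent: the variance inflation factor, VIF_j = 1 / (1 - R_j^2), so the .9 cutoff corresponds roughly to VIF = 10. A short R sketch (model and data names illustrative; assumes the car package is installed):

        library(car)                       # provides vif()
        fit <- lm(y ~ x1 + x2 + x3, data = dat)
        vif(fit)                           # values above ~10 flag redundancy
        kappa(model.matrix(fit))           # condition number: another screen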

    Read the article

  • How to add a meta tag to the page template from a WordPress plugin?

    - by detj
    I want to add a meta tag like this one: <meta name="key" content="value" /> to some of the pages in WordPress. I know I can add this to my template and it will show up. But the thing is, I am not allowed to even touch the template; it has to be totally template-independent. So I have to add the meta tag only by doing something in my plugin code. I have tried the wp_head action hook, but it is not working. Any ideas for a workaround, or anything else to get the meta tag inside the head tags of the pages dynamically?
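
    wp_head is indeed the right hook, so the snag is usually elsewhere; a minimal sketch of the plugin side (function name and tag values illustrative), with the caveat that the hook only fires if the theme's header.php actually calls wp_head():

        <?php
        /*
        Plugin Name: Meta Tag Example
        */
        function myplugin_add_meta() {
            if ( is_page() ) { // limit to pages; adjust the conditional as needed
                echo '<meta name="key" content="value" />' . "\n";
            }
        }
        add_action( 'wp_head', 'myplugin_add_meta' );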

    Read the article

  • Is Work Stealing always the most appropriate user-level thread scheduling algorithm?

    - by Il-Bhima
    I've been investigating different scheduling algorithms for a thread pool I am implementing. Due to the nature of the problem I am solving, I can assume that the tasks being run in parallel are independent and do not spawn any new tasks. The tasks can be of varying sizes. I went immediately for the most popular scheduling algorithm, work stealing, using lock-free deques for the local job queues, and I am relatively happy with this approach. However, I'm wondering whether there are any common cases where work stealing is not the best approach. For this particular problem I have a good estimate of the size of each individual task. Work stealing does not make use of this information, and I'm wondering if there is any scheduler which would give better load balancing than work stealing with this information (obviously with the same efficiency). NB. This question ties in with a previous question.
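
    Since task sizes are known up front, one classical alternative to weigh against work stealing is a static Longest-Processing-Time-first assignment: sort tasks by decreasing estimated cost and always hand the next one to the least-loaded worker. A sketch in Java; stealing can still be layered on top to absorb estimation error:

        import java.util.Arrays;
        import java.util.PriorityQueue;

        public class LptScheduler {
            // Returns a worker index for each task, given estimated costs.
            public static int[] assign(long[] costs, int workers) {
                Integer[] byCostDesc = new Integer[costs.length];
                for (int i = 0; i < costs.length; i++) byCostDesc[i] = i;
                Arrays.sort(byCostDesc, (a, b) -> Long.compare(costs[b], costs[a]));

                // min-heap of {current load, worker id}
                PriorityQueue<long[]> load = new PriorityQueue<>(
                        (x, y) -> Long.compare(x[0], y[0]));
                for (int w = 0; w < workers; w++) load.add(new long[] {0, w});

                int[] assignment = new int[costs.length];
                for (int task : byCostDesc) {
                    long[] least = load.poll();    // least-loaded worker so far
                    assignment[task] = (int) least[1];
                    least[0] += costs[task];
                    load.add(least);
                }
                return assignment;
            }
        }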

    Read the article

  • Creating managed file transfer software in Java

    - by Shekhar
    Hello, I have been asked to do a POC (proof of concept) on how we can provide a software solution that will be able to manage files. "Manage files" means it will be able to move files from source to destination servers. The client gave us a 4-page document detailing what sort of software they are looking for. They don't want to use existing commercial software; they want to build their own customizable solution. Has anybody worked on this type of project? Please provide your input on how I should approach this project. The software should be platform independent and should be built in Java.
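
    As one hedged building block for such an engine, a transfer over SFTP using the JSch library might look like the sketch below (host, credentials and paths are placeholders; a real product would add retries, integrity checks, scheduling and an audit trail):

        import com.jcraft.jsch.ChannelSftp;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        public class SftpMove {
            public static void upload(String host, String user, String password,
                                      String localPath, String remotePath) throws Exception {
                Session session = new JSch().getSession(user, host, 22);
                session.setPassword(password);
                session.setConfig("StrictHostKeyChecking", "no"); // demo only
                session.connect();
                try {
                    ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                    sftp.connect();
                    try {
                        sftp.put(localPath, remotePath); // stream the file up
                    } finally {
                        sftp.disconnect();
                    }
                } finally {
                    session.disconnect();
                }
            }
        }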

    Read the article

  • SQL query - how to construct

    - by Max Malmgren
    Hi. I am working to implement a data connection between my C# application and a SQL Server Express database. Please bear in mind I have not worked with SQL queries before. I have the following relevant tables: ArticlesCommon, ArticlesLocalized, CategoryCommon and CategoryLocalized, where ArticlesCommon holds language-independent information such as price, weight, etc. This is the statement for now:

        SELECT * FROM ArticlesCommon
        INNER JOIN ArticlesLocalized ON ArticlesCommon.ID = ArticlesLocalized.ID
        WHERE ArticlesLocalized.Language = @language
        ORDER BY ArticlesCommon.DateAdded

    ArticlesCommon contains a category ID for each row. Now, I want to use this to look up the localized information in CategoryLocalized and add it to the result, something like SELECT *, CategoryLocalized.Name AS CategoryName. If I have gotten my point across, is this doable? Thank you.
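
    It is doable; a sketch of the extended statement, assuming ArticlesCommon carries a CategoryID column and CategoryLocalized is keyed like ArticlesLocalized (category ID plus a Language column):

        SELECT ArticlesCommon.*, ArticlesLocalized.*,
               CategoryLocalized.Name AS CategoryName
        FROM ArticlesCommon
        INNER JOIN ArticlesLocalized
            ON ArticlesCommon.ID = ArticlesLocalized.ID
        INNER JOIN CategoryLocalized
            ON ArticlesCommon.CategoryID = CategoryLocalized.ID
           AND CategoryLocalized.Language = @language
        WHERE ArticlesLocalized.Language = @language
        ORDER BY ArticlesCommon.DateAdded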

    Read the article

  • R error message about variable lengths

    - by Abraham
    I ran the following code in order to recode the variable:

        ## Independent Variable - Partisanship (ANES 2004)
        data04$V043114
        part <- data04$V043114
        attributes(part)
        summary(part)
        partb < part
        partb[part %in% levels(part)[4]] <- NA
        partb[part %in% levels(part)[5]] <- NA
        partb[part %in% levels(part)[6]] <- NA
        partb[part %in% levels(part)[7]] <- NA
        partb <- factor(partb)
        attributes(partb)
        summary(partb)
        table(partb)
        table(part, partb)
        cbind(part, partb)
        partisan041 <- partb
        partisan042 <- as.numeric(partb)
        summary(partisan041)
        summary(partisan042)

    Unfortunately, when I move on to run a logit model (using the Zelig package), I get an error message saying that the variable lengths differ for this variable:

        ## Regression Model - ANES 2004 ##
        anes04one <- zelig(trade041a ~ age042 + education042 + personal042 +
                           economy042 + partisan042 + employment042 + union042 +
                           home042 + market042 + race042 + income042 + gender042,
                           model="logit", data=data04)
        summary(anes04one)

        # Error in model.frame.default(formula = trade041a ~ age042 + education042 + :
        #   variable lengths differ (found for 'partisan042')
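
    Two things worth checking, sketched below. First, partb < part is a comparison, not an assignment (partb <- part), so partb may be a stale object of a different length left over from an earlier session. Second, keeping the recoded variable inside data04 guarantees every term in the formula is the same length (the shortened formula is only illustrative):

        data04$partb <- data04$V043114          # <- , not <
        data04$partb[data04$partb %in% levels(data04$partb)[4:7]] <- NA
        data04$partisan042 <- as.numeric(factor(data04$partb))

        # with the variable inside data04, zelig() sees consistent lengths
        anes04one <- zelig(trade041a ~ age042 + education042 + partisan042,
                           model = "logit", data = data04)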

    Read the article

  • How do people manage changes to common library files stored across multiple (Mercurial) repositories?

    - by mckoss
    This is perhaps not a question unique to Mercurial, but that's the SCM I've been using most lately. I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new project. The problem comes when I want to merge all the changes I made in my latest project back into a "master" copy of those shared library files. Since the files stored in disjoint repositories will have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects). I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (which is one of the reasons I'm using SVN less, as its merges require remembering when copies were made across branches). Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.
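
    Mercurial's subrepositories are one hedged fit here: the shared library keeps a single history of its own, each project pins a revision of it, and fixes made inside a project can be pushed straight back to the master copy. A sketch with illustrative paths:

        # inside a project, place a clone of the shared library at lib/
        hg clone /path/to/shared-lib lib
        echo "lib = /path/to/shared-lib" > .hgsub
        hg add .hgsub
        hg commit -m "track shared-lib as a subrepository"

        # later: commit library fixes inside lib/ and push them to the master
        cd lib
        hg commit -m "fix in shared code"
        hg push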

    Read the article
