Search Results

Search found 5568 results on 223 pages for 'dependency analysis'.


  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5,6,12,87,9,NA,43,67), na.rm=T)

    But if you want to deal with NAs before the function call, you need to do something like this:

        vx = vx[!is.na(vx)]              # remove each 'NA' from a vector
        vx = ifelse(is.na(vx), 0, vx)    # replace each 'NA' in a vector with a '0'
        dfx = dfx[complete.cases(dfx),]  # remove each row of a data frame that contains an 'NA'

    All of these permanently remove the 'NA' values or the rows containing them. Sometimes this isn't quite what you want, though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column whose rows went missing because of a prior call to 'complete.cases', even though that column itself has no 'NA' values). To be as clear as possible about what I'm looking for: Python/NumPy has a 'masked array' class with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous facility in R?
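
    Base R has no direct equivalent of NumPy's masked arrays, so one workaround--a minimal sketch, assuming 'vx' and 'dfx' hold the data--is to carry a logical mask alongside the object and subset through it only at the point of each call, leaving the original untouched:

        vx <- c(5, 6, 12, 87, 9, NA, 43, 67)
        mask <- !is.na(vx)          # TRUE where the value is usable
        mean(vx[mask])              # statistic computed on the masked view
        vx                          # original vector unchanged, NAs still in place

        # same idea for a data frame: use complete.cases() as a mask, not a filter
        ok <- complete.cases(dfx)   # 'dfx' is assumed to exist with numeric columns
        colMeans(dfx[ok, ])         # column-wise statistic over the masked rows
        dfx                         # untouched; the "removed" rows are still there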

    Read the article

  • Can I use manifests to consume a COM server without specifying its version?

    - by sharptooth
    Two of our programs use the same COM server (also made by us) with the same class ids. Each program, when installing, copies the COM server files into its folder and regsvr32s the COM server. The problem is how to install the COM server so that the user can install either one or both of our programs into different folders, in any order, and likely with different versions. Clearly it's impossible without changing the class ids, and that's lots of hassle with configurations. Ideally we would like to use manifests for that and go without regsvr32. The problem is that every time I read about reg-free COM (for example, here) there's the version attribute in the assemblyIdentity. The version number would have to change every nightly build, and I totally don't like the idea of (automatically) adjusting it. I understand why specifying a dependency on a specific version is good, but it's completely useless in our scenario. Is there a way to write manifests for both the COM server and the consumer so that they don't specify versions and just work with whatever version of the file happens to be in the folder? Also, is there a way to restrict the search to the same folder as the consumer?
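
    A hedged sketch of the consumer side, assuming the usual reg-free COM layout (the assembly name is a placeholder): the version attribute is mandatory, but nothing forces it to track the DLL's file version, so keeping it at a fixed placeholder such as 1.0.0.0 in both this fragment and the COM server's own manifest sidesteps per-build adjustments, and the loader resolves the dependent assembly starting from the consumer's own folder.

        <!-- consumer application manifest (fragment) -->
        <dependency>
          <dependentAssembly>
            <!-- "MyComServer" and 1.0.0.0 are placeholders; the version only has to
                 match the one declared in MyComServer.manifest, not the DLL's file version -->
            <assemblyIdentity type="win32" name="MyComServer" version="1.0.0.0" />
          </dependentAssembly>
        </dependency>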

    Read the article

  • Generating easy-to-remember random identifiers

    - by Carl Seleborg
    Hi all, As all developers do, we constantly deal with some kind of identifiers as part of our daily work. Most of the time, it's about bugs or support tickets. Our software, upon detecting a bug, creates a package that has a name formatted from a timestamp and a version number, which is a cheap way of creating reasonably unique identifiers to avoid mixing packages up. Example: "Bug Report 20101214 174856 6.4b2". My brain just isn't that good at remembering numbers. What I would love to have is a simple way of generating alpha-numeric identifiers that are easy to remember. Examples would be "azil3", "ulmops", "fel2way", etc. I just made these up, but they are much easier to recognize when you see many of them at once. I know of algorithms that perform trigram analysis on text (say you feed them a whole book in German) and that can generate strings that look and feel like German words. This requires lots of data, though, and makes it slightly less suitable for embedding in an application just for this purpose. Do you know of anything else? Thanks! Carl
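
    One lightweight alternative to full trigram analysis--a minimal sketch, not a vetted scheme--is to draw alternating consonants and vowels from a seeded RNG, which yields word-like identifiers without any training text:

        import random

        def memorable_id(seed, length=6):
            # Alternate consonants and vowels so the result reads like a word.
            # 'seed' can be anything hashable; the same seed gives the same id.
            rng = random.Random(seed)
            consonants = "bdfglmnprstvz"
            vowels = "aeiou"
            return "".join(
                rng.choice(consonants if i % 2 == 0 else vowels)
                for i in range(length)
            )

        # usage: memorable_id("20101214 174856 6.4b2") -> a short pronounceable string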

    Read the article

  • How does CDI injection work in MDBs and @Scheduled beans?

    - by Nils-Petter Nilsen
    I'm working on a large Java EE 6 application that is deployed on JBoss 6 Final. My current tasks involve using @Inject consistently instead of @EJB, but I'm running into some problems on some types of beans, specifically @MessageDriven beans and beans with @Scheduled methods. What happens is that if I'm unlucky with the timing (for @Schedule) or if there are messages in the MDBs' queues at startup, instantiation of the beans will fail because the injected resources (which are EJBs themselves) are not bound yet. Because I use @Inject, I'm guessing that the EJB container considers my beans to be ready, since the container itself does not care about @Inject; it probably simply assumes that since there are no @EJB injections, the beans are ready for use. The injected CDI proxies will then fail because the resources to inject aren't actually bound yet. Tiny example:

        @Stateless
        @LocalBean
        public class MySupportingBean {
            public void doSomething() { ... }
        }

        @Singleton
        public class MyScheduledBean {
            @Inject
            private MySupportingBean supportingBean;

            @Schedule(second = "*/1", hour = "*", minute = "*", persistent = false)
            public void onTimeout() {
                supportingBean.doSomething();
            }
        }

    The above example will probably not fail often because there are only two beans, but the project I'm working on binds lots of EJBs, which will amplify the problem. But it might fail, because there is no guarantee that MySupportingBean is bound first, and if onTimeout is invoked before MySupportingBean is bound, then instantiation of MyScheduledBean will fail. If I used @EJB instead, MyScheduledBean wouldn't be bound until the dependency on MySupportingBean was satisfied. Note that the example will not fail in onTimeout itself, but when CDI attempts to inject MySupportingBean. I've read a lot of posts on different forums where many people argue that @Inject is always better. Generally, I agree, but how do they handle @Schedule or @MessageDriven combined with @Inject? In my experience, it comes down to dumb luck whether the beans will work or not in those cases, and the beans will fail arbitrarily, depending on which order the EJBs are deployed in and when @Schedule or onMessage are invoked.
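
    A minimal sketch of the fallback the post itself hints at: for the startup-critical injection points only, revert to @EJB so the container withholds the timer (or MDB delivery) until the dependency is actually bound, while everything else stays on @Inject:

        @Singleton
        public class MyScheduledBean {

            // @EJB (unlike @Inject) is tracked by the EJB container as a deployment
            // dependency, so this bean is not considered ready -- and the timer
            // cannot fire -- before MySupportingBean has been bound.
            @EJB
            private MySupportingBean supportingBean;

            @Schedule(second = "*/1", hour = "*", minute = "*", persistent = false)
            public void onTimeout() {
                supportingBean.doSomething();
            }
        }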

    Read the article

  • Strange Matlab error: "??? Subscript indices must either be real positive integers or logicals"

    - by Roee Adler
    I have a function func that returns a vector a. I usually plot a and then perform further analysis on it. In a certain scenario, as soon as I try to plot a, I get a "??? Subscript indices must either be real positive integers or logicals" error. Take a look at the following console session to see the vector's behavior:

        K>> a
        a =
            5.7047    6.3529    6.4826    5.5750    4.1488    5.8343    5.3157    5.4454
        K>> plot(a)
        ??? Subscript indices must either be real positive integers or logicals.
        K>> for i=1:length(a); b(i) = a(i); end;
        K>> b
        b =
            5.7047    6.3529    6.4826    5.5750    4.1488    5.8343    5.3157    5.4454
        K>> plot(b)
        ??? Subscript indices must either be real positive integers or logicals.

    The scenario where this happens is when I call func from within another function (call it outer_func) and return the result directly as outer_func's result. When debugging inside outer_func, I can plot a properly, but outside the scope of outer_func, its result shows the above behavior. What can cause this? Where do I start?
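
    This exact message very often means something named plot is shadowing the built-in function in the scope where the call fails, so plot(a) is interpreted as indexing with non-integer values; a quick diagnostic, assuming that hypothesis, is:

        % List everything called 'plot' visible from the failing scope; a stray
        % variable or file shadowing the built-in is the usual culprit.
        which plot -all

        % If a variable named 'plot' shows up in the workspace, remove it and retry:
        clear plot
        plot(a)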

    Read the article

  • Advanced All In One .NET Framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from users and save it to a database across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

        .NET / C#, Windows Forms
        unit testing, continuous integration
        screens with lists, combo boxes, text boxes, add, delete, save, cancel that are easy to update when you add a property to your classes or a field to your database
        auto-completion on controls to help the user find their way
        use of an ORM like NHibernate
        easy multithreading and display of wait screens for the user
        easy undo/redo
        tabbed child windows
        search forms
        ability to grant access to some functionalities according to user profiles
        MVP/MVVM or whatever design patterns
        either some code generation from the database to C# classes, or generation of the database schema from C# classes
        some kind of database versioning/upgrade to easily update the database when I release patches to the application once it is in production
        automatic control resizing
        code metrics analysis
        some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
        a code documentation generator
        ...

    Any ideas? I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. Would splitting the requirements across 3 or 4 existing open source frameworks be possible? Do you have any suggestions to add to the list before starting? What open source tools would you use to achieve this?

    Read the article

  • Testing for interface implementation in WCF/SOA

    - by rabidpebble
    I have a reporting service that implements a number of reports. Each report requires certain parameters. Groups of logically related parameters are placed in an interface, which the report then implements:

        [ServiceContract]
        [ServiceKnownType(typeof(ExampleReport))]
        public interface IService1
        {
            [OperationContract]
            void Process(IReport report);
        }

        public interface IReport
        {
            string PrintedBy { get; set; }
        }

        public interface IApplicableDateRangeParameter
        {
            DateTime StartDate { get; set; }
            DateTime EndDate { get; set; }
        }

        [DataContract]
        public abstract class Report : IReport
        {
            [DataMember]
            public string PrintedBy { get; set; }
        }

        [DataContract]
        public class ExampleReport : Report, IApplicableDateRangeParameter
        {
            [DataMember]
            public DateTime StartDate { get; set; }

            [DataMember]
            public DateTime EndDate { get; set; }
        }

    The problem is that the WCF DataContractSerializer does not expose these interfaces in my client library, so I can't write the generic report-generating front end that I plan to. Can WCF expose these interfaces, or is this a limitation of the serializer? If the latter, what is the canonical approach to this OO pattern? I've looked into NetDataContractSerializer, but it doesn't seem to be an officially supported implementation (which means it's not an option in my project). Currently I've resigned myself to including the interfaces in a library shared between the service and the client application, but this seems like an unnecessary extra dependency to me. Surely there is a more straightforward way to do this? I was under the impression that WCF was supposed to replace .NET remoting, and checking whether an object implements an interface seems like one of the most basic features required of a remoting interface.

    Read the article

  • Loading a native library for an external package in Eclipse not working. Is it a bug?

    - by TacB0sS
    I was about to report a bug to Eclipse, but I thought I would give this a chance here first. If I add an external package, the application cannot find the referenced native library, except in the case described below. If my workspace consists of a single project and I import an external package 'EX_package.jar' from a folder outside of the project folder, I can assign a folder as the native library location via: hover over the package - right click - Properties - Native Library - enter your folder. This does not work: at runtime the application does not load the library, and System.mapLibraryName(Path) also does not work. Furthermore, if I create a User Library, add the package to it and define a folder for the native library, it still does not work. If it works for you, then I have a major bug, because it does not work on my computer. I tested this in every combination I could think of, including adding the path to the Windows PATH variable, and in so many other ways I can't even start to remember; nothing worked. I played with this for hours and had a colleague try to assist me, but we both came up empty. Furthermore, if I have a main project that depends on a few other projects in my workspace and they all need to use the same 'EX_package.jar', I MUST supply a hard copy to each of them: it will ONLY (I can't stress the ONLY-ness enough, I got freaked out by this) work if I have a hard copy of the package in ALL of the project folders that the main project depends on, and ONLY if I configure the native path in each of them! This also didn't do the trick. Please tell me there is a solution to this; it drives me nuts... Thanks, Adam Zehavi.
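
    As a workaround sketch while the per-JAR setting misbehaves (the path below is a placeholder): the Native Library Location ultimately just feeds java.library.path for the launch, so setting it explicitly in the run configuration's VM arguments and loading the library by name takes the per-project bookkeeping out of the picture:

        // Run Configuration -> Arguments -> VM arguments (placeholder path):
        //   -Djava.library.path=C:\libs\native

        // In code, load the library by its platform-independent name once, at startup:
        static {
            System.loadLibrary("mylib");   // resolves mylib.dll / libmylib.so via java.library.path
        }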

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions of relatively high density, and relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid:

        Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
        Set individual cells or blit small regions (the player is making a move)
        Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
        Find some regions with approximately a given density (player spawning location)
        Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching)
        Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
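
    A minimal relational sketch of the "only store nonempty cells" idea (table and column names are illustrative; the parameter markers stand in for whatever your driver uses):

        -- one row per nonempty cell; the composite primary key doubles as the lookup index
        CREATE TABLE cells (
            x    INTEGER NOT NULL,
            y    INTEGER NOT NULL,
            data TEXT,                 -- whatever the cell contains, serialized
            PRIMARY KEY (x, y)
        );

        -- a player's neighborhood: small rectangular region query
        SELECT x, y, data
        FROM cells
        WHERE x BETWEEN ? AND ?
          AND y BETWEEN ? AND ?;

        -- rough density of a larger region (for spawning or region previews)
        SELECT COUNT(*)
        FROM cells
        WHERE x BETWEEN ? AND ?
          AND y BETWEEN ? AND ?;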

    Read the article

  • Generic dataset handling library

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one that can most certainly be used on any kind of dataset containing a mixture of categorical (A,B,C,...), continuous (1.2, 3, 881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should go, and the structure of the data. By structure of the data I mean which variable is in which place and its name/type. And this is where I need some enlightenment: I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like that. The solutions I can think of are:

        Create a configuration file in XML or similar, with a prespecified format.
        Pass the information during initialization of the module.
        Use the first row of the data as headers and try to guess the types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.

    Read the article

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that: (1) the residuals don't systematically covary with the predictor, and (2) the residuals are homoscedastic with respect to each predictor in the model. On point (2), my textbook has this to say: "Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals.... In the present case {their example}, the two lines {mean + 1 sd and mean - 1 sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X." (p. 131) How can I modify loess lines? I know how to generate a scatterplot with a 0-line:

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)   # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1 sd" AND "mean - 1 sd" lines would be superimposed? Is that a weird/complex question to be asking?
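
    A sketch of one way to get the two extra lines (assuming the fortify()-style column names used above): shift the residuals by plus and minus one standard deviation and ask for a loess smooth of each shifted series on the same plot:

        library(ggplot2)
        mod <- fortify(lm(speed ~ dist, data = cars))
        s <- sd(mod$.resid)

        ggplot(mod, aes(x = dist, y = .resid)) +
          geom_point() +
          geom_smooth(se = FALSE) +                        # loess line at the residual mean (0-line)
          geom_smooth(aes(y = .resid + s), se = FALSE) +   # mean + 1 sd
          geom_smooth(aes(y = .resid - s), se = FALSE)     # mean - 1 sd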

    Read the article

  • Injecting Dependencies into Domain Model classes with Nhibernate (ASP.NET MVC + IOC)

    - by Sunday Ironfoot
    I'm building an ASP.NET MVC application that uses a DDD (Domain-Driven Design) approach with database access handled by NHibernate. I have a domain model class (Administrator) that I want to inject a dependency into via an IoC container such as Castle Windsor, something like this:

        public class Administrator
        {
            public virtual int Id { get; set; }
            //.. snip ..//
            public virtual string HashedPassword { get; protected set; }

            public void SetPassword(string plainTextPassword)
            {
                IHashingService hasher = IocContainer.Resolve<IHashingService>();
                this.HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

    I basically want to inject IHashingService for the SetPassword method without calling the IoC container directly (because that is supposed to be an IoC anti-pattern), but I'm not sure how to go about doing it. My Administrator object either gets instantiated via new Administrator(); or it gets loaded via NHibernate, so how would I inject the IHashingService into the Administrator class? On second thought, am I going about this the right way? I was hoping to avoid having my codebase littered with

        currentAdmin.Password = HashUtils.Hash(password, Algorithm.Sha512);

    and instead get the domain model itself to take care of hashing and neatly encapsulate it away. I can envisage another developer accidentally choosing the wrong algorithm and having some passwords as SHA-512 and some as MD5, some with one salt and some with a different salt, etc. Instead, if developers are writing

        currentAdmin.SetPassword(password);

    then that would hide those details away and take care of the problems listed above, would it not?
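
    One common way to keep the entity ignorant of the container--a sketch, not the only option--is method injection ("double dispatch"): the caller, typically an application-layer service that the container does wire up, passes the hashing service into the domain method:

        public class Administrator
        {
            public virtual string HashedPassword { get; protected set; }

            // The entity still owns the rule ("passwords are stored hashed"),
            // but never resolves anything from a container itself.
            public virtual void SetPassword(string plainTextPassword, IHashingService hasher)
            {
                HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

        // In an application service, where constructor injection is natural,
        // _hashingService is supplied by the container (Castle Windsor here):
        //   currentAdmin.SetPassword(password, _hashingService);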

    Read the article

  • Deterministic and non uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out; it may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed? Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform character distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter or remember a long text each time you want to decrypt. So I was wondering if it was possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this? EDIT: To expand on the "strength" of varying bit lengths: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values

        1110100 1100101 1110011 1110100

    Then say my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or anything like that on this.
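
    In Python the standard library already gives you the deterministic part; a minimal sketch (the letter set and weights are arbitrary choices, not a vetted design) that yields the same skewed text for the same seed every time:

        import random

        def seeded_text(seed, length=10000):
            # random.Random(seed) is deterministic: the same seed reproduces the
            # same sequence on every run, which is what the Huffman step needs.
            rng = random.Random(seed)
            letters = "etaoinshrdlcumwfgypbvkjxqz"              # skewed on purpose
            weights = list(range(len(letters), 0, -1))           # decreasing -> non-uniform
            return "".join(rng.choices(letters, weights=weights, k=length))

        sample = seeded_text("my password seed")
        assert sample == seeded_text("my password seed")         # reproducible from the seed alone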

    Read the article

  • html body inside if condition

    - by gautam
    Hi, I have two different bodies for different conditions. Can I do it like this?

        <% if(yes) { %>
        <body onload="JavaScript:timedRefresh(120000);">
        <% } else { %>
        <body>
        <% } %>

    Please help me. Thanks in advance. I want to compare my title in the if condition; how can I do that? Currently I am doing it like this:

        <head>
          <title>
            <tiles:getAsString name="title"/>
          </title>
        </head>
        <% if(%><tiles:getAsString name="title"/><%.equals("Trade Confirmation Analysis Screen")) %>
        <body onload="JavaScript:timedRefresh(120000);">
        <% } else { %>
        <body>
        <% } %>

    It's giving me an error: "Unable to compile class for JSP".
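
    A tag's output can't be spliced into the middle of a scriptlet expression like that; one workaround sketch (assuming the JSTL core taglib is available alongside Tiles) is to capture the attribute into a variable first and branch on the variable:

        <%-- capture the tiles attribute's output into a page-scoped variable --%>
        <c:set var="pageTitle"><tiles:getAsString name="title"/></c:set>

        <c:choose>
          <c:when test="${pageTitle == 'Trade Confirmation Analysis Screen'}">
            <body onload="timedRefresh(120000);">
          </c:when>
          <c:otherwise>
            <body>
          </c:otherwise>
        </c:choose>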

    Read the article

  • writing into csv file

    - by arin
    I receive alarms as text in the following form:

        NE=JKHGDUWJ3 Alarm=Data Center STALZ AC Failure Occurrence=2012/4/3 22:18:19 GMT+02:00 Clearance=Details=Cabinet No.=0, Subrack No.=40, Slot No.=0, Port No.=5, Board Type=EDS, Site No.=49, Site Type=HSJYG500 GSM, Site Name=Google1 .

    I need to write them into a CSV file so that later I can perform some analysis. I came up with this:

        namespace AlarmSaver
        {
            public partial class Form1 : Form
            {
                string filesName = "c:\\alarms.csv";

                public Form1()
                {
                    InitializeComponent();
                }

                private void buttonSave_Click(object sender, EventArgs e)
                {
                    if (!File.Exists(filesName))
                    {
                        string header = "NE,Alarm,Occurrence,Clearance,Details";
                        File.AppendAllText(filesName, header + Environment.NewLine);
                    }

                    StringBuilder sb = new StringBuilder();
                    string line = textBoxAlarm.Text;
                    int index = line.IndexOf(" ");
                    while (index > 0)
                    {
                        string token = line.Substring(0, index).Trim();
                        line = line.Remove(0, index + 2);

                        string[] values = token.Split('=');
                        if (values.Length == 2)
                        {
                            sb.Append(values[1] + ",");
                        }
                        else
                        {
                            if (values.Length % 2 == 0)
                            {
                                string v = token.Remove(0, values[0].Length + 1).Replace(',', ';');
                                sb.Append(v + ",");
                            }
                            else
                            {
                                sb.Append("********" + ",");
                                string v = token.Remove(0, values[0].Length + 1 + values[1].Length + 1).Replace(',', ';');
                                sb.Append(v + ",");
                            }
                        }
                        index = line.IndexOf(" ");
                    }
                    File.AppendAllText(filesName, sb.ToString() + Environment.NewLine);
                }
            }
        }

    The results are as I want, except when I reach the part

        Details=Cabinet No.=0, Subrack No.=40, Slot No.=0, Port No.=5, Board Type=KDL, Site No.=49, Site Type=JDKJH99 GSM, Site Name=Google1 .

    I couldn't split that into separate fields. I want each element in Details to end up in its own column. It's really annoying :-) Please help, thank you in advance.
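
    A sketch of splitting just the Details block into its own columns (requires using System.Linq; the sample string is taken from the alarm above): split on the ", " between fields first, then on the first '=' of each field:

        using System.Linq;

        string details = "Cabinet No.=0, Subrack No.=40, Slot No.=0, Port No.=5, "
                       + "Board Type=KDL, Site No.=49, Site Type=JDKJH99 GSM, Site Name=Google1";

        var fields = details
            .Split(new[] { ", " }, StringSplitOptions.RemoveEmptyEntries)
            .Select(part => part.Split(new[] { '=' }, 2))          // split on the first '=' only
            .ToDictionary(kv => kv[0].Trim(),
                          kv => kv.Length > 1 ? kv[1].Trim() : "");

        // fields["Site Name"] is "Google1"; join fields.Values with ',' to append
        // one column per Details element to the CSV row.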

    Read the article

  • DLL Exports: not all my functions are exported

    - by carmellose
    I'm trying to create a Windows DLL which exports a number of functions; however, all of my functions are exported except one! I can't figure it out. The declaration I use is this simple one:

        __declspec(dllexport) void myfunction();

    It works for all my functions except one. I've looked inside Dependency Walker and there they all are, except one. How can that be? What would be the cause of that? I'm stuck. Edit: to be more precise, here is the function in the .h:

        namespace my { namespace great { namespace namespaaace {

            __declspec(dllexport) void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[]);

        }}}

    And in the .cpp file it goes like this:

        namespace my { namespace great { namespace namespaaace {

            namespace {
                void dump_mtx(std::ostream& ostr, const double *mtx, int rows, int cols, const char *ind = 0)
                {
                    /* some random code there, nothing special, no statics whatsoever */
                }
            } // end anonymous namespace here

            // dump the problem specification into a file
            void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[])
            {
                std::ofstream fout;
                fout.open(filename, std::ios::trunc);
                /* implementation there */
                dump_mtx(fout, Q, nx, nx);
            }

        }}}

    Thanks
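
    Independent of Dependency Walker, a quick way to see what actually made it into the export table (the DLL name is a placeholder) is the dumpbin tool that ships with Visual Studio:

        rem From a Visual Studio command prompt:
        dumpbin /exports mylibrary.dll

        rem C++ exports appear under decorated (mangled) names; comparing the decorated
        rem name of the missing function against the one generated from the header can
        rem reveal a signature mismatch between the declaration that carries
        rem __declspec(dllexport) and the definition that was actually compiled.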

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have the generic class:

        class SomeClass<T> { }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    That's OK, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is also accepted. Why is it correct? I thought type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

        parse: Reads a set of *.java source files and maps the resulting token sequence into AST nodes.
        enter: Enters symbols for the definitions into the symbol table.
        process annotations: If requested, processes annotations found in the specified compilation units.
        attribute: Attributes the syntax trees. This step includes name resolution, type checking and constant folding.
        flow: Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
        desugar: Rewrites the AST and translates away some syntactic sugar.
        generate: Generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in pass 6 (desugar), after type checking, which happens in pass 4 (attribute). I'm confused.
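
    For what it illustrates: assigning a raw type to a parameterized type is permitted by the language as an "unchecked conversion", kept for compatibility with pre-generics code, but the compiler flags it rather than silently trusting it; a minimal sketch:

        // javac -Xlint:unchecked Demo.java
        class SomeClass<T> { }

        public class Demo {
            public static void main(String[] args) {
                SomeClass raw = new SomeClass<String>();   // fine: the raw type is a supertype
                SomeClass<String> s = new SomeClass();     // compiles, but with an
                                                           // "unchecked conversion" warning
            }
        }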

    Read the article

  • Refreshing WEB-INF/lib in Google App Engine (with Eclipse)

    - by Adrian Petrescu
    Hi, I've created a new Google App Engine project within Eclipse. I copied several JARs that I need for my application into the WEB-INF/lib directory, and add them to the build path. I make some random calls to these JARs from within the handler, deploy, and everything works fine. However, if I then change one of the JARs outside the project, and copy the new version to WEB-INF/lib (with the same name) and re-deploy, it doesn't seem to be sending the new JAR; everything is still linking to the old one even though it's not even in my WEB-INF/lib anymore. I'm guessing it's being cached by the server or Eclipse is not even realizing something has changed in order to upload the new version. If I just create a new project with the new JAR, everything is fine again (until I have to make another change...) but of course I don't want to have to create a new project for every change to a dependency I make. My question is, how can I make GAE re-upload all the JARs I have from within Eclipse? Thanks in advance, guys :) -Adrian

    Read the article

  • Which to use, XMP or RDF?

    - by zotty
    What's the difference between RDF and XMP? From what I can tell, XMP is derived from RDF... so what does it offer that RDF doesn't? My particular situation is this: I've got some images which need tagging with details of how an experiment was performed, and what sort of data analysis has been performed on the images. A colleague of mine is pushing for XMP, but he's thinking of the images as photos - they're not really, they're just bits of data. From what I've seen (mainly by opening images in Notepad++), the XMP data looks very similar to RDF - even going so far as to use RDF in the tag names (e.g. <rdf:Seq>). I'd like this data to be usable by other people who use similar instruments for similar experiments, so creating a mini standard (schema?) seems like the way to go. Apologies for the lack of fundamental understanding - I'm a Doctor, not a programmer! If it makes any difference, the language of choice will be C#. Edit for more information: First off, thanks for the excellent replies - thinking of XMP as a vocabulary for RDF makes things a lot clearer. The sort of data I'll be storing won't be available in any of the pre-defined sets. It'll detail experimental setups, locations and results. I think using RDF is the way to go.

    Read the article

  • How to hide labels in html

    - by user225269
    I have this code, which depends on this JavaScript from Dynamic Drive: http://dynamicdrive.com/dynamicindex16/formdependency.htm It's a form dependency manager, which shows or hides elements depending on what you choose in the forms that are revealed. I don't know why, but if I alter the code from this:

        <td>
          <label>ID Number<input type="checkbox" name="id" class="DEPENDS ON info BEING student"></label>
        </td>
        </tr>

    to this:

        <td>
          <input type="hidden" class="DEPENDS ON info BEING student">
          <label style="margin-bottom: 1em; padding-bottom: 1em; border-bottom: 3px silver groove;"></label>
        <tr>
          <td>
            <input type="checkbox" name="id" class="DEPENDS ON info BEING student"><label>ID Number</label>
          </td>
        </tr>

    (Note: the only change I made here is to move the label from the left side of the checkbox to the right side so that they look better.) But when I do this, the labels are visible even if I have not yet clicked the button that is supposed to reveal them (though without the checkboxes). Only when I click that button does the checkbox appear on the left side of the labels. The checkbox behaves correctly, but the label does not, and the JavaScript is working properly. What do I do so that the checkbox and label show and hide together?

    Read the article

  • How do I get started designing and implementing a script interface for my .NET application?

    - by Peter Mortensen
    How do I get started designing and implementing a script interface for my .NET application? There is VSTA (the .NET equivalent of VBA for COM), but as far as I understand I would have to pay a license fee for every installation of my application. It is an open source application, so this will not work. There is also, e.g., the embedding of interpreters (IronPython?), but I don't understand how this would allow exposing an "object model" (see below) to external (or internal) scripts. What is the scripting interface story in .NET? Is it somehow trivial in .NET to do this? Background: I once designed and implemented a fairly involved script interface for a Macintosh application for acquisition and analysis of data from a mass spectrometer (Mac OS, System 7), and later a COM interface for a Windows application. Both were designed with an "object model" and classes (that can have properties). These are overloaded words, but in a scripting-interface context an object model is essentially a containment hierarchy of objects of specific classes. Classes have properties and lists of contained objects. For example, like the COM interfaces exposed in Microsoft Office applications, where the application object can be used to add to its list of documents (with the side effect of creating the GUI representation of a document). External scripts can create new objects in a container and navigate through the content of the hierarchy at any given time. In the Macintosh case, scripts could be written in e.g. AppleScript or Frontier. On the Macintosh the implementation of a scripting interface was very complicated. Support for it in Metrowerks' C++ class library (the name escapes me right now) made it much simpler.
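
    On the "how does embedding an interpreter expose an object model" point, a minimal sketch using the IronPython hosting API (the application object and the script text are placeholders): you hand the scripting engine the root of your containment hierarchy as a named variable, and scripts navigate it from there:

        // References: IronPython and Microsoft.Scripting (the hosting assemblies)
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // 'app' becomes the root object visible to scripts -- the equivalent of the
        // Application object in an Office-style object model (placeholder object here).
        scope.SetVariable("app", myApplicationObject);

        engine.Execute(
            "doc = app.Documents.Add()\n" +          // hypothetical object-model calls
            "doc.Title = 'Created from a script'",
            scope);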

    Read the article

  • 64-bit Alternative for Microsoft Jet

    - by David Robison
    Microsoft has chosen not to release a 64-bit version of Jet, their database driver for Access. Does anyone know of a good alternative? Here are the specific features that Jet supports that I need:

        Multiple users can connect to the database over a network.
        Users can use Windows Explorer to copy the database while it is open without risking corruption. Access currently does this with enough reliability for what my customers need.
        Works well in C++ without requiring .NET.

    Alternatives I've considered that I do not think will work (though my understanding could be incorrect):

        SQLite: if multiple users connect to the database over a network, it will become corrupted.
        Firebird: copying a database that is in use can corrupt the original database.
        SQL Server: files in use are locked and cannot be copied.
        VistaDB: this appears to be .NET-specific.
        Compile in 32-bit and use WOW64: another dependency requires us to compile in 64-bit, even though we don't use any 64-bit functionality.

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions. For example, taking a small set of data and performing some extensive analysis on it to return a significant answer. They're great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that, by their very nature, they are more suited to finding analytical solutions than actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-Polish-notation version of Python. So I am just curious whether I have missed something in these languages or am just way off the mark in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or practical tasks that are best performed by functional languages?

    Read the article

  • How do I install the OpenSSL C++ library on Ubuntu?

    - by Daryl Spitzer
    I'm trying to build some code on Ubuntu 10.04 LTS that uses OpenSSL 1.0.0. When I run make, it invokes g++ with the "-lssl" option. The source includes:

        #include <openssl/bio.h>
        #include <openssl/buffer.h>
        #include <openssl/des.h>
        #include <openssl/evp.h>
        #include <openssl/pem.h>
        #include <openssl/rsa.h>

    I ran:

        $ sudo apt-get install openssl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        openssl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.

    But I guess the openssl package doesn't include the library. I get these errors on make:

        foo.cpp:21:25: error: openssl/bio.h: No such file or directory
        foo.cpp:22:28: error: openssl/buffer.h: No such file or directory
        foo.cpp:23:25: error: openssl/des.h: No such file or directory
        foo.cpp:24:25: error: openssl/evp.h: No such file or directory
        foo.cpp:25:25: error: openssl/pem.h: No such file or directory
        foo.cpp:26:25: error: openssl/rsa.h: No such file or directory

    How do I install the OpenSSL C++ library on Ubuntu 10.04 LTS? I did a man g++ and (under "Options for Linking") for the -l option it states: "The linker searches a standard list of directories for the library..." and "The directories searched include several standard system directories..." What are those standard system directories?
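
    For reference, on Debian/Ubuntu the headers and link-time libraries live in the development package rather than in the openssl package (which ships the command-line tool); a minimal sketch of the fix:

        # install the OpenSSL development headers and libraries
        sudo apt-get install libssl-dev

        # typical link line afterwards; -lcrypto is often needed alongside -lssl
        g++ foo.cpp -o foo -lssl -lcrypto

        # the "standard system directories" searched by g++/ld include /usr/include
        # for headers and /usr/lib (plus /usr/local/include and /usr/local/lib) for libraries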

    Read the article

  • how to ignore ivy revision number?

    - by user315228
    Guys, I have certain jar files without a revision number. But as rev is a mandatory attribute of an Ivy dependency, I am providing the revision attribute, and I have something like (-[revision]) in the URL resolver pattern. But it's taking the module filename as the revision instead of ignoring the revision attribute. I know it won't ignore the revision attribute, as it's not null. The following is the output that I get:

        default-cache: no cached resolved revision for perltools#perltools;latest.integration
        [ivy:retrieve] tried
        [ivy:retrieve] listing all in
        [ivy:retrieve] using privateRepo to list all in
        [ivy:retrieve] ApacheURLLister found URL=[httP://myrepo/ivyRepository/perltools/jars/perltools.jar].
        [ivy:retrieve] found 1 resources
        [ivy:retrieve] found revs: [perltools.jar]
        [ivy:retrieve] HTTP response status: 404 url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar
        [ivy:retrieve] CLIENT ERROR: Not Found url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar

    Can somebody please explain why it's taking module.ext as the revision when the revision I specified is latest.integration, and my repo doesn't have a revision in its layout at all -- it just has [http://myrepo/ivyRepository/perltools/jars//perltools.jar]? Can somebody please help me so that I can avoid the revision attribute?

    Read the article
