Search Results

Search found 14339 results on 574 pages for 'domain rename'.


  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom WebLogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log-on page for all the applications within the domain.

    When we access any secured URL by entering it in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE, an odd thing happens. If I click the bookmark, I see our log-on page; then, after I've successfully logged into the system, the basic-auth page is displayed, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent: one time out of five, IE will correctly redirect and not show the basic-auth window. Firefox and Opera redirect correctly every time. We've captured the response headers and compared the successes and failures; they are identical.

        final boolean isAuthenticated = authenticateUser(userName, password, req);

        // Send user on to the original URL
        if (isAuthenticated) {
            res.sendRedirect(targetURL);
            return;
        }

    As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documents.

        private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) {
            boolean results;
            try {
                ServletAuthentication.login(new CallbackHandler() {
                    @Override
                    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                        for (Callback callback : callbacks) {
                            if (callback instanceof NameCallback) {
                                NameCallback nameCallback = (NameCallback) callback;
                                nameCallback.setName(userName);
                            }
                            if (callback instanceof PasswordCallback) {
                                PasswordCallback passwordCallback = (PasswordCallback) callback;
                                passwordCallback.setPassword(password.toCharArray());
                            }
                        }
                    }
                }, request);
                results = true;
            } catch (LoginException e) {
                results = false;
            }
            return results;
        }

    I am asking the question here because I don't know whether the issue is with the WebLogic config or the code. If this question is more suited to Server Fault, please let me know and I will post there. It is odd that it works every time in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer were an option, but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments, and I can still reproduce the bug.
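    Since the filter is the common log-on page for the whole domain, one thing worth ruling out is whether the filter re-prompts even when the container already has a principal for the session. The sketch below only illustrates that check; the class name and login-page path are hypothetical, and nothing beyond the standard Servlet API is assumed.

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical common log-on filter: skip the login page when the container
        // already reports an authenticated principal, so a bookmarked URL that
        // arrives with a valid session is passed straight through.
        public class CommonLoginFilter implements Filter {

            @Override
            public void init(FilterConfig config) { }

            @Override
            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest req = (HttpServletRequest) request;
                HttpServletResponse res = (HttpServletResponse) response;

                if (req.getRemoteUser() != null) {
                    // Already authenticated: do not show the log-on page again.
                    chain.doFilter(request, response);
                    return;
                }
                // Otherwise send the user to the common log-on page (path is an assumption).
                res.sendRedirect(req.getContextPath() + "/login.jsp?target=" + req.getRequestURI());
            }

            @Override
            public void destroy() { }
        }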

    Read the article

  • How to use constraint programming for optimizing shopping baskets?

    - by tangens
    I have a list of items I want to buy. The items are offered by different shops at different prices, and the shops have individual delivery costs. I'm looking for an optimal shopping strategy (and a Java library supporting it) to purchase all of the items at a minimal total price.

    Example:

    Item1 is offered at Shop1 for $100 and at Shop2 for $111.
    Item2 is offered at Shop1 for $90 and at Shop2 for $85.
    Delivery cost of Shop1: $10 if the total order is < $150; $0 otherwise.
    Delivery cost of Shop2: $5 if the total order is < $50; $0 otherwise.

    If I buy Item1 and Item2 at Shop1, the total cost is $100 + $90 + $0 = $190.
    If I buy Item1 and Item2 at Shop2, the total cost is $111 + $85 + $0 = $196.
    If I buy Item1 at Shop1 and Item2 at Shop2, the total cost is $100 + $10 + $85 + $0 = $195.

    I get the minimal price if I order Item1 and Item2 at Shop1: $190.

    What I tried so far: I asked another question before that led me to the field of constraint programming. I had a look at cream and choco, but I did not figure out how to create a model to solve my problem.

                 | shop1 | shop2 | shop3 | ...
        -----------------------------------------
        item1    |  p11  |  p12  |  p13  |
        item2    |  p21  |  p22  |  p23  |
        .        |       |       |       |
        .        |       |       |       |
        -----------------------------------------
        shipping |  s1   |  s2   |  s3   |
        limit    |  l1   |  l2   |  l3   |
        -----------------------------------------
        total    |  t1   |  t2   |  t3   |
        -----------------------------------------

    My idea was to define these constraints:

    - each price p_xy is defined in the domain (0, c), where c is the price of the item in this shop
    - only one price in a line should be non-zero
    - if one or more items are bought from one shop and the sum of their prices is lower than the limit, then add the shipping cost to the total cost
    - shop total cost is the sum of the prices of all items in a shop
    - total cost is the sum of all shop totals

    The objective is "total cost", and I want to minimize it. In cream I wasn't able to express the "if then" constraint for the conditional shipping costs. In choco these constraints exist, but even for 5 items and 10 shops the program ran for 10 minutes without finding a solution.

    Question: How should I express my constraints to make this problem solvable for a constraint programming solver?
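    As a sanity check on the cost model above (not a constraint-programming formulation), here is a minimal, self-contained brute-force sketch that enumerates the assignments of items to shops for the two-item example; all prices, limits, and shipping values are taken directly from the example.

        // Brute-force check of the example: try every assignment of items to shops
        // and pick the cheapest total, applying each shop's conditional shipping cost.
        public class ShoppingBruteForce {
            public static void main(String[] args) {
                int[][] price = { {100, 111},   // Item1 at Shop1, Shop2
                                  { 90,  85} }; // Item2 at Shop1, Shop2
                int[] shipping = {10, 5};       // charged only below the limit
                int[] limit    = {150, 50};

                int items = price.length, shops = price[0].length;
                int best = Integer.MAX_VALUE;
                int[] bestAssign = null;

                // Each item independently goes to one shop: shops^items combinations.
                for (int mask = 0; mask < Math.pow(shops, items); mask++) {
                    int[] assign = new int[items];
                    int m = mask;
                    for (int i = 0; i < items; i++) { assign[i] = m % shops; m /= shops; }

                    int total = 0;
                    for (int s = 0; s < shops; s++) {
                        int shopTotal = 0;
                        for (int i = 0; i < items; i++)
                            if (assign[i] == s) shopTotal += price[i][s];
                        // Conditional shipping: only if something was bought and the limit is not reached.
                        if (shopTotal > 0 && shopTotal < limit[s]) shopTotal += shipping[s];
                        total += shopTotal;
                    }
                    if (total < best) { best = total; bestAssign = assign.clone(); }
                }
                System.out.println("best total = " + best);                          // prints 190
                System.out.println("assignment = " + java.util.Arrays.toString(bestAssign));
            }
        }

    In a CP model the same "shipping only below the limit" rule is exactly the reified if-then constraint the question struggles with; the brute force just makes it easy to verify any candidate model against the $190 answer.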

    Read the article

  • Looking for RESTful Suggestions In Porting ASP.NET to MVC.NET

    - by DaveDev
    I've been tasked with porting/refactoring a Web Application Platform that we have from ASP.NET to MVC.NET. Ideally I could use all the existing platform's configurations to determine the properties of the site that is presented.

    Is it RESTful to keep a SiteConfiguration object, which contains all of our various page configuration data, in the System.Web.Caching.Cache? There are a lot of settings that need to be loaded when the user accesses our site, so it's inefficient for each user to have to load the same settings every time they visit. Some of the data the SiteConfiguration object contains is shown below, and it determines what Master Page / site configuration / style / UserControls are available to the client:

        public string SiteTheme { get; set; }
        public string Region { private get; set; }
        public string DateFormat { get; set; }
        public string NumberFormat { get; set; }
        public int WrapperType { private get; set; }
        public string LabelFileName { get; set; }
        public LabelFile LabelFile { get; set; }

        // the following two are the heavy ones
        // PageConfiguration contains lots of configuration data for each panel on the page
        public IList<PageConfiguration> Pages { get; set; }

        // This contains all the configurations for the factsheets we produce
        public List<ConfiguredFactsheet> ConfiguredFactsheets { get; set; }

    I was thinking of having a URL structure like this: www.MySite1.com/PageTemplate/UserControl/

    The domain determines the SiteConfiguration object that is created, where MySite1.com is SiteId = 1 and MySite2.com is SiteId = 2 (and, in turn, the style, the configurations for the various pages, etc.). PageTemplate is the View that will be rendered and simply defines a layout for where I'm going to inject the UserControls.

    Can somebody please tell me if I'm completely missing the RESTful point here? I'd like to refactor the platform into MVC because it's better to work in, and I want to do it right, but with a minimum of reinventing-the-wheel because otherwise it won't get approval. Any suggestions otherwise? Thanks

    Read the article

  • Maven2 not compiling source

    - by Grep
    Hi, I am new to Maven2 and have been having an issue with the compiling of the source for a RAR. Maven currently copies all the information located at my sourceDirectory instead of compiling it, even though the console states that the source is being compiled. If I navigate to the sourceDirectory I find all my source files with no compiled class files. I have tried changing my outputDirectory to a different location, and when I inspect this directory I find that it only copied my source files instead of compiling them. I have added an example of my POM. Any advice would be appreciated, and please keep in mind I have only started learning/implementing Maven2 yesterday :)

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <artifactId>MyApp</artifactId>
          <groupId>.</groupId>
          <version>4</version>
          <packaging>rar</packaging>
          <name>MyApp rar</name>
          <build>
            <directory>target\${artifactId}-${version}</directory>
            <resources>
              <resource>
                <directory>..\gen-src</directory>
                <excludes>
                  <exclude>util.jar</exclude>
                </excludes>
              </resource>
              <resource>
                <directory>..\domain</directory>
              </resource>
            </resources>
            <sourceDirectory>target\${artifactId}-${version}</sourceDirectory>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-rar-plugin</artifactId>
                <configuration>
                  <workDirectory>target\${artifactId}-${version}\classes</workDirectory>
                  <outputDirectory>target</outputDirectory>
                  <finalName>MyApp${version}</finalName>
                  <raXmlFile>${workDirectory}\META-INF\ra.xml</raXmlFile>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </project>

    Read the article

  • Best practices managing JavaScript on a single-page app

    - by seanmonstar
    With a single-page app, where I change the hash and load and change only the content of the page, I'm trying to decide how to manage the JavaScript that each "page" might need. I've already got a History module monitoring the location hash, which could look like domain.com/#/company/about, and a Page class that will use XHR to get the content and insert it into the content area.

        function onHashChange(hash) {
            var skipCache = false;
            if (hash in noCacheList) {
                skipCache = true;
            }
            new Page(hash, skipCache).insert();
        }

        // Page.js
        var _pageCache = {};

        function Page(url, skipCache) {
            if (!skipCache && (url in _pageCache)) {
                return _pageCache[url];
            }
            this.url = url;
            this.load();
        }

    The cache should let pages that have already been loaded skip the XHR. I also store the content in a documentFragment, and then pull the current content out of the document when I insert the new Page, so the browser will only have to build the DOM for the fragment once. Skipping the cache could be desired if the page has time-sensitive data.

    Here's what I need help deciding on: it's very likely that any of the pages that get loaded will have some of their own JavaScript to control the page, say if the page uses tabs, needs a slide show, has some sort of animation, has an ajax form, or what-have-you.

    What exactly is the best way to go about loading that JavaScript into the page? Include the script tags in the documentFragment I get back from the XHR? What if I need to skip the cache and re-download the fragment? I feel the exact same JavaScript being called a second time might cause conflicts, like redeclaring the same variables.

    Would the better way be to attach the scripts to the head when grabbing the new Page? That would require the original page to know all the assets that every other page might need.

    And besides knowing the best way to include everything, won't I need to worry about memory management, and possible leaks from loading so many different JavaScript bits into a single page instance?

    Read the article

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi StackOverflow! I'm a CS student on well on my way to graduate. I have had a difficult time of finding relevant student jobs (they seems to be taken merely hours after the notice gets on the board) , so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS-project that I use that needs help. That's why I'm asking you. I don't have any deep experience, but I here are some of my biggest projects so far: BitTorrent-ish client in Python (a subset of BitTorrent) HTTP 1.1 webserver in Java Compiler from a subset of Java to run on JRE Flash-framework project to model an iPad look and feel (not to run actual iPad programs) complete with an API for programs. Complete MySQL database for a booking system, with departure and arrival times, so you could only book valid tickets (with a Java frontend). I know, Java and languages like AS3 and C# feels natural per se, Python, and have done a fair bit of hacking around in C, but I don't feel very comfortable with it. Mostly I'm afraid to make a fuckup because I have such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code because I see optimizations. I love algorithms and to make them run in the best possible time. I don't have any preferred domain to work in, but I wouldn't mind it to be graphics or math heavy. Ideally I'm looking for a project in C++ to learn the in's and out's of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own, not one to review all my code (I'm sure someone will to start with anyway), but to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return I can work on my own, I strive to program modular and maintainable code. Know how to read an API, use Google, StackOverflow and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

    Read the article

  • How to perform duplicate key validation using entlib (or DataAnnotations), MVC, and Repository pattern

    - by olivehour
    I have a set of ASP.NET 4 projects that culminate in an MVC (3 RC2) app. The solution uses Unity and EntLib Validation for cross-cutting dependency injection and validation. Both are working great for injecting repository and service layer implementations. However, I can't figure out how to do duplicate key validation. For example, when a user registers, we want to make sure they don't pick a UserID that someone else is already using. For this type of validation, the validating object must have a repository reference... or some other way to get an IQueryable / IEnumerable reference to check against other rows already in the DB. What I have is a UserMetadata class that has all of the property setters and getters for a user, along with all of the appropriate DataAnnotations and EntLib Validation attributes. There is also a UserEntity class implemented using EF4 POCO Entity Generator templates. The UserEntity depends on UserMetadata, because it has a MetadataTypeAttribute. I also have a UserViewModel class that has the same exact MetadataType attribute. This way, I can apply the same validation rules, via attributes, to both the entity and viewmodel. There are no concrete references to the Repository classes whatsoever. All repositories are injected using Unity. There is also a service layer that gets dependency injection. In the MVC project, service layer implementation classes are injected into the Controller classes (the controller classes only contain service layer interface references). Unity then injects the Repository implementations into the service layer classes (service classes also only contain interface references). I've experimented with the DataAnnotations CustomValidationAttribute in the metadata class. The problem with this is the validation method must be static, and the method cannot instantiate a repository implementation directly. My repository interface is IRepository, and I have only one single repository implementation class defined as EntityRepository for all domain objects. To instantiate a repository explicitly I would need to say new EntityRepository(), which would result in a circular dependency graph: UserMetadata [depends on] DuplicateUserIDValidator [depends on] UserEntity [depends on] UserMetadata. I've also tried creating a custom EntLib Validator along with a custom validation attribute. Here I don't have the same problem with a static method. I think I could get this to work if I could just figure out how to make Unity inject my EntityRepository into the validator class... which I can't. Right now, all of the validation code is in my Metadata class library, since that's where the custom validation attribute would go. Any ideas on how to perform validations that need to check against the current repository state? Can Unity be used to inject a dependency into a lower-layer class library?

    Read the article

  • PHP 'instanceof' failing with class constant

    - by Nathan Loding
    I'm working on a framework that I'm trying to type as strongly as I possibly can. (I'm working within PHP and taking some of the ideas that I like from C# and trying to utilize them within this framework.) I'm creating a Collection class that is a collection of domain entities/objects; it's kinda modeled after the List<T> object in .NET.

    I've run into an obstacle that is preventing me from typing this class. If I have a UserCollection, it should only allow User objects into it. If I have a PostCollection, it should only allow Post objects. All Collections in this framework need to have certain basic functions, such as add, remove, iterate. I created an interface, but found that I couldn't do the following:

        interface ICollection {
            public function add($obj);
        }

        class PostCollection implements ICollection {
            public function add(Post $obj) {}
        }

    This broke its compliance with the interface. But I can't have the interface strongly typed, because then all Collections are of the same type. So I attempted the following:

        interface ICollection {
            public function add($obj);
        }

        abstract class Collection implements ICollection {
            const type = 'null';
        }

        class PostCollection {
            const type = 'Post';

            public function add($obj) {
                if (!($obj instanceof self::type)) {
                    throw new UhOhException();
                }
            }
        }

    When I attempt to run this code, I get syntax error, unexpected T_STRING, expecting T_VARIABLE or '$' on the instanceof statement. A little research into the issue and it looks like the root of the cause is that $obj instanceof self is valid to test against the class; it appears that PHP doesn't process the entire self::type constant statement in the expression. Adding parentheses around the self::type constant threw an error regarding an unexpected '('.

    An obvious workaround is to not make the type a constant. The expression $obj instanceof $this->type works just fine (if $type is declared as a variable, of course). I'm hoping that there's a way to avoid that, as I'd like to define the value as a constant to avoid any possible change in the variable later.

    Any thoughts on how I can achieve this, or have I taken PHP to its limit in this regard? Is there a way of "escaping" or encapsulating self::type so that PHP won't die when processing it?

    Read the article

  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT based application framework. I have some ideas about them but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions in this regard. Here are my questions: I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iFrame postMessage. All these third party modules are going to use our SDK which is also GWT based. The problem arises that even though all the modules will be using the same codebase there is going to be a massive overheard in the amount of duplicate Javascript code (i.e. our common SDK code base which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of duplicate code but, also due to the fact that subsequent updates of the SDK would require the modules to be recompiled which is going to create a DLL hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way where I can have some static GWT code (i.e. the SDK) and the dynamic GWT module refers to it (even if lies on a different domain) and it all work happily? The other part of the problem lies in providing robust scripting front end to the SDK. At first it appears to be trivial since Javascript itself is a scripting language. However, I do not know how to load and call a piece of pure Javascript code at runtime? I am willing to put restrictions on the target Javascript (i.e. having a function main and unique namespace or something). Furthermore the Javascript will come as a string from a database and not as a full URL. If its doable in Javascript how does one get this right in GWT i.e. forcing the compiler to emit a certain function in the generated Javascript? This I believe can be lesser of a problem by having a stub Javascript with all the right requirements which just loads up a GWT generated Javascript. Is any of this possible at all? I generally hate to be this verbose but I hope to find a quick solution to the problem as its holding up my progress. I'd highly appreciate any comments, suggestions and experiences.
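    On the second part of the question (forcing the compiler to keep a callable entry point), GWT's JSNI can publish a named function onto the host page at module load; anything assigned inside the native block stays reachable after optimization. The sketch below is only an illustration: the package, class, and exported function names are made up, and it assumes the standard com.google.gwt.core.client.EntryPoint API rather than anything from the SDK described above.

        package com.example.sdk.client;  // hypothetical package

        import com.google.gwt.core.client.EntryPoint;

        public class SdkExports implements EntryPoint {

            @Override
            public void onModuleLoad() {
                exportInit();
            }

            // Publish a stable, named function on the host page so hand-written or
            // dynamically loaded JavaScript can call into the compiled GWT code.
            private static native void exportInit() /*-{
                $wnd.sdkMain = $entry(function (config) {
                    @com.example.sdk.client.SdkExports::start(Ljava/lang/String;)(config);
                });
            }-*/;

            private static void start(String config) {
                // Hypothetical: dispatch to whatever the third-party module registered.
            }
        }

    A stub script loaded from the database could then simply call sdkMain('...') once the shared, statically hosted GWT module has finished loading.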

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (eg. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created, have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some other, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be calld "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24 GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application with a few types of nodes (profiles) carrying 3-30 text/array properties, and 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements; a few queries use variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~1s-70ms for consecutive runs (it depends on the query), and about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the Neo4j console on my local machine many consecutive times (just pressing Ctrl+Enter for a few seconds), it has an almost constant execution time, but when I do the same on the server it gets exponentially slower, and I guess that is somehow related to my problem.

    Problem: Neo4j is very CPU greedy (for a 24-core machine that may not be an issue, but it is obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%. Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were set high, and it didn't work (no change at all).

    Question: Where should I dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

    Read the article

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is integer < 2^14. k0 & k1 are constant. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot. I already have came up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to lack of efficient instructions. My SIMD implementation uses 16bit fixed point arithmetic to evaluate a 3rd degree polynomial (I use least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rational behind this is the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed point approximation. I also software pipelined the loop to get ~1.25x speedup, so further code optimizations are probably unlikely. What I'm asking is if there's a more efficient approximation at a higher level? For example: Can this function be decomposed into functions with a limited domain like log2((2^x) * significand) = x + log2(significand) hence eliminating the need to deal with different ranges (table lookups). The main problem I think is adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it? Iterative method? don't think so because the Newton method for log[x] is already a complicated expression Exploiting locality of neighboring pixels? - if the range of the 8 inputs fall in the same approximation range, then I can look up a single coefficient, instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property hold 70% of the time, which doesn't seem to make this method competitive. Please, give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
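    A worked bit of algebra may make the "k1 kills the nice log properties" point concrete (this is only a restatement of the problem, with the question's constants treated symbolically):

        \log_{10}\!\left(x^{k_0} + k_1\right) = \log_{10} k_1 + \log_{10}\!\left(1 + \frac{x^{k_0}}{k_1}\right)

    and, writing x = 2^{e} m with m \in [1, 2),

        \log_{10}\!\left(x^{k_0} + k_1\right) \approx k_0\, e \,\log_{10} 2 + k_0 \log_{10} m \quad \text{only when } x^{k_0} \gg k_1 .

    So the clean "exponent plus log of significand" decomposition only holds where the k1 term is negligible; near zero, where the function deviates from Log[x], a separate low-range approximation (or the 1 + x^{k_0}/k_1 form above, evaluated log1p-style) is still needed.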

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get the frequency of the music played in the WAV file. For now, there will be one frequency in the WAV file, and I would like to find a library that is compatible with Android. I understand that I need to use FFT to get the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want?

    EDIT: I tried to use JTransforms to get the FFT of the WAV file, but always failed at getting the correct frequency of the file. Currently, the WAV file contains a sine curve of 440 Hz, music note A4. However, I got the result as 441. Then I tried to get the frequency of G4, and I got the result as 882 Hz, which is incorrect. The frequency of G4 is supposed to be 783 Hz. Could it be due to not enough samples? If yes, how many samples should I take?

        // DFT
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copying audio data to the fft data buffer, imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        for (int i = 0; i < fftData.length; i += 2) {
            // complex numbers -> vectors, so we compute the length of the vector,
            // which is sqrt(realpart^2 + imaginarypart^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            //fd.append(Double.toString(vlen));
            //fd.append(",");
            if (max_fftval < vlen) {
                // if this length is bigger than our stored biggest length
                max_fftval = vlen;
                max_i = i;
            }
        }
        //double dominantFreq = ((double)max_i / fftData.length) * sampleRate;
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out?

    EDIT2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use Harmonic Product Spectrum or autocorrelation algorithms?
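    For reference, the frequency resolution of the DFT peak above is sampleRate / numOfFrames, which is why a 440 Hz tone can land on a 441 Hz bin and why a strong overtone bin can win outright. As a rough, hedged sketch of the autocorrelation route mentioned in EDIT2, the method below is self-contained plain Java (no Android or JTransforms dependency assumed) and expects a mono buffer of doubles; it is an illustration, not a drop-in library.

        // Minimal autocorrelation pitch estimate for a single pitched tone.
        // Assumes `buffer` holds mono samples and `sampleRate` is in Hz.
        public static double estimatePitch(double[] buffer, double sampleRate) {
            int n = buffer.length;
            int minLag = Math.max(1, (int) (sampleRate / 2000.0)); // ignore pitches above ~2 kHz
            int maxLag = (int) (sampleRate / 50.0);                // ignore pitches below ~50 Hz

            double bestCorr = 0;
            int bestLag = -1;
            for (int lag = minLag; lag <= maxLag && lag < n; lag++) {
                double corr = 0;
                for (int i = 0; i + lag < n; i++) {
                    corr += buffer[i] * buffer[i + lag];
                }
                if (corr > bestCorr) {
                    bestCorr = corr;
                    bestLag = lag;
                }
            }
            return bestLag > 0 ? sampleRate / bestLag : -1; // -1 if nothing usable was found
        }

    Integer lags still quantize the estimate (44100 / 100 = 441 Hz again), so interpolating around the best lag, or simply using a longer analysis window, tightens the result; the advantage over the raw DFT peak is that overtones do not dominate the way they can in the magnitude spectrum.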

    Read the article

  • Hibernate MappingException Unknown entity: $Proxy2

    - by slynn1324
    I'm using Hibernate annotations and have a VERY basic data object:

        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        public class State implements Serializable {

            private static final long serialVersionUID = 1L;

            @Id
            private String stateCode;

            private String stateFullName;

            public String getStateCode() {
                return stateCode;
            }

            public void setStateCode(String stateCode) {
                this.stateCode = stateCode;
            }

            public String getStateFullName() {
                return stateFullName;
            }

            public void setStateFullName(String stateFullName) {
                this.stateFullName = stateFullName;
            }
        }

    and am trying to run the following test case:

        public void testCreateState() {
            Session s = HibernateUtil.getSessionFactory().getCurrentSession();
            Transaction t = s.beginTransaction();
            State state = new State();
            state.setStateCode("NE");
            state.setStateFullName("Nebraska");
            s.save(s);
            t.commit();
        }

    and get an org.hibernate.MappingException: Unknown entity: $Proxy2

        at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628)
        at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366)
        at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:121)
        ....

    I haven't been able to find anything referencing the $Proxy part of the error, and am at a loss. Any pointers to what I'm missing would be greatly appreciated.

    hibernate.cfg.xml:

        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="connection.url">jdbc:hsqldb:hsql://localhost/xdb</property>
        <property name="connection.username">sa</property>
        <property name="connection.password"></property>
        <property name="current_session_context_class">thread</property>
        <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
        <property name="show_sql">true</property>
        <property name="hbm2ddl.auto">update</property>
        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <mapping class="com.test.domain.State"/>

    in HibernateUtil.java:

        public static SessionFactory getSessionFactory(boolean testing) {
            if (sessionFactory == null) {
                try {
                    String configPath = HIBERNATE_CFG;
                    AnnotationConfiguration config = new AnnotationConfiguration();
                    config.configure(configPath);
                    sessionFactory = config.buildSessionFactory();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw new ExceptionInInitializerError(e);
                }
            }
            return sessionFactory;
        }

    Read the article

  • Creating AST for shared and local variables

    - by Rizwan Abbasi
    Here is my grammar:

        grammar simulator;

        options {
            language = Java;
            output = AST;
            ASTLabelType = CommonTree;
        }

        // imaginary tokens
        tokens {
            SHARED;
            LOCALS;
            BOOL;
            RANGE;
            ARRAY;
        }

        parse
            : declaration+
            ;

        declaration
            : variables
            ;

        variables
            : locals
            ;

        locals
            : (bool | range | array)
            ;

        bool
            : ID 'in' '[' ID ',' ID ']' ('init' ID)? -> ^(BOOL ID ID ID ID?)
            ;

        range
            : ID 'in' '[' INT '..' INT ']' ('init' INT)? -> ^(RANGE ID INT INT INT?)
            ;

        array
            : ID 'in' 'array' 'of' '[' INT '..' INT ']' ('init' INT)? -> ^(ARRAY ID INT INT INT?)
            ;

        ID : (('a'..'z' | 'A'..'Z' | '_')('a'..'z' | 'A'..'Z' | '0'..'9' | '_'))* ;

        INT : ('0'..'9')+ ;

        WHITESPACE : ('\t' | ' ' | '\r' | '\n' | '\u000C')+ {$channel = HIDDEN;} ;

    INPUT:

        flag in [down, up] init down
        pc in [0..7] init 0
        CA in array of [0..5] init 0

    AST: it is having a small problem. Variables (bool, range or array) can be of two abstract types:

    1. locals (each object will have its own variable)
    2. shared (think of static in Java, same for all objects)

    Now the requirements have changed. I want the user to input like this:

    NEW INPUT:

        domains:
            upDown [up,down]
            possibleStates [0-7]
            booleans [true,false]

        locals:
            pc in possibleStates init 0
            flag in upDown init down
            flag1 in upDown init down
            map in array of booleans init false

        shared:
            pcs in possibleStates init 0
            flag in upDown init down
            flag1 in upDown init down
            maps in array of booleans init false

    Again, all the variables can be of two types (of any domain specified):

    1. Local
    2. Shared

    In the domains section (upDown [up,down], possibleStates [0-7]), upDown, up, down and possibleStates are of type ID (ID is defined in my grammar above), and 0 and 7 are of type INT.

    Can anybody help me convert my current grammar to meet the new specifications?

    Read the article

  • Finding the most frequent subtrees in a collection of (parse) trees

    - by peter.murray.rust
    I have a collection of trees whose nodes are labelled (but not uniquely). Specifically, the trees are from a collection of parsed sentences (see http://en.wikipedia.org/wiki/Treebank). I wish to extract the most common subtrees from the collection - performance is not (yet) an issue. I'd be grateful for algorithms (ideally Java) or pointers to tools which do this for treebanks. Note that order of child nodes is important.

    EDIT @mjv: We are working in a limited domain (chemistry) which has a stylised language, so the variety of the trees is not huge - probably similar to children's readers. A simple tree for "the cat sat on the mat":

        <sentence>
          <nounPhrase>
            <article/>
            <noun/>
          </nounPhrase>
          <verbPhrase>
            <verb/>
            <prepositionPhrase>
              <preposition/>
              <nounPhrase>
                <article/>
                <noun/>
              </nounPhrase>
            </prepositionPhrase>
          </verbPhrase>
        </sentence>

    Here the sentence contains two identical part-of-speech subtrees (the actual tokens "cat" and "mat" are not important in matching), so the algorithm would need to detect this. Note that not all nounPhrases are identical - "the big black cat" could be:

        <nounPhrase>
          <article/>
          <adjective/>
          <adjective/>
          <noun/>
        </nounPhrase>

    The length of sentences will be longer - between 15 and 30 nodes. I would expect to get useful results from 1000 trees. If this does not take more than a day or so, that's acceptable. Obviously the shorter the tree the more frequent it is, so nounPhrase will be very common.

    EDIT: If this is to be solved by flattening the tree, then I think it would be related to Longest Common Substring, not Longest Common Subsequence. But note that I don't necessarily just want the longest - I want a list of all those long enough to be "interesting" (criterion yet to be decided).
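    One common starting point (a hedged sketch, not a treebank-specific tool): give every subtree a canonical string encoding of its label plus its ordered children, then count encodings across the collection; child order is preserved by construction. The Node class below is a stand-in for whatever tree type the parser actually produces.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Count how often each complete subtree shape occurs across a collection of
        // labelled, ordered trees by canonically serialising every subtree.
        class Node {
            final String label;
            final List<Node> children = new ArrayList<>();
            Node(String label) { this.label = label; }
        }

        public class SubtreeCounter {
            private final Map<String, Integer> counts = new HashMap<>();

            public void addTree(Node root) {
                encode(root);
            }

            // Post-order: encode children first, then this node; every call also
            // registers the encoding of the subtree rooted here.
            private String encode(Node n) {
                StringBuilder sb = new StringBuilder(n.label).append('(');
                for (Node child : n.children) {
                    sb.append(encode(child)).append(',');
                }
                String key = sb.append(')').toString();
                counts.merge(key, 1, Integer::sum);
                return key;
            }

            public Map<String, Integer> getCounts() { return counts; }
        }

    Note the limitation: this counts only complete subtrees rooted at each node; if "interesting" patterns should be allowed to ignore deeper structure (e.g. any nounPhrase regardless of its contents), an enumeration of induced subtrees is needed, which is a harder frequent-subtree-mining problem.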

    Read the article

  • Rails send mail with GMail

    - by Danny McClelland
    Hi everyone, I am on Rails 2.3.5 with the latest Ruby installed, and my application is running well, except for GMail emails. I am trying to set up my GMail connection, which has worked previously but now doesn't want to know. This is my code:

        # Be sure to restart your server when you modify this file

        # Uncomment below to force Rails into production mode when
        # you don't control web/app server and can't set it the proper way
        # ENV['RAILS_ENV'] ||= 'production'

        # Specifies gem version of Rails to use when vendor/rails is not present
        RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION

        # Bootstrap the Rails environment, frameworks, and default configuration
        require File.join(File.dirname(__FILE__), 'boot')

        Rails::Initializer.run do |config|
          # Gems
          config.gem "capistrano-ext", :lib => "capistrano"
          config.gem "configatron"

          # Make Time.zone default to the specified zone, and make Active Record store time values
          # in the database in UTC, and return them converted to the specified local zone.
          config.time_zone = "London"

          # The internationalization framework can be changed to have another default locale (standard is :en) or more load paths.
          # All files from config/locales/*.rb,yml are added automatically.
          # config.i18n.load_path << Dir[File.join(RAILS_ROOT, 'my', 'locales', '*.{rb,yml}')]
          # config.i18n.default_locale = :de

          # Your secret key for verifying cookie session data integrity.
          # If you change this key, all old sessions will become invalid!
          # Make sure the secret is at least 30 characters and all random,
          # no regular words or you'll be exposed to dictionary attacks.
          config.action_controller.session = {
            :session_key => '_base_session',
            :secret => '7389ea9180b15f1495a5e73a69a893311f859ccff1ffd0fa2d7ea25fdf1fa324f280e6ba06e3e5ba612e71298d8fbe7f15fd7da2929c45a9c87fe226d2f77347'
          }

          config.active_record.observers = :user_observer
        end

        ActiveSupport::CoreExtensions::Date::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y')
        ActiveSupport::CoreExtensions::Time::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y')

        require "will_paginate"

        ActionMailer::Base.delivery_method = :smtp
        ActionMailer::Base.smtp_settings = {
          :enable_starttls_auto => true,
          :address => "smtp.gmail.com",
          :port => 587,
          :domain => "XXXXXXXX.XXX",
          :authentication => :plain,
          :user_name => "XXXXXXXXXX.XXXXXXXXXX.XXX",
          :password => "XXXXX"
        }

    But the above just results in an SMTP auth error in the production log. I have read varied reports of this not working in Rails 2.2.2, but nothing for 2.3.5. Anyone got any ideas?

    Thanks, Danny

    Read the article

  • Any way to allow classes implementing IEntity and downcast to have operator == comparisons?

    - by George Mauer
    Basically here's the issue. All entities in my system are identified by their type and their id:

        new Customer() { Id = 1 } == new Customer() { Id = 1 };
        new Customer() { Id = 1 } != new Customer() { Id = 2 };
        new Customer() { Id = 1 } != new Product()  { Id = 1 };

    Pretty standard scenario. Since all entities have an Id, I define an interface for all entities:

        public interface IEntity
        {
            int Id { get; set; }
        }

    And to simplify creation of entities I make:

        public abstract class BaseEntity<T> where T : IEntity
        {
            int Id { get; set; }

            public static bool operator ==(BaseEntity<T> e1, BaseEntity<T> e2)
            {
                if (object.ReferenceEquals(null, e1))
                    return false;
                return e1.Equals(e2);
            }

            public static bool operator !=(BaseEntity<T> e1, BaseEntity<T> e2)
            {
                return !(e1 == e2);
            }
        }

    where Customer and Product are something like:

        public class Customer : BaseEntity<Customer>, IEntity {}
        public class Product : BaseEntity<Product>, IEntity {}

    I think this is hunky dory. I think all I have to do is override Equals in each entity (if I'm super clever, I can even override it only once in BaseEntity) and everything will work. So now I'm expanding my test coverage and find that it's not quite so simple! First of all, when downcasting to IEntity and using ==, the BaseEntity<T> override is not used. So what's the solution? Is there something else I can do? If not, this is seriously annoying.

    Update: It would seem that there is something wrong with my tests - or rather with comparing on generics. Check this out:

        [Test]
        public void when_created_manually_non_generic()
        {
            // PASSES!
            var e1 = new Terminal() { Id = 1 };
            var e2 = new Terminal() { Id = 1 };
            Assert.IsTrue(e1 == e2);
        }

        [Test]
        public void when_created_manually_generic()
        {
            // FAILS!
            GenericCompare(new Terminal() { Id = 1 }, new Terminal() { Id = 1 });
        }

        private void GenericCompare<T>(T e1, T e2) where T : class, IEntity
        {
            Assert.IsTrue(e1 == e2);
        }

    What's going on here? This is not as big a problem as I was afraid of, but it is still quite annoying and a completely unintuitive way for the language to behave.

    Update 2: Ah, I get it - the generic implicitly downcasts to IEntity for some reason. I stand by this being unintuitive and potentially problematic for my Domain's consumers, as they need to remember that anything happening within a generic method or class needs to be compared with Equals().

    Read the article

  • Google Maps rendering locally but not in live environment

    - by marcusstarnes
    I have a page that renders a simple Google map for a specified location. This map renders without any problems at all when I run it locally on localhost; however, when I deploy this code to our live web servers (using our LIVE Google API key for the appropriate domain) it fails to render, and after putting a series of alerts within the JavaScript on the page, it appears that the initialize() method (which should be called within body onLoad) is not being called.

    When I view the HTML source that is rendered on the live server, it appears exactly as per the local version of the site (including the call to initialize() within the body onLoad event), albeit with the different Maps API key. I have output the host (alert(window.location.host);) to ensure that the key I generated via the Google Maps API site corresponds exactly to the live server, which it does.

    Does anyone have any ideas why it would be working locally but not when deployed to the live servers? The live site is hosted on 2 load-balanced web servers. This is the JavaScript that is rendered:

        <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=false&amp;key=ABQIAAAA-BU8POZj19wRlTaKIXVM9xTz76xxk4yAELG9u79oXrhnLTB5NRRvAZ-bkKn1x8J68nfRTVOIWNPJEA" type="text/javascript"></script>
        <script type="text/javascript">
            var map;
            var geocoder;
            alert(window.location.host);

            function initialize() {
                if (GBrowserIsCompatible()) {
                    map = new GMap2(document.getElementById("businessMap"));
                    map.setUIToDefault();
                    geocoder = new GClientGeocoder();
                    showAddress('St Margarets Street SW1P 3 London');
                }
            }

            function showAddress(address) {
                geocoder.getLatLng(
                    address,
                    function(point) {
                        if (!point) {
                            // Address could not be located.
                            jQuery('#googleMap').hide();
                        } else {
                            map.setCenter(point, 13);
                            var marker = new GMarker(point);
                            map.addOverlay(marker);
                            var html = 'Address info for the marker';
                            marker.openInfoWindow(html);
                            GEvent.addListener(marker, "click", function() {
                                marker.openInfoWindowHtml(html);
                            });
                        }
                    }
                );
            }
        </script>

    Any help would be much appreciated. Thanks.

    Read the article

  • XML serialization options in .NET

    - by Borek
    I'm building a service that returns an XML (no SOAP, no ATOM, just plain old XML). Say that I have my domain objects already filled with data and just need to transform them to the XML format. What options do I have on .NET? Requirements: The transformation is not 1:1. Say that I have an Address property of type Address with nested properties like Line1, City, Postcode etc. This may need to result in an XML like <xaddr city="...">Line1, Postcode</xaddr>, i.e. quite different. Some XML elements/attributes are conditional, for example, if a Customer is under 18, the XML needs to contain some additional information. I only need to serialize the objects to XML, the other direction (XML to objects) is not important Some technologies, i.e. Data Contracts use .NET attributes. Other means of configuration (external XML config, buddy classes etc.) would be a plus. Here are the options as I see them as the moment. Corrections / additions will be very welcome. String concatenation - forget it, it was a joke :) Linq 2 XML - complete control but quite a lot of hand written code, would need good suite of unit tests View engines in ASP.NET MVC (or even Web Forms theoretically), the logic being in controllers. It's a question how to structure it, I can have simple rules engine in my controller(s) and one view template per each possible output, or have the decision logic directly in the template. Both have upsides and downsides. XML Serialization - I'm not sure about the flexibility here Data Contracts from WCF - not sure about the flexibility either, plus would they work in a simple ASP.NET MVC app (non-WCF service)? Are they a super-set of the standard XML serialization now? If it exists, some XML-to-object mapper. The more I think about it the more I think I'm looking for something like this but I couldn't find anything appropriate. Any comments / other options?

    Read the article

  • How can I create a Searchstring for a Google AJAX Search API?

    - by elmaso
    Hello, I have this code to get the search results from the API.

    querygoogle.php:

        <?php
        session_start();

        // Here's the Google AJAX Search API url for curl. It uses Google Search's
        // site:www.yourdomain.com syntax to search in a specific site. I used
        // $_SERVER['HTTP_HOST'] to find my domain automatically. Change
        // $_POST['searchquery'] to your posted search query
        $url = 'http://ajax.googleapis.com/ajax/services/search/web?rsz=large&v=1.0&start=20&q=' . urlencode('' . $_POST['searchquery']);

        // use fopen and fread to pull Google's search results
        $handle = fopen($url, 'rb');
        $body = '';
        while (!feof($handle)) {
            $body .= fread($handle, 8192);
        }
        fclose($handle);

        // now $body is the JSON encoded results. We need to decode them.
        $json = json_decode($body);

        // now $json is an object of Google's search results and we need to iterate through it.
        foreach ($json->responseData->results as $searchresult) {
            if ($searchresult->GsearchResultClass == 'GwebSearch') {
                $formattedresults .= '
                <div class="searchresult">
                    <h3><a href="' . $searchresult->unescapedUrl . '">' . $searchresult->titleNoFormatting . '</a></h3>
                    <p class="resultdesc">' . $searchresult->content . '</p>
                    <p class="resulturl">' . $searchresult->visibleUrl . '</p>
                </div>';
            }
        }

        $_SESSION['googleresults'] = $formattedresults;
        header('Location: ' . $_SERVER['HTTP_REFERER']);
        exit;
        ?>

    search.php:

        <?php session_start(); ?>
        <form method="post" action="querygoogle.php">
            <label for="searchquery"><span class="caption">Search this site</span>
            <input type="text" size="20" maxlength="255" title="Enter your keywords and click the search button" name="searchquery" /></label>
            <input type="submit" value="Search" />
        </form>
        <?php
        if (!empty($_SESSION['googleresults'])) {
            echo $_SESSION['googleresults'];
            unset($_SESSION['googleresults']);
        }
        ?>

    But with this code I can't add a search string. How can I add a search string like search.php?search=keyword? Thanks

    Read the article

  • EXC_MEMORY_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an XML file, parses the file, and creates Core Data objects while doing so. In the parse code I have a function called 'emptyDataContext' that removes all items from Core Data before creating replacement items from the XML data. This method looks like this:

        - (void)emptyDataContext
        {
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]];

            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            DebugLog(@"ERROR: %@", error);
            DebugLog(@"RETRIEVED: %@", conditions);
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }

            // Update the data model, effectively removing the objects we removed above.
            //NSError *error;
            if (![managedObjectContext save:&error]) {
                DebugLog(@"%@", [error domain]);
            }
        }

    The first time this runs, it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse and build. All is well until it's time to delete the Core Data objects: the 'deleteObject' call produces an "EXC_BAD_ACCESS" error each time. This only happens on the second time through. Captured errors return null.

    If I log the 'conditions' array, I get a list of NSManagedObjects on the first run. On the second run, this log request causes a crash exactly as the deleteObject call does. I have a feeling it is something very simple I'm missing or not doing correctly that causes this behavior. The data works great in my table views; it's only when updating that I get the crashes.

    I have spent days and days on this, trying numerous alternative methods. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. I just need to get past this hurdle. Thanks in advance for the help!

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have called. (This allows callers to use stack memory management for event objects.) However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function. By nicer, I'll mention that I'm specifically trying to avoid code of the following form: if(! this->IsUIThread()) { Invoke(MainWindowPresenter::OnTracksAdded, e); return; } being at the top of every UI method. This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code and a wrapper object made to deal with it. My implementation consists of: DeferredFunction - functor that stores the target method in a FastDelegate, and deep copies the single event argument. This is the object that is sent across thread boundaries. UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them. Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there are any pitfalls I should know about? Or is there an easier way to do this?

    Read the article

  • representing an XML config file with an IXmlSerializable class

    - by Sarah Vessels
    I'm writing in C# and trying to represent an XML config file through an IXmlSerializable class. I'm unsure how to represent the nested elements in my config file, though, such as logLevel:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <logging>
            <logLevel>Error</logLevel>
          </logging>
          <credentials>
            <user>user123</user>
            <host>localhost</host>
            <password>pass123</password>
          </credentials>
          <credentials>
            <user>user456</user>
            <host>my.other.domain.com</host>
            <password>pass456</password>
          </credentials>
        </configuration>

    There is an enum called LogLevel that represents all the possible values for the logLevel tag. The tags within credentials should all come out as strings. In my class, called DLLConfigFile, I had the following:

        [XmlElement(ElementName="logLevel", DataType="LogLevel")]
        public LogLevel LogLevel;

    However, this isn't going to work, because <logLevel> isn't within the root node of the XML file; it's one node deeper, in <logging>. How do I go about doing this? As for the <credentials> nodes, my guess is I will need a second class, say CredentialsSection, and have a property such as the following:

        [XmlElement(ElementName="credentials", DataType="CredentialsSection")]
        public CredentialsSection[] AllCredentials;

    Edit: okay, I tried Robert Love's suggestion and created a LoggingSection class. However, my test fails:

        var xs = new XmlSerializer(typeof(DLLConfigFile));
        using (var stream = new FileStream(_configPath, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            using (var streamReader = new StreamReader(stream))
            {
                XmlReader reader = new XmlTextReader(streamReader);
                var file = (DLLConfigFile)xs.Deserialize(reader);
                Assert.IsNotNull(file);
                LoggingSection logging = file.Logging;
                Assert.IsNotNull(logging); // fails here
                LogLevel logLevel = logging.LogLevel;
                Assert.IsNotNull(logLevel);
                Assert.AreEqual(EXPECTED_LOG_LEVEL, logLevel);
            }
        }

    The config file I'm testing with definitely has <logging>. Here's what the classes look like:

        [Serializable]
        [XmlRoot("logging")]
        public class LoggingSection : IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            [XmlElement(ElementName="logLevel", DataType="LogLevel")]
            public LogLevel LogLevel;

            public void ReadXml(XmlReader reader)
            {
                LogLevel = (LogLevel)Enum.Parse(typeof(LogLevel), reader.ReadString());
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteString(Enum.GetName(typeof(LogLevel), LogLevel));
            }
        }

        [Serializable]
        [XmlRoot("configuration")]
        public class DLLConfigFile : IXmlSerializable
        {
            [XmlElement(ElementName="logging", DataType="LoggingSection")]
            public LoggingSection Logging;
        }

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.] // Service CoolLibrary WcfApp.Core WcfApp.Contracts WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts // Clients CustomerX.App : WcfApp.Contracts CustomerY.App : WcfApp.Contracts CustomerZ.App : WcfApp.Contracts (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.] 1. CoolLibrary.git : CoolLibrary 2. WcfApp.Contracts.git : WcfApp.Contracts 3. WcfApp.git : WcfApp.Core, WcfApp.Services 4. CustomerX.App.git : CustomerX.App 5. CustomerY.App.git : CustomerY.App 6. CustomerZ.App.git : CustomerZ.App How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are: 1. Am I using the right number and layout of repositories? Should I use less or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?

    Read the article
