Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • Java - Is Set.contains() broken on OpenJDK 6?

    - by Peter
    Hey, I've come across a really strange problem. I have written a simple Deck class which represents a standard 52 card deck of playing cards. The class has a method missingCards() which returns the set of all cards which have been drawn from the deck. If I try to compare two identical sets of missing cards using .equals() I'm told they are different, and if I check to see if a set contains an element that I know is there using .contains() I get false back. Here is my test code: public void testMissingCards() { Deck deck = new Deck(true); Set<Card> drawnCards = new HashSet<Card>(); drawnCards.add(deck.draw()); drawnCards.add(deck.draw()); drawnCards.add(deck.draw()); Set<Card> missingCards = deck.missingCards(); System.out.println(drawnCards); System.out.println(missingCards); Card c1 = null; for (Card c : drawnCards){ c1 = c; } System.out.println("C1 is "+c1); for (Card c : missingCards){ System.out.println("C is "+c); System.out.println("Does c1.equal(c) "+c1.equals(c)); System.out.println("Does c.equal(c1) "+c.equals(c1)); } System.out.println("Is c1 in missingCards "+missingCards.contains(c1)); assertEquals("Deck confirm missing cards",drawnCards,missingCards); } (Edit: Just for clarity, I added the two loops after I noticed the test failing. The first loop pulls out a card from drawnCards, and then this card is checked against every card in missingCards - it always matches one, so that card must be contained in missingCards. However, missingCards.contains() fails.) And here is an example of its output: [5C, 2C, 2H] [2C, 5C, 2H] C1 is 2H C is 2C Does c1.equal(c) false Does c.equal(c1) false C is 5C Does c1.equal(c) false Does c.equal(c1) false C is 2H Does c1.equal(c) true Does c.equal(c1) true Is c1 in missingCards false I am completely sure that the implementation of .equals on my card class is correct and, as you can see from the output, it does work! What is going on here? Cheers, Pete
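
    A symptom like this - where equals() matches inside the loop but Set.equals() and HashSet.contains() still fail - is almost always caused by overriding equals() without a consistent hashCode(), rather than by a JDK bug: a HashSet locates the bucket via hashCode() before it ever calls equals(). As a hedged sketch only (the question doesn't show Card's fields, so rank and suit here are assumptions), a consistent pair might look like:

    ```java
    public class Card {
        private final char rank; // e.g. '2', '5' (assumed field)
        private final char suit; // e.g. 'H', 'C' (assumed field)

        public Card(char rank, char suit) {
            this.rank = rank;
            this.suit = suit;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Card)) return false;
            Card other = (Card) o;
            return rank == other.rank && suit == other.suit;
        }

        // Equal cards must produce equal hash codes, or hash-based lookups miss them.
        @Override
        public int hashCode() {
            return 31 * rank + suit;
        }

        @Override
        public String toString() {
            return String.valueOf(rank) + suit;
        }
    }
    ```

    If Card already overrides both consistently, another possibility is mutating a card after it has been added to a set, which also breaks hash-based lookups.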

    Read the article

  • Best method for converting several sets of numbers with several different ratios

    - by C Patton
    I'm working on an open-source harm reduction application for opioid addicts. One of the features in this application is the conversion (in mg/mcg) between common opioids, so people don't overdose by accident. If you're morally against opioid addiction and wouldn't respond because of your morals, please consider that this application is for HARM REDUCTION.. So people don't end up dead. I have this data.. 3mg morphine IV = 10mcg fentanyl IV 2mg morphine oral = 1mg oxycodone oral 3mg oral morphine = 1mg oxymorphone oral 7.0mg morphine oral = 1mg hydromorphone oral 1mg morphine iv = .10mg oxymorphone iv 1mg morphine oral = 1mg hydrocodone oral 1mg morphine oral = 6.67mg codeine oral 1mg morphine oral = .10mg methadone oral And I have a textbox that is the source dosage in mg (a double) that the user can enter in. Underneath this, I have radio boxes for the source substance (ie: morphine) and the destination substance (ie oxycodone) for conversion.. I've been trying to think of the most efficient way to do this, but nearly every approach seems sloppy. If I were to do something like public static double MorphinetoOxycodone(string morphineValue) { double morphine = Double.Parse(morphineValue); return (morphine / 2 ); } I would also have to make a function for OxycodonetoMorphine, OxycodonetoCodeine, etc.. and then would end up with dozens of functions.. There must be an easier way than this that I'm missing. If you'll notice, all of my conversions use morphine as the base value.. what might be the easiest way to use the morphine value to convert one opioid to another? For example, if 1mg morphine oral is equal to 1mg hydrocodone and 1mg morphine oral is equal to .10mg methadone, wouldn't I just multiply 1*.10 to get the hydrocodone-methadone value? Implementing this idea is what I'm having the most trouble with. Any help would be GREATLY appreciated.. and if you'd like, I would add your name/nickname to the credits in this program. It's possible that many, many people around the world will use this (I'm translating it into several languages as well) and to know that your work could've kept an addict from dying.. I think that's a great thing :) -cory
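
    The asker's snippet is C#, but the base-unit idea is language-neutral: record, for each drug, how many milligrams of oral morphine one milligram of it is treated as equivalent to, then convert any pair through that common base. Below is a hedged sketch in Java using only the oral ratios quoted above; the drug names and numbers are copied from the question, the class and method names are made up, and none of this is clinical guidance.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class OpioidConverter {
        // mg of oral morphine treated as equivalent to 1 mg of each drug (ratios from the question).
        private static final Map<String, Double> MORPHINE_EQ = new HashMap<String, Double>();
        static {
            MORPHINE_EQ.put("morphine",      1.0);
            MORPHINE_EQ.put("oxycodone",     2.0);        // 2 mg morphine = 1 mg oxycodone
            MORPHINE_EQ.put("oxymorphone",   3.0);        // 3 mg morphine = 1 mg oxymorphone
            MORPHINE_EQ.put("hydromorphone", 7.0);        // 7 mg morphine = 1 mg hydromorphone
            MORPHINE_EQ.put("hydrocodone",   1.0);        // 1 mg morphine = 1 mg hydrocodone
            MORPHINE_EQ.put("codeine",       1.0 / 6.67); // 1 mg morphine = 6.67 mg codeine
            MORPHINE_EQ.put("methadone",     1.0 / 0.10); // 1 mg morphine = 0.10 mg methadone
        }

        /** Converts a dose of one oral opioid into the equivalent dose of another via the morphine base. */
        public static double convert(String from, String to, double doseMg) {
            double morphineMg = doseMg * MORPHINE_EQ.get(from); // into the common base
            return morphineMg / MORPHINE_EQ.get(to);            // out of the common base
        }

        public static void main(String[] args) {
            // 1 mg hydrocodone -> 1 mg morphine equivalent -> 0.1 mg methadone
            System.out.println(convert("hydrocodone", "methadone", 1.0));
        }
    }
    ```

    One table plus one function replaces the dozens of pairwise methods, and supporting a new drug means adding a single table entry.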

    Read the article

  • NonUniqueObjectException during DAO integration test?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts my DAO integration tests are failing with org.hibernate.NonUniqueObjectException. I do tend to re-use the same object (same ID even) over and over for all the different tests and I am sure that is the cause, but I can see in the logs that Spring Test and Atomikos are clearly rolling back the transaction associated with each test method. I would have thought the rollback would have also cleared the persistence context too. On a hunch, I added an a call to dao.clear() at the beginning of the faulty test methods and the problem went away!! Rollback doesn't clear the persistence context...hmmm.... Not sure if this is relevant, but I see a possible autocommit setting problem in the log file: [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows Because I am using connection pooling, I figure that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true"</prop> <prop key="hibernate.use_sql_comments">true</prop> </property> </bean>

    Read the article

  • Information Gain and Entropy

    - by dhorn
    I recently read this question regarding information gain and entropy. I think I have a semi-decent grasp on the main idea, but I'm curious as what to do with situations such as follows: If we have a bag of 7 coins, 1 of which is heavier than the others, and 1 of which is lighter than the others, and we know the heavier coin + the lighter coin is the same as 2 normal coins, what is the information gain associated with picking two random coins and weighing them against each other? Our goal here is to identify the two odd coins. I've been thinking this problem over for a while, and can't frame it correctly in a decision tree, or any other way for that matter. Any help? EDIT: I understand the formula for entropy and the formula for information gain. What I don't understand is how to frame this problem in a decision tree format. EDIT 2: Here is where I'm at so far: Assuming we pick two coins and they both end up weighing the same, we can assume our new chances of picking H+L come out to 1/5 * 1/4 = 1/20 , easy enough. Assuming we pick two coins and the left side is heavier. There are three different cases where this can occur: HM: Which gives us 1/2 chance of picking H and a 1/4 chance of picking L: 1/8 HL: 1/2 chance of picking high, 1/1 chance of picking low: 1/1 ML: 1/2 chance of picking low, 1/4 chance of picking high: 1/8 However, the odds of us picking HM are 1/7 * 5/6 which is 5/42 The odds of us picking HL are 1/7 * 1/6 which is 1/42 And the odds of us picking ML are 1/7 * 5/6 which is 5/42 If we weight the overall probabilities with these odds, we are given: (1/8) * (5/42) + (1/1) * (1/42) + (1/8) * (5/42) = 3/56. The same holds true for option B. option A = 3/56 option B = 3/56 option C = 1/20 However, option C should be weighted heavier because there is a 5/7 * 4/6 chance to pick two mediums. So I'm assuming from here I weight THOSE odds. I am pretty sure I've messed up somewhere along the way, but I think I'm on the right path! EDIT 3: More stuff. Assuming the scale is unbalanced, the odds are (10/11) that only one of the coins is the H or L coin, and (1/11) that both coins are H/L Therefore we can conclude: (10 / 11) * (1/2 * 1/5) and (1 / 11) * (1/2) EDIT 4: Going to go ahead and say that it is a total 4/42 increase.
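
    For reference, the two quantities being juggled here are the entropy of the distribution over hidden states, H = -sum(p * log2 p), and the information gain of a weighing, which is the prior entropy minus the expected entropy over the weighing's possible outcomes. A small sketch of just those formulas follows (it does not attempt to solve the coin puzzle itself; the outcome and posterior probabilities still have to be worked out as in the edits above):

    ```java
    public class InfoGain {
        /** Shannon entropy, in bits, of a discrete distribution whose probabilities sum to 1. */
        static double entropy(double[] p) {
            double h = 0.0;
            for (double pi : p) {
                if (pi > 0) h -= pi * Math.log(pi) / Math.log(2);
            }
            return h;
        }

        /** Prior entropy minus the outcome-weighted entropy after the observation. */
        static double informationGain(double[] prior, double[] outcomeProb, double[][] posterior) {
            double expected = 0.0;
            for (int i = 0; i < outcomeProb.length; i++) {
                expected += outcomeProb[i] * entropy(posterior[i]);
            }
            return entropy(prior) - expected;
        }

        public static void main(String[] args) {
            System.out.println(entropy(new double[] { 0.5, 0.5 })); // 1.0 bit for a fair coin flip
        }
    }
    ```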

    Read the article

  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom Weblogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log on page for all the applications within the domain. When we access any secured URLs by entering them in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE an odd thing happens. If I click the bookmark, you will see our log on page, then after you've successfully logged into the system the basic auth page will display, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent. 1 time out of 5 IE will correctly redirect and not show the basic auth window. Firefox and Opera will correctly redirect everytime. We've captured the response headers and compared the success and failures, they are identical. final boolean isAuthenticated = authenticateUser(userName, password, req); // Send user on to the original URL if (isAuthenticated) { res.sendRedirect(targetURL); return; } As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documents. private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) { boolean results; try { ServletAuthentication.login(new CallbackHandler() { @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(userName); } if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password.toCharArray()); } } } }, request); results = true; } catch (LoginException e) { results = false; } return results; I am asking the question here because I don't know if the issue is with the Weblogic config or the code. If this question is more suited to ServerFault please let me know and I will post there. It is odd that it works everytime in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer was an option but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments and I can still reproduce the bug.

    Read the article

  • Array within Form collecting multiple values with the same name possible?

    - by JM4
    Good afternoon, I will first start with the goal I am trying to accomplish and then give a very basic sample of what I need to do. Goal Instead of collecting several variables and naming them with keys individually, I have decided to give in and use an array structure to handle all inputs of the same type and rules. Once I have the variables, I will validate against them and if 'ok' store them in a MySQL table. The table will hold consumer information and will need to store multiple rows of the same type of information. First Pass I will leave out the validation portion of this question because I feel I need to first understand the basics. <form action="?" method="POST" name="Form"> Member 1 First Name:<input type="text" name="MemberFirstName[]" /><br /> Member 1 Last Name: <input type="text" name="MemberLastName[]" /><br /> Member 1 Email: <input type="text" name="MemberEmail[]" /><br /> Member 2 First Name:<input type="text" name="MemberFirstName[]" /><br /> Member 2 Last Name: <input type="text" name="MemberLastName[]" /><br /> Member 2 Email: <input type="text" name="MemberEmail[]" /><br /> Member 3 First Name:<input type="text" name="MemberFirstName[]" /><br /> Member 3 Last Name: <input type="text" name="MemberLastName[]" /><br /> Member 3 Email: <input type="text" name="MemberEmail[]" /><br /> <input type="submit" name="submit" value="Continue" /> </form> I am hoping that each input given for First Name (a required field) will generate a unique key for that particular entry and not overwrite any data entered. Because I am carrying information from page to page (checkout form), I am turning the POST variables into SESSION variables then storing in a mysql database in the end. My hope is to have: <?php $conn = mysql_connect("localhost", "username", "password"); mysql_select_db("DBname",$conn); $sql = "INSERT INTO tablename VALUES ('$_SESSION[Member1FirstName]', '$_SESSION[Member1LastName]', '$_SESSION[Member1Email]', '$_SESSION[Member2FirstName]', '$_SESSION[Member2LastName]', '$_SESSION[Member2Email]', '$_SESSION[Member1FirstName]', '$_SESSION[Member3LastName]', '$_SESSION[Member3Email]')"; $result = mysql_query($sql, $conn) or die(mysql_error()); Header ("Location: completed.php"); ?> Where Member1, Member2, and Member3 values will appear on their own row within the table. I KNOW my code is wrong but I am giving a first shot at the overall business purpose I am trying to achieve and trying to learn how to code the right way. I am very, very new to programming so any 'baby advice' is greatly appreciated.

    Read the article

  • How to Sync CI (Hudson) Activity into an existing automated Build Process (phing, svn)?

    - by maraspin
    OUR CURRENT BUILD PROCESS We're a small team of developers (2 to 4 people depending on project) who currently use Phing to deploy code to a staging environment, before going live. We keep our code in a SVN repo, where the trunk holds current active development and, at certain times, we do make branches that we test and then (if successful), tag and export to the staging env. If everything goes well there too, we finally deploy'em in production servers. Actions are highly automated, but always triggered by human intervention. THE DOUBT We'd now like to introduce Continuous Integration (with Hudson) in the process; unfortunately we have a few doubts about activity syncing, since we're afraid that CI could somewhat interfere with our build process and cause certain problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we see 2 possible cases for "integration", each with its own problems: Case A: each CI cycle produces a new branch with its own name; we do use such a name to manually (through phing as it happens now) export the code from the SVN to the staging env. The problem I see here is that (unless specific countermeasures are taken - IE deletion) the number of branches we have can easily grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes). Case B: each CI cycle creates a new branch named 'current', which is then tagged with a unique name only when we manually decide to export it to staging; the current branch, at any case is then deleted, as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging thus creating an inconsistent build (but maybe here I'm just too pessimist, since I confess I don't know whether SVN offers some built-in protection against this). With all this being said, I was wondering if anyone with similar experiences could be so kind to give us some hints on the subject, since none of the approaches depicted above looks completely satisfing to us. Is there something important we just completely left off in the overall picture? Thanks for your attention & (in advance) for your help!

    Read the article

  • Starting company and getting a good deal

    - by Dan
    Hi, I'm having the possibility to start a small company with someone else. I'm a software programmer and been working on a field where there isn't much competition, at least in the platform I'm developing, which is the one with highest market share. This other person comes from marketing so he would find / provide clients and be in charge of the business side. So initially it would be me, the tech guy with the knowledge to develop our product and products based on my development, and this person who would become CEO (basically a 50/50 share). The idea is getting a product on our own, but also perform projects for others in order to get some cash. The problem is that his idea is doing this without initial capital. He tells me that he could get some customers from past business (which is mostly true as he has been in the business for some time) and then with the money obtained from such development, we could hire some freelance developers and build our own platform. I've already discussed with him that I don't find this the best way, provided that we need to compete against other companies some of who are VC funded and have many developers working fulltime. But most of all, I'm thinking of whether this is a fair deal to do, provided that this person is not providing anything other than clients, and I'd be the one having to do the work and dedicate time, thus also taking most of the risks. In all fairness, I'd expect him to put some initial capital, given that initially I'm putting some of my code which in some way is money given the time I've dedicated to it. So the question here is, does this seem fair to you? Is this the way it is usually done? My only concern here is that I don't have previous experience with such deals, and I ignore whether this is the usual thing to do in other cases. Certainly having a marketing person helps a lot. As you probably know being a programmer doesn't make me the most indicated person for marketing ;-) but at the same time I'm not sure if this could be a good deal or I'd just be making a pretty inconvenient deal. Hope I made my point clear, but feel free to let me know if more details are needed. Thanks in advance

    Read the article

  • Memory management with Objective-C Distributed Objects: my temporary instances live forever!

    - by jkp
    I'm playing with Objective-C Distributed Objects and I'm having some problems understanding how memory management works under the system. The example given below illustrates my problem: Protocol.h #import <Foundation/Foundation.h> @protocol DOServer - (byref id)createTarget; @end Server.m #import <Foundation/Foundation.h> #import "Protocol.h" @interface DOTarget : NSObject @end @interface DOServer : NSObject < DOServer > @end @implementation DOTarget - (id)init { if ((self = [super init])) { NSLog(@"Target created"); } return self; } - (void)dealloc { NSLog(@"Target destroyed"); [super dealloc]; } @end @implementation DOServer - (byref id)createTarget { return [[[DOTarget alloc] init] autorelease]; } @end int main() { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; DOServer *server = [[DOServer alloc] init]; NSConnection *connection = [[NSConnection new] autorelease]; [connection setRootObject:server]; if ([connection registerName:@"test-server"] == NO) { NSLog(@"Failed to vend server object"); } else [[NSRunLoop currentRunLoop] run]; [pool drain]; return 0; } Client.m #import <Foundation/Foundation.h> #import "Protocol.h" int main() { unsigned i = 0; for (; i < 3; i ++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; id server = [NSConnection rootProxyForConnectionWithRegisteredName:@"test-server" host:nil]; [server setProtocolForProxy:@protocol(DOServer)]; NSLog(@"Created target: %@", [server createTarget]); [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]; [pool drain]; } return 0; } The issue is that any remote objects created by the root proxy are not released when their proxy counterparts in the client go out of scope. According to the documentation: When an object’s remote proxy is deallocated, a message is sent back to the receiver to notify it that the local object is no longer shared over the connection. I would therefore expect that as each DOTarget goes out of scope (each time around the loop) it's remote counterpart would be dellocated, since there is no other reference to it being held on the remote side of the connection. In reality this does not happen: the temporary objects are only deallocate when the client application quits, or more accurately, when the connection is invalidated. I can force the temporary objects on the remote side to be deallocated by explicitly invalidating the NSConnection object I'm using each time around the loop and creating a new one but somehow this just feels wrong. Is this the correct behaviour from DO? Should all temporary objects live as long as the connection that created them? Are connections therefore to be treated as temporary objects which should be opened and closed with each series of requests against the server? Any insights would be appreciated.

    Read the article

  • Coolstack MySQL Crash Unable to Restart

    - by rayblasdel
    Environment: Solaris 10. This MySQL server has been up and running for 6 months now. Today all of a sudden it crashed. When typing 'mysql' as user it gives the error: Error 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'. The server tries to open MySQL, it stays open for 9-10 seconds and then restarts the process. Below are the application logs. Application-database-mysql_mysql-csk.log [ May 30 22:37:52 Enabled. ] [ May 30 22:37:58 Rereading configuration. ] [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:37:59 Method "start" exited with status 0 ] [ May 30 22:38:13 Stopping because all processes in service exited. ] [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:13 Method "stop" exited with status 0 ] [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:38:13 Method "start" exited with status 0 ] [ May 30 22:38:25 Stopping because all processes in service exited. ] [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:25 Method "stop" exited with status 0 ] I am hoping someone might have run into this before and might know how to fix it. The following is an excerpt from the MySQL error log: 100530 22:44:03 mysqld_safe mysqld from pid file /dbpool1/data/database.soliaonline.com.pid ended 100530 22:44:04 mysqld_safe Starting mysqld daemon with databases from /dbpool1/data InnoDB: Log scan progressed past the checkpoint lsn 32 727170612 100530 22:44:13 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Doing recovery: scanned up to log sequence number 32 727200901 100530 22:44:14 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 100530 22:44:14 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=209715200 read_buffer_size=1048576 max_used_connections=0 max_threads=10000 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 31024253 K bytes of memory Hope that's ok; if not, decrease some variables in the equation.

    Read the article

  • Mocking HtmlHelper throws NullReferenceException

    - by Matt Austin
    I know that there are a few questions on StackOverflow on this topic but I haven't been able to get any of the suggestions to work for me. I've been banging my head against this for two days now so its time to ask for help... The following code snippit is a simplified unit test to demonstrate what I'm trying to do, which is basically call RadioButtonFor in the Microsoft.Web.Mvc assembly in a unit test. var model = new SendMessageModel { SendMessageType = SendMessageType.Member }; var vd = new ViewDataDictionary(model); vd.TemplateInfo = new TemplateInfo { HtmlFieldPrefix = string.Empty }; var controllerContext = new ControllerContext(new Mock<HttpContextBase>().Object, new RouteData(), new Mock<ControllerBase>().Object); var viewContext = new Mock<ViewContext>(new object[] { controllerContext, new Mock<IView>().Object, vd, new TempDataDictionary(), new Mock<TextWriter>().Object }); viewContext.Setup(v => v.View).Returns(new Mock<IView>().Object); viewContext.Setup(v => v.ViewData).Returns(vd).Callback(() => {throw new Exception("ViewData extracted");}); viewContext.Setup(v => v.TempData).Returns(new TempDataDictionary()); viewContext.Setup(v => v.Writer).Returns(new Mock<TextWriter>().Object); viewContext.Setup(v => v.RouteData).Returns(new RouteData()); viewContext.Setup(v => v.HttpContext).Returns(new Mock<HttpContextBase>().Object); viewContext.Setup(v => v.Controller).Returns(new Mock<ControllerBase>().Object); viewContext.Setup(v => v.FormContext).Returns(new FormContext()); var mockContainer = new Mock<IViewDataContainer>(); mockContainer.Setup(x => x.ViewData).Returns(vd); var helper = new HtmlHelper<ISendMessageModel>(viewContext.Object, mockContainer.Object, new RouteCollection()); helper.RadioButtonFor(m => m.SendMessageType, "Member", cssClass: "selector"); If I remove the cssClass parameter then the code works ok but fails consistently when adding additional parameters. I've tried every combination of mocking, instantiating concrete types and using fakes that I can think off but I always get a NullReferenceException when I call RadioButtonFor. Any help hugely appreciated!!

    Read the article

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and would like to see how. Can we prove that comment true, for C, too? NB: this doesn't mean hex to ASCII binary - specifically the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space. edit (by Brian Campbell) May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think these are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful. The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout) The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX. The program must compile or run without any special options passed to the compiler or interpreter (so, 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count). The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with most significant nibble first. The behavior of the program on invalid input (characters besides [a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, treating a single character as the value of one byte, are all OK) The program may write no additional bytes to output. Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on lowest number of lines of code; I would impose an 80 character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line).
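
    Not a golf entry, but for readers who want the specified behaviour spelled out (read pairs of ASCII hex digits from stdin, skip whitespace, write the corresponding raw bytes to stdout), a plain, non-golfed reference sketch is below, written in Java rather than C; behaviour on invalid input is left as loose as the rules allow.

    ```java
    import java.io.IOException;

    public class HexToBin {
        public static void main(String[] args) throws IOException {
            int c, hi = -1;
            while ((c = System.in.read()) != -1) {
                if (Character.isWhitespace(c)) continue;  // skip spaces, tabs, newlines
                int nibble = Character.digit(c, 16);      // -1 for anything that isn't a hex digit
                if (nibble < 0) continue;                 // bad input: behaviour is undefined, so ignore it
                if (hi < 0) {
                    hi = nibble;                          // first digit of the pair = high nibble
                } else {
                    System.out.write((hi << 4) | nibble); // pair complete: emit one raw byte
                    hi = -1;
                }
            }
            System.out.flush();
        }
    }
    ```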

    Read the article

  • What are the weaknesses of this user authentication method?

    - by byronh
    I'm developing my own PHP framework. It seems all the security articles I have read use vastly different methods for user authentication than I do so I could use some help in finding security holes. Some information that might be useful before I start. I use mod_rewrite for my MVC url's. Passwords are sha1 and md5 encrypted with 24 character salt unique to each user. mysql_real_escape_string and/or variable typecasting on everything going in, and htmlspecialchars on everything coming out. Step-by step process: Top of every page: session_start(); session_regenerate_id(); If user logs in via login form, generate new random token to put in user's MySQL row. Hash is generated based on user's salt (from when they first registered) and the new token. Store the hash and plaintext username in session variables, and duplicate in cookies if 'Remember me' is checked. On every page, check for cookies. If cookies set, copy their values into session variables. Then compare $_SESSION['name'] and $_SESSION['hash'] against MySQL database. Destroy all cookies and session variables if they don't match so they have to log in again. If login is valid, some of the user's information from the MySQL database is stored in an array for easy access. So far, I've assumed that this array is clean so when limiting user access I refer to user.rank and deny access if it's below what's required for that page. I've tried to test all the common attacks like XSS and CSRF, but maybe I'm just not good enough at hacking my own site! My system seems way too simple for it to actually be secure (the security code is only 100 lines long). What am I missing? I've also spent alot of time searching for the vulnerabilities with mysql_real_escape string but I haven't found any information that is up-to-date (everything is from several years ago at least and has apparently been fixed). All I know is that the problem was something to do with encoding. If that problem still exists today, how can I avoid it? Any help will be much appreciated.

    Read the article

  • VB6 ADO Command to SQL Server

    - by Emtucifor
    I'm getting an inexplicable error with an ADO command in VB6 run against a SQL Server 2005 database. Here's some code to demonstrate the problem: Sub ADOCommand() Dim Conn As ADODB.Connection Dim Rs As ADODB.Recordset Dim Cmd As ADODB.Command Dim ErrorAlertID As Long Dim ErrorTime As Date Set Conn = New ADODB.Connection Conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=database;Data Source=server" Conn.CursorLocation = adUseClient Conn.Open Set Rs = New ADODB.Recordset Rs.CursorType = adOpenStatic Rs.LockType = adLockReadOnly Set Cmd = New ADODB.Command With Cmd .Prepared = False .CommandText = "ErrorAlertCollect" .CommandType = adCmdStoredProc .NamedParameters = True .Parameters.Append .CreateParameter("@ErrorAlertID", adInteger, adParamOutput) .Parameters.Append .CreateParameter("@CreateTime", adDate, adParamOutput) Set .ActiveConnection = Conn Rs.Open Cmd ErrorAlertID = .Parameters("@ErrorAlertID").Value ErrorTime = .Parameters("@CreateTime").Value End With Debug.Print Rs.State ' Shows 0 - Closed Debug.Print Rs.RecordCount ' Of course this fails since the recordset is closed End Sub So this code was working not too long ago but now it's failing on the last line with the error: Run-time error '3704': Operation is not allowed when the object is closed Why is it closed? I just opened it and the SP returns rows. I ran a trace and this is what the ADO library is actually submitting to the server: declare @p1 int set @p1=1 declare @p2 datetime set @p2=''2010-04-22 15:31:07:770'' exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running this as a separate batch from my query editor yields: Msg 102, Level 15, State 1, Line 4 Incorrect syntax near '2010'. Of course there's an error. Look at the double single quotes in there. What the heck could be causing that? I tried using adDBDate and adDBTime as data types for the date parameter, and they give the same results. When I make the parameters adParamInputOutput, then I get this: declare @p1 int set @p1=default declare @p2 datetime set @p2=default exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running that as a separate batch yields: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'default'. Msg 156, Level 15, State 1, Line 4 Incorrect syntax near the keyword 'default'. What the heck? SQL Server doesn't support this kind of syntax. You can only use the DEFAULT keyword in the actual SP execution statement. I should note that removing the extra single quotes from the above statement makes the SP run fine. ... Oh my. I just figured it out. I guess it's worth posting anyway.

    Read the article

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently at last we have switched our projects to Java 1.6. When executing the tests I found out that using 1.6 a SAXParseException is not thrown which has been thrown using 1.5. Below is my test code to demonstrate the problem. import java.io.StringReader; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.SchemaFactory; import org.junit.Test; import org.xml.sax.InputSource; import org.xml.sax.SAXParseException; /** * Test class to demonstrate the difference between JDK 1.5 to JDK 1.6. * * Seen on Linux: * * <pre> * #java version "1.6.0_18" * Java(TM) SE Runtime Environment (build 1.6.0_18-b07) * Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode) * </pre> * * Seen on OSX: * * <pre> * java version "1.6.0_17" * Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025) * Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode) * </pre> * * @author dhiller (creator) * @author $Author$ (last editor) * @version $Revision$ * @since 12.03.2010 11:32:31 */ public class TestXMLValidation { /** * Tests the schema validation of an XML against a simple schema. * * @throws Exception * Falls ein Fehler auftritt * @throws junit.framework.AssertionFailedError * Falls eine Unit-Test-Pruefung fehlschlaegt */ @Test(expected = SAXParseException.class) public void testValidate() throws Exception { final StreamSource schema = new StreamSource( new StringReader( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" " + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">" + "<xs:element name=\"Test\"/>" + "</xs:schema>" ) ); final String xml = "<Test42/>"; final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance(); newFactory.setSchema( SchemaFactory.newInstance( "http://www.w3.org/2001/XMLSchema" ).newSchema( schema ) ); final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder(); documentBuilder.parse( new InputSource( new StringReader( xml ) ) ); } } When using a JVM 1.5 the test passes, on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) Method says: When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules. The Javadoc of the DocumentBuilder.parse(InputSource) method says: BTW: I tried setting an error handler via setErrorHandler, but there still is no exception. Now my question: What has changed to 1.6 that prevents the schema validation to throw a SAXParseException? Is it related to the schema or to the xml that I tried to parse?
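
    One detail worth ruling out when comparing the two JREs: without an explicit ErrorHandler, a DocumentBuilder is allowed to merely report validation problems instead of throwing them, so wiring up a handler that rethrows makes the test's intent explicit. A hedged sketch against the standard JAXP/SAX API follows; it may or may not change the 1.5 vs 1.6 difference the poster saw, since he mentions already having tried setErrorHandler.

    ```java
    import org.xml.sax.ErrorHandler;
    import org.xml.sax.SAXException;
    import org.xml.sax.SAXParseException;

    final class RethrowingErrorHandler implements ErrorHandler {
        public void warning(SAXParseException e) { /* log or ignore */ }
        public void error(SAXParseException e) throws SAXException { throw e; }      // schema violations
        public void fatalError(SAXParseException e) throws SAXException { throw e; } // malformed XML
    }

    // In the test above, before parsing:
    // documentBuilder.setErrorHandler(new RethrowingErrorHandler());
    ```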

    Read the article

  • PHP 'instanceof' failing with class constant

    - by Nathan Loding
    I'm working on a framework that I'm trying to type as strongly as I possibly can. (I'm working within PHP and taking some of the ideas that I like from C# and trying to utilize them within this framework.) I'm creating a Collection class that is a collection of domain entities/objects. It's kinda modeled after the List<T> object in .Net. I've run into an obstacle that is preventing me from typing this class. If I have a UserCollection, it should only allow User objects into it. If I have a PostCollection, it should only allow Post objects. All Collections in this framework need to have certain basic functions, such as add, remove, iterate. I created an interface, but found that I couldn't do the following: interface ICollection { public function add($obj) } class PostCollection implements ICollection { public function add(Post $obj) {} } This broke it's compliance with the interface. But I can't have the interface strongly typed because then all Collections are of the same type. So I attempted the following: interface ICollection { public function add($obj) } abstract class Collection implements ICollection { const type = 'null'; } class PostCollection { const type = 'Post'; public function add($obj) { if(!($obj instanceof self::type)) { throw new UhOhException(); } } } When I attempt to run this code, I get syntax error, unexpected T_STRING, expecting T_VARIABLE or '$' on the instanceof statement. A little research into the issue and it looks like the root of the cause is that $obj instanceof self is valid to test against the class. It appears that PHP doesn't process the entire self::type constant statement in the expression. Adding parentheses around the self::type variable threw an error regarding an unexpected '('. An obvious workaround is to not make the type variable a constant. The expression $obj instanceof $this->type works just fine (if $type is declared as a variable, of course). I'm hoping that there's a way to avoid that, as I'd like to define the value as a constant to avoid any possible change in the variable later. Any thoughts on how I can achieve this, or have I take PHP to it's limit in this regard? Is there a way of "escaping" or encapsulating self::this so that PHP won't die when processing it?

    Read the article

  • On Redirect - Failed to generate a user instance of SQL Server...

    - by Craig Russell
    Hello (this is a long post sorry), I am writing a application in ASP.NET MVC 2 and I have reached a point where I am receiving this error when I connect remotely to my Server. Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. I thought I had worked around this problem locally, as I was getting this error in debug when site was redirected to a baseUrl if a subdomain was invalid using this code: protected override void Initialize(RequestContext requestContext) { string[] host = requestContext.HttpContext.Request.Headers["Host"].Split(':'); _siteProvider.Initialise(host, LiveMeet.Properties.Settings.Default["baseUrl"].ToString()); base.Initialize(requestContext); } protected override void OnActionExecuting(ActionExecutingContext filterContext) { if (Site == null) { string[] host = filterContext.HttpContext.Request.Headers["Host"].Split(':'); string newUrl; if (host.Length == 2) newUrl = "http://sample.local:" + host[1]; else newUrl = "http://sample.local"; Response.Redirect(newUrl, true); } ViewData["Site"] = Site; base.OnActionExecuting(filterContext); } public Site Site { get { return _siteProvider.GetCurrentSite(); } } The Site object is returned from a Provider named siteProvider, this does two checks, once against a database containing a list of all available subdomains, then if that fails to find a valid subdomain, or valid domain name, searches a memory cache of reserved domains, if that doesn't hit then returns a baseUrl where all invalid domains are redirected. locally this worked when I added the true to Response.Redirect, assuming a halting of the current execution and restarting the execution on the browser redirect. What I have found in the stack trace is that the error is thrown on the second attempt to access the database. #region ISiteProvider Members public void Initialise(string[] host, string basehost) { if (host[0].Contains(basehost)) host = host[0].Split('.'); Site getSite = GetSites().WithDomain(host[0]); if (getSite == null) { sites.TryGetValue(host[0], out getSite); } _site = getSite; } public Site GetCurrentSite() { return _site; } public IQueryable<Site> GetSites() { return from p in _repository.groupDomains select new Site { Host = p.domainName, GroupGuid = (Guid)p.groupGuid, IsSubDomain = p.isSubdomain }; } #endregion The Linq query ^^^ is hit first, with a filter of WithDomain, the error isn't thrown till the WithDomain filter is attempted. In summary: The error is hit after the page is redirected, so the first iteration is executing as expected (so permissions on the database are correct, user profiles etc) shortly after the redirect when it filters the database query for the possible domain/subdomain of current redirected page, it errors out.

    Read the article

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all I'm getting the following error from a VB NET web application written in VS 2003, on framework 1.1. The web app is running on Windows Server 2000, IIS 5, and is reading from a SQL server 2000 database running on the same machine. System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. at System.Data.DataSet.FailedEnableConstraints() at System.Data.DataSet.EnableConstraints() at System.Data.DataSet.set_EnforceConstraints(Boolean value) at System.Data.DataTable.EndLoadData() at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet) The problem appears when the web app is under a high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception message. Once the problem appears, very few requests actually make it through and get processed normally, about 2 in every 30. The vast majority of requests fail, until a SQL Server restart or IIS reset is performed. The system then start processing requests normally, and after some time it starts throwing the same error. The error occurs when a data adapter runs the Fill() method against a SELECT statement, to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. This error occurs on various SELECT statements, acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the table from which the data is read. Both the dataset definition and the data in the table are fine. Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated, considering the numbers of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of a total of 3.5 GB on the server. At first I suspected some sort of index or database corruption, but after running DBCC CHECKDB, no errors were found in the database. So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mess up the data it's supposed to pass to the dataset, resulting in constraint violation due to, say, data type/length mismatch? I tried accessing the RowError messages of the data rows in the retrieved dataset tables but I kept getting empty strings. I know that HasErrors = true for the datatables in question. I have not set the EnableConstraints = false, and I don't want to do that. Thanks in advance. Ray

    Read the article

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, and that will call a C++ DLL compiled with the /clr flag that will bridge the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called. This makes it very difficult to debug. The only information I get is the following stack trace. > 64006108() ntdll.dll!_ZwCreateMutant@16() + 0xc bytes kernel32.dll!_CreateMutexW@12() + 0x7a bytes So, here are some scenarios I've tried. - Turned on Exceptions-Win32 Exceptions-c0000005 Access Violation to break when thrown. Still, the most detail I get is from the above stack trace. I've tried the application with F10, but it fails before any breakpoints are hit and fails with the above stack trace. - I've stubbed out the bridge DLL so that it only has one method that returns a bool, and that method is coded to just return false (no C# code called). bool DllPassthrough::IsFailed() { return false; } If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs. - I've created a stub MFC application using the Visual Studio wizard for multidocument applications and call DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL. - I've tried doing a manual LoadLibrary on winmm.lib as outlined in the following note Access violation when using c++/cli. The application still fails. So, my questions are: how do I solve the problem? Any hints, strategies, or previous incidents would help. And, failing that, how can I get more information on what code segment or library is causing the access exception? If I try more involved workarounds like doing LoadLibrary calls, I'd like to narrow it to the failing libraries. Thanks. BTW, we are using Visual Studio 2008 and the project is being built against the .NET 2.0 framework for the managed sections.

    Read the article

  • Self-updating app won't overwrite existing app using Android PackageManager?

    - by LokiSinclair
    I know there are plenty of questions about this on here, but I've tried everything (but the correct 'thing', obviously!) and nothing seems to shine any light on the problem I'm having. I've written an app (for a customer), which is designed to be hosted on their own server. The app references a simple text file with the latest version code in it and checks it against its own version. If it's out of date it goes off and downloads the update. Everything is working as intended up to this point. I use the: Intent i = new Intent(Intent.ACTION_VIEW); i.setDataAndType(Uri.fromFile(outputFile), "application/vnd.android.package-archive"); i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); startActivity(i); ...code to start the install process of the newly downloaded .apk file. And that all starts as I would expect. I click on "Install" when I'm prompted to confirm overwriting the current app with the new one. It starts, and then displays: App not installed. An existing package by the same name with a conflicting signature is already installed. Now I'm aware that Android can't have multiple applications sharing the same package name, which is fine, but nothing comes up in LogCat and I can only assume that the OS is annoyed at me attempting to 'update' my app, even though I'm going through all the correct channels and using the inbuilt package manager to do it for me! Can anyone tell me what the OS is moaning about? I'm not attempting to install two apps side by side, I want it to update it, which it starts to do, and then gets really confused. Is it something to do with me using the same keystore for signing the packages? I highly doubt it as I've used the same keystores previously to handle updates to games and the like, but I just can't figure out what it's complaining about. Hopefully someone out there has had this issue and solved it, and can point me in the right direction. I'm flying a bit blind with the limited information it's giving me :( Cheers.
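
    For reference, the locally installed version that the update check compares against can be read through PackageManager; a minimal sketch is below (how the server-side version file is fetched and parsed is whatever the poster already chose, and is not shown here).

    ```java
    import android.content.Context;
    import android.content.pm.PackageManager.NameNotFoundException;

    public final class VersionCheck {
        /** Returns the versionCode of the currently installed build, or -1 if it cannot be read. */
        public static int installedVersionCode(Context context) {
            try {
                return context.getPackageManager()
                              .getPackageInfo(context.getPackageName(), 0)
                              .versionCode;
            } catch (NameNotFoundException e) {
                return -1;
            }
        }
    }
    ```

    As for the error itself, that message normally means the certificate that signed the downloaded APK differs from the one on the installed build (for example, a debug-keystore install being updated by a release-signed download, or vice versa), so confirming that both APKs really were signed with the same keystore is a reasonable next step.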

    Read the article

  • C++ addition overload ambiguity

    - by Nate
    I am coming up against a vexing conundrum in my code base. I can't quite tell why my code generates this error, but (for example) std::string does not. class String { public: String(const char*str); friend String operator+ ( const String& lval, const char *rval ); friend String operator+ ( const char *lval, const String& rval ); String operator+ ( const String& rval ); }; The implementation of these is easy enough to imagine on your own. My driver program contains the following: String result, lval("left side "), rval("of string"); char lv[] = "right side ", rv[] = "of string"; result = lv + rval; printf(result); result = (lval + rv); printf(result); Which generates the following error in gcc 4.1.2: driver.cpp:25: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second: String.h:22: note: candidate 1: String operator+(const String&, const char*) String.h:24: note: candidate 2: String String::operator+(const String&) So far so good, right? Sadly, my String(const char *str) constructor is so handy to have as an implicit constructor, that using the explicit keyword to solve this would just cause a different pile of problems. Moreover... std::string doesn't have to resort to this, and I can't figure out why. For example, in basic_string.h, they are declared as follows: template<typename _CharT, typename _Traits, typename _Alloc> basic_string<_CharT, _Traits, _Alloc> operator+(const basic_string<_CharT, _Traits, _Alloc>& __lhs, const basic_string<_CharT, _Traits, _Alloc>& __rhs) template<typename _CharT, typename _Traits, typename _Alloc> basic_string<_CharT,_Traits,_Alloc> operator+(const _CharT* __lhs, const basic_string<_CharT,_Traits,_Alloc>& __rhs); and so on. The basic_string constructor is not declared explicit. How does this not cause the same error I'm getting, and how can I achieve the same behavior??

    Read the article

  • Combined sign in and registration page?

    - by Ryan
    This is somewhat against rails convention but I am trying to have one controller that manages both user session authentication and user registration. I am having troubles figuring out how to go about this. So far I am merging the User Controller and the Sessions Controller and having the 'new' method deliver both a new usersession and a new user instance. With the new routes in rails 3 though, I am having trouble figuring out how to generate forms for these items. Below is the code: user_controller.rb class UserController < ApplicationController def new @user_session = UserSession.new @user = User.new end def create_user @user = User.new(params[:user]) if @user.save flash[:notice] = "Account Successfully Registered" redirect_back_or_default signup_path else render :action => new end end def create_session @user_session = UserSession.new(params[:user_session]) if @user_session.save flash[:notice] = "Login successful!" redirect_back_or_default login_path else render :action => new end end end views/user/new.html.erb <div id="login_section"> <% form_for @user_session do |f| -%> <%= f.label :email_address, "Email Address" %> <%= f.text_field :email %> <%= f.label :password, "Password" %> <%= f.text_field :password %> <%= f.submit "Login", :disable_with => 'Logining...' %> <% end -%> </div> <div id="registration_section"> <% form_for @user do |f| -%> <%= f.label :email_address, "Email Address" %> <%= f.text_field :email %> <%= f.label :password, "Password" %> <%= f.text_field :password %> <%= f.label :password_confirmation, "Password Confirmation" %> <%= f.text_field :password_confirmation %> <%= f.submit "Register", :disable_with => 'Logining...' %> <% end -%> </div> I imagine I will need to use :url = something for those forms, but I am unsure how to specify. Within routes.rb I have yet to specify either Usersor UserSessions as resources (not convinced that this is the best way to do it... but I could be). I would like, however, the registration and login on the same page and have implemented this by doing the following: routes.rb match 'signup' => 'user#new' match 'login' => 'user#new' What's the best way to go about solving this?

    Read the article

  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows Service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP sp3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on 2008 Server. It was targeting the 2.0 framework; I tried targeting 3.5 instead, but it still fails. The strange part is really in how it fails; no exception is thrown. The service itself merely times out. I added tracing code, and it fails when opening the connection for Database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway while writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. I put in code traces, and it gets to the trace the line before the second lookup, but nothing after. I've had the same experience hitting two different SQL servers (both are SQL 2005 running on Server 2003). The connection string is utilizing a SQL account (not Windows integration). All connections are against the same database in this case, but given the nature of the code it is being escalated to MSDTC. Here's the basic code structure: TransactionOptions options = new TransactionOptions(); options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted; using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options)) { // Database lookup #1 TransactionOptions options = new TransactionOptions(); options.IsolationLevel = Transaction.Current != null ? Transaction.Current.IsolationLevel : System.Transactions.IsolationLevel.ReadCommitted; using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options)) { // Database lookup #2; fails on connection.Open() // Database save (never reached) scope.Complete(); } scope.Complete(); } My local firewall is disabled. The service normally runs using Network Service, but I also tried my user account (same results). The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled out the code and ran it fine within a local Windows Forms application. If anyone has any additional debugging ideas (or, even better, solutions) I'd love to hear them.

    Read the article

  • GLKit Memory Leak in copyWithZone

    - by TommyT39
    Running the Instruments utility against the game I'm writing shows a bunch of memory leaks related to copyWithZone when I cycle through an array and draw some simple cube objects. I'm not sure of the best way to track this down as I'm new to OpenGL programming. My program is using ARC and is set to build for iOS 5. I am initializing GLKit to use OpenGL ES 2.0 and using GLKBaseEffect so I don't have to write my own shaders etc. This shouldn't be rocket science. I'm guessing that I must not be releasing something within the draw function. Below is the code to my draw function. Could you guys take a look and see if anything stands out as the problem? One other thing to note is that I'm using 15 different textures; the cubes can be 1 of 15 different ones. I have a property set on the cube class for the texture and I set it as I create the cube in the array. But I do load all 15 when my program's viewDidLoad starts. They are small .jps files that are less than 75k each, and each cube uses the same texture all the way around, so it shouldn't be too big of an issue. Here is the code to my draw function: - (void)draw { GLKMatrix4 xRotationMatrix = GLKMatrix4MakeXRotation(rotation.x); GLKMatrix4 yRotationMatrix = GLKMatrix4MakeYRotation(rotation.y); GLKMatrix4 zRotationMatrix = GLKMatrix4MakeZRotation(rotation.z); GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scale.x, scale.y, scale.z); GLKMatrix4 translateMatrix = GLKMatrix4MakeTranslation(position.x, position.y, position.z); GLKMatrix4 modelMatrix = GLKMatrix4Multiply(translateMatrix,GLKMatrix4Multiply(scaleMatrix,GLKMatrix4Multiply(zRotationMatrix, GLKMatrix4Multiply(yRotationMatrix, xRotationMatrix)))); GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0, 0, 1, 0, 0, -5, 0, 1, 0); effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix); effect.transform.projectionMatrix = GLKMatrix4MakePerspective(0.125*M_TAU, 1.0, 2, 0); effect.texture2d0.name = wallTexture.name; [effect prepareToDraw]; glEnable(GL_DEPTH_TEST); glEnable(GL_CULL_FACE); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices); glEnableVertexAttribArray(GLKVertexAttribTexCoord0); glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates); glDrawArrays(GL_TRIANGLES, 0, 18); glDisableVertexAttribArray(GLKVertexAttribPosition); glDisableVertexAttribArray(GLKVertexAttribTexCoord0); }

    Read the article

  • Perl: Compare and edit underlying structure in hash

    - by Mahfuzur Rahman Pallab
    I have a hash of complex structure and I want to perform a search and replace. The first hash is like the following: $VAR1 = { abc => { 123 => ["xx", "yy", "zy"], 456 => ["ab", "cd", "ef"] }, def => { 659 => ["wx", "yg", "kl"], 456 => ["as", "sd", "df"] }, mno => { 987 => ["lk", "dm", "sd"] }, } and I want to iteratively search for all '123'/'456' elements, and if a match is found, I need to do a comparison of the sublayer, i.e. of ['ab','cd','ef'] and ['as','sd','df'] and in this case, keep only the one with ['ab','cd','ef']. So the output will be as follows: $VAR1 = { abc => { 123 => ["xx", "yy", "zy"], 456 => ["ab", "cd", "ef"] }, def => { 659 => ["wx", "yg", "kl"] }, mno => { 987 => ["lk", "dm", "sd"] }, } So the deletion is based on the substructure, and not index. How can it be done? Thanks for the help!! Lets assume that I will declare the values to be kept, i.e. I will keep 456 = ["ab", "cd", "ef"] based on a predeclared value of ["ab", "cd", "ef"] and delete any other instance of 456 anywhere else. The search has to be for every key. so the code will go through the hash, first taking 123 = ["xx", "yy", "zy"] and compare it against itself throughout the rest of the hash, if no match is found, do nothing. If a match is found, like in the case of 456 = ["ab", "cd", "ef"], it will compare the two, and as I have said that in case of a match the one with ["ab", "cd", "ef"] would be kept, it will keep 456 = ["ab", "cd", "ef"] and discard any other instances of 456 anywhere else in the hash, i.e. it will delete 456 = ["as", "sd", "df"] in this case.

    Read the article
