Search Results

Search found 529 results on 22 pages for 'quantum jumping'.

Page 17/22 | < Previous Page | 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Algorithm for finding symmetries of a tree

    - by Paxinum
    I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them). The sectors live in the complex plane, and for n even, sectors 0 and n/2 are bisected by the real axis, and the sectors are evenly spaced. These branches meet at certain points, called junctions. Each junction is adjacent to a subset of the sectors (at least 3 of them). Specifying the junctions (in prefix order, let's say, starting from the junction adjacent to sectors 0 and 1), and the distances between the junctions, uniquely describes the tree. Now, given such a representation, how can I see if it is symmetric wrt the real axis? For example, for n=6, the tree (0,1,5)(1,2,4,5)(2,3,4) has three junctions on the real line, so it is symmetric wrt the real axis. If the distance between (015) and (1245) is equal to the distance from (1245) to (234), it is also symmetric wrt the imaginary axis. The tree (0,1,5)(1,2,5)(2,4,5)(2,3,4) has 4 junctions, and it is never symmetric wrt either the imaginary or the real axis, but it has 180-degree rotation symmetry if the distances between the first two and the last two junctions in the representation are equal. Edit: This is actually for my research. I have posted the question at mathoverflow as well, but my days in competition programming tell me that this is more like an IOI task. Code in Mathematica would be excellent, but Java, Python, or any other language readable by a human suffices. Here are some examples (pretend the double edges are single and we have a tree): http://www2.math.su.se/~per/files.php?file=contr_ex_1.pdf http://www2.math.su.se/~per/files.php?file=contr_ex_2.pdf http://www2.math.su.se/~per/files.php?file=contr_ex_5.pdf Example 1 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,3). Example 2 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,1). Example 5 is described as (0,1,4,5)(1,2,3,4) with distances (2). So, given the description/representation, I want to find some algorithm to decide if it is symmetric wrt the real axis, the imaginary axis, and rotation by 180 degrees. The last example has 180-degree symmetry. (These symmetries correspond to special kinds of potentials in the Schroedinger equation, which have nice properties in quantum mechanics.)
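
    A rough sketch of one possible approach (not from the question): if the prefix representation is first expanded into an explicit edge list (each edge being two junctions plus the distance between them), then each symmetry is just a relabelling of the sectors, and the tree is symmetric exactly when the relabelled edge list equals the original one. The sector maps below assume the numbering convention described above (n even, sector 0 bisected by the real axis, sectors counterclockwise) and that every junction is uniquely identified by its set of adjacent sectors.

        // Canonical key for a junction given as an array of sector indices.
        function junctionKey(sectors) {
          return sectors.slice().sort(function (a, b) { return a - b; }).join(",");
        }

        // Canonical key for one edge: two junctions plus the distance between them.
        function edgeKey(j1, j2, dist) {
          var keys = [junctionKey(j1), junctionKey(j2)].sort();
          return keys[0] + "|" + keys[1] + "|" + dist;
        }

        // edges: array of [junctionA, junctionB, distance]; mapSector relabels one sector index.
        function invariantUnder(edges, mapSector) {
          var original = edges.map(function (e) { return edgeKey(e[0], e[1], e[2]); }).sort();
          var mapped = edges.map(function (e) {
            return edgeKey(e[0].map(mapSector), e[1].map(mapSector), e[2]);
          }).sort();
          return original.join(";") === mapped.join(";");
        }

        // The three candidate symmetries for n sectors.
        function symmetries(edges, n) {
          var mod = function (x) { return ((x % n) + n) % n; };
          return {
            realAxis:      invariantUnder(edges, function (k) { return mod(n - k); }),
            imaginaryAxis: invariantUnder(edges, function (k) { return mod(n / 2 - k); }),
            rotation180:   invariantUnder(edges, function (k) { return mod(k + n / 2); })
          };
        }

        // Example 5 from the question: (0,1,4,5)(1,2,3,4) with distance (2), n = 6.
        console.log(symmetries([[[0, 1, 4, 5], [1, 2, 3, 4], 2]], 6));

    For example 5 this reports only the 180-degree rotation symmetry, matching the description above; expanding the prefix representation into the edge list (in particular, deciding which junction each distance belongs to) is left out here, since that detail is not fully specified in the question.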

    Read the article

  • Powershell $LastExitCode=0 but $?=False . Redirecting stderr to stdout gives NativeCommandError

    - by Colonel Panic
    Can anyone explain Powershell's surprising behaviour in the second example below? First, an example of sane behaviour: PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?" Hello from standard error $LastExitCode=0 and $?=True No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They are equal to True and 0 respectively, as expected. However, if I ask Powershell to redirect standard error to standard output over the first command, I get a NativeCommandError: PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?" cmd.exe : Hello from standard error At line:1 char:4 + cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?" + CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError $LastExitCode=0 and $?=False My first question: why the NativeCommandError? Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? Powershell's docs about_Automatic_Variables don't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that. Here's how I came across this behaviour in the real world (simplified). It really is FUBAR. I was calling one Powershell script from another. The inner script: cmd /c "echo Hello from standard error 1>&2" if (! $?) { echo "Job failed. Sending email.." exit 1 } # do something else Running this simply as .\job.ps1, it works fine, and no email is sent. However, I was calling it from another Powershell script, logging to a file: .\job.ps1 2>&1 > log.txt. In this case, an email is sent! Here, the act of observing a phenomenon changes its outcome. This feels like quantum physics rather than scripting! [Interestingly: .\job.ps1 2>&1 may or may not blow up depending on where you run it]

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose Javascript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client side storage, the application has to check to see if the database is initialized before starting up the ORM since it inspects the DB tables to construct the classes. My problem is that this sequence of operations (and if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer and results in some complex call graphs and whatnot. So, I ask these questions: How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function? Do you have any general strategies or links for async/event driven programming and keeping the call graph simple and understandable? Do you have any suggestions for Javascript ORMs/meta object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic? Am I just a big newb and should be able to work this out with ease? Thanks folks!
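
    One general technique that keeps this kind of asynchronous initialisation readable is to name each step and drive the steps from a tiny sequence runner, instead of nesting anonymous callbacks. This is only a sketch: the step names below are hypothetical, not Joose or Titanium APIs.

        // Runs the steps one after another; each step receives a callback and
        // calls it (optionally with an error) when it has finished.
        function runSequence(steps, done) {
          var i = 0;
          function next(err) {
            if (err) { return done(err); }            // stop on the first failure
            if (i >= steps.length) { return done(); } // all steps finished
            steps[i++](next);
          }
          next();
        }

        // Hypothetical initialisation steps for a client-side database.
        function openDatabase(cb)  { /* open the HTML5 storage ... */ cb(); }
        function ensureSchema(cb)  { /* create tables if they are missing ... */ cb(); }
        function loadSeedData(cb)  { /* insert seed rows ... */ cb(); }
        function startOrm(cb)      { /* let the ORM inspect the tables ... */ cb(); }

        runSequence([openDatabase, ensureSchema, loadSeedData, startOrm], function (err) {
          if (err) { throw err; }
          // application start-up continues here
        });

    The call graph stays flat because the order of the steps is visible in one array literal rather than implied by nesting.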

    Read the article

  • Alert Dialog with custom layout failing

    - by cmptrer
    So this is related to a question I asked earlier. I am trying to display an alert using a specified layout. My layout is: And the code to call and show the alert dialog is: Context mContext = getApplicationContext(); AlertDialog.Builder builder = new AlertDialog.Builder(mContext); // use a custom View defined in xml View view = LayoutInflater.from(mContext).inflate(R.layout.sell_dialog, (ViewGroup) findViewById(R.id.layout_root)); builder.setView(view); builder.setPositiveButton(android.R.string.ok, new OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // do whatever you want with the input } }); AlertDialog alertDialog = builder.create(); alertDialog.show(); When I run it I get an error saying: Uncaught handler: thread main exiting due to uncaught exception android.view.WindowManager$BadTokenException: Unable to add window -- token null is not for an application I've looked through the Android development site and can't figure it out. I think I'm just missing something obvious but the fix isn't jumping out at me. How can I get this alert dialog to display?

    Read the article

  • UIAlertViewDelegate method didDismissWithButtonIndex gets called while the phone is sleeping/locked.

    - by Rob
    I have a UIAlertView whose didDismissWithButtonIndex delegate method pops the view controller (same class, it's the alertview delegate and the viewcontroller) to return the user to the previous screen. The issue is that when you lock the phone before [alert show] is called, something is calling didDismissWithButtonIndex while the phone is locked. Since the response to that is to pop the view controller, which releases and deallocs it, I crash on the callback. What is causing this phantom button press? Seems like a framework bug, but I hate jumping to that conclusion. I'm definitely not hitting the button, because I hit a breakpoint in my code right before it's displayed. Then I lock the phone. Then I continue. I see it do the show, return to the event loop, and then, while the phone is still locked, hit my breakpoint in didDismissWithButtonIndex. There are a few internet/forum postings about similar spurious delegate calls, but no concrete answers. This is on the simulator, and the device, both OS 2.2 and OS 3.0. I'm assuming I'm missing something, but what? Update: Yeah, I created a simple project with just two view controllers, where when the 2nd view controller displays it creates the alert, and shows it. Then I NSLog in the delegate method, and when the phone is locked, it fires once while locked, and then again when it's unlocked and the button is clicked...2 log messages. But when not locked, there's only one. I guess I'll open an issue, but it seems awfully obvious to have survived this long without anyone complaining. :-) I'm going to try and work around it by setting an isActive flag when the willResignActive/didBecomeActive notifications arrive, and skipping the delegate body if the app isn't active. Update: I went ahead in July after I posted this and created radar 7097363 for this issue. There's been no response. The workaround in practice works quite well, checking the active status when processing the delegate, and skipping the action if the app is inactive.

    Read the article

  • UINavigationController placement problem

    - by Chonch
    Hey, I'm having this problem and I can't seem to figure out why! I want to use a UINavigationController, but I don't want to use it throughout the entire application. So, in the UIViewController I want to add it to, I define: UINavigationController *navigationController; and the @property, @synthesize and dealloc. In the viewDidLoad method, I add: navigationController = [[UINavigationController alloc] init]; navigationController.navigationBar.barStyle = UIBarStyleBlack; and when the relevant UIViewController is loaded, I add: [navigationController pushViewController:viewController animated:NO]; and make the old view disappear and the UINavigationController's view appear. Everything works fine except that the navigation bar appears about 20 pixels too low (dragging the entire view down and out of the screen). If I set the UINavigationController's view frame parameter manually to start at 0,0, it jumps there, but only after about a second, so that you can see the navigation bar starting at a certain location and only then jumping to another (and it is ugly!). I have done the exact same thing in the past and had no problem with it. Does anyone know what may cause this? Thanks,

    Read the article

  • How important is managing memory in Objective-C?

    - by Alex Mcp
    Background: I'm (jumping on the bandwagon and) starting to learn about iPhone/iPad development and Objective-C. I have a great background in web development and most of my programming is done in JavaScript (no libraries), Ruby, and PHP. Question: I'm learning about allocating and releasing memory in Objective-C, and I see it as quite a tricky task to layer on top of actually getting the farking thing to run. I'm trying to get a sense of applications that are out there and what will happen with a poorly memory-managed program. A) Are apps usually released with no memory leaks? Is this a feasible goal, or do people more realistically just excise the worst offenders and that's ok? B) If I make an NSString for a title of a view, let's say, and forget to deallocate it, does this really only become a problem if I recreate that string repeatedly? I imagine what I'm doing is creating an overhead of the memory needed to store that string, so it's probably quite piddling (a few bytes?). However, if I have a rapidly looping cycle in a game that 'leaks' an int every cycle or something, that would overflow the app quite quickly. Are these assumptions correct? Sorry if this isn't up the community-wiki alley, I'm just trying to get a handle on how to think about memory and how careful I'll need to be. Any anecdotes or App Store-submitted app experiences would be awesome to hear as well.

    Read the article

  • What's with the love of dynamic languages?

    - by Kibbee
    It seems that everybody is jumping on the dynamic, non-compiled bandwagon lately. I've mostly only worked in compiled, statically typed languages (C, Java, .Net). The experience I have with dynamic languages is stuff like ASP (VBScript), JavaScript, and PHP. Using these technologies has left a bad taste in my mouth when thinking about dynamic languages. Things that usually would have been caught by the compiler, such as misspelled variable names and assigning a value of the wrong type to a variable, don't occur until runtime. And even then, you may not notice an error, as it just creates a new variable and assigns some default value. I've also never seen intellisense work well in a dynamic language, since, well, variables don't have any explicit type. What I want to know is: what do people find so appealing about dynamic languages? What are the main advantages in terms of things that dynamic languages allow you to do that can't be done, or are difficult to do, in compiled languages? It seems to me that we decided a long time ago that things like uncompiled ASP pages throwing runtime exceptions were a bad idea. Why is there a resurgence of this type of code? And why does it seem, to me at least, that Ruby on Rails doesn't really look like anything you couldn't have done with ASP 10 years ago?

    Read the article

  • Is Android's motion event handling accurate?

    - by Peterdk
    Bug I have a weird bug in my piano app. Sometimes keys (and thus notes) hang. I did a lot of debugging and narrowed it down to what looks like Android's inaccuracy in motion event handling: DEBUG/(2091): ACTION_DOWN A4 DEBUG/(2091): KeyDown: A4 DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_UP B4 //HOW CAN THIS BE???? DEBUG/(2091): KeyUp: B4 DEBUG/(2091): Stream is null, can't stop DEBUG/(2091): Hanging Note: A4 X=240-287 EventX=292 Y=117-200 EventY=164 DEBUG/(2091): KeyUp Note: B4 X=288-335 EventX=292 Y=117-200 EventY=164 Clearly it can be seen here that out of nowhere I suddenly have an ACTION_UP for another note. Shouldn't I definitely get an ACTION_MOVE first? As shown at the end of the log, it's definitely not an error in region detection, since the ACTION_UP event is clearly in the B4 region. Logging implementation details: Every onTouchEvent() call is logged, so the log is accurate. The relevant pseudo-code for the ACTION_MOVE logging is: Key oldKey = Key.get(event.getHistoricalX(), event.getHistoricalY()); Key newKey = Key.get(event.getX(), event.getY()); Question: Is this normal behaviour for Android (the jumping in coordinates)? Am I missing something?

    Read the article

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "Use jQuery" seems to be the answer when somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time, and can help you out a lot, especially when you are doing a lot of fancy JavaScript in your site. However, in instances like this, and in many other instances, it seems like it's just jumping around the problem instead of answering the question. I also feel like this builds too much dependency on libraries. I've seen way too many developers that simply rely too much on libraries, and if they encounter a situation where they didn't have the library, they would be completely unable to function. I feel like there are already enough developers who don't know JavaScript, without just telling everybody to not learn JavaScript, and use jQuery. So, just to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript, when most of the functionality of jQuery isn't being used? Should developers be fluent in the use of bare JavaScript so they don't get too dependent on using libraries? [Additional related conversation topic] Does the existence of jQuery give too much slack to web browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies in JavaScript, what pressure is there on browser makers to ensure that their JavaScript engine works as it should? I feel like this extrapolates the same problem discussed in SO Podcast #36 of "be conservative in what you send, liberal in what you accept". By being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use, and extending the problem.

    Read the article

  • PHP $_FILES error code returning an unexpected error

    - by EmmyS
    I have some code that allows users to upload multiple files at once. It was never getting to a specific point after the upload, so I put in an echo to test for the value of the error code, and it's returning a value that I'm not sure I understand. Here's the code: $tmpTarget = PCBUG_UPLOADPATH; foreach ($_FILES["attachments"]["error"] as $key => $error) { if ($error == UPLOAD_ERR_OK) { $tmp_name = $_FILES["attachments"]["tmp_name"][$key]; $name = str_replace(" ", "_", $_FILES["attachments"]["name"][$key]); move_uploaded_file($tmp_name, "$tmpTarget/$name"); @unlink($_FILES["attachments"]["tmp_name"][$key]); } else { $errorFlag = true; echo "error = $error"; exit; } } The code that creates the attachments field looks like this: for($i=1; $i<=$max_no_img; $i++){ echo "<input type=file name='attachments[]' class='bginput'><br />"; } where $max_no_img is a variable set further up in the code, and PCBUG_UPLOADPATH is a constant defined in an included file. Here's what's confusing: after I submit my form, I go and look in my uploads directory, and the files I've selected through the form are there - they uploaded correctly. However, the code is jumping into the else clause and $error is returning 4, which the PHP manual indicates means that no file was uploaded. Any ideas? The files very clearly are getting where they're supposed to. Is there some other definition of "uploaded" that isn't happening?

    Read the article

  • Fork in Perl not working inside a while loop reading from file

    - by Sag
    Hi, I'm running a while loop reading each line in a file, and then forking a child process with the data from each line. After N lines I want to wait for the child processes to end and continue with the next N lines, etc. It looks something like this: while ($w=<INP>) { # ignore file header if ($w=~m/^\D/) { next;} # get data from line chomp $w; @ws = split(/\s/,$w); $m = int($ws[0]); $d = int($ws[1]); $h = int($ws[2]); # only for some days in the year if (($m==3)and($d==15) or ($m==4)and($d==21) or ($m==7)and($d==18)) { die "could not fork" unless defined (my $pid = fork); unless ($pid) { some instructions here using $m, $d, $h ... } push @qpid,$pid; # when all processors are busy, wait for child processes if ($#qpid==($procs-1)) { for my $pid (@qpid) { waitpid $pid, 0; } reset 'q'; } } } close INP; This is not working. After the first round of processes I get some PIDs equal to 0, the @qpid array gets mixed up, and the file starts to get read at (apparently) random places, jumping back and forth. The end result is that most lines in the file get read two or three times. Any ideas? Thanks a lot in advance, S.

    Read the article

  • Flash - Uploading to and Downloading from localhost

    - by Md Derf
    I have an online Flash application that acts as a front end for a server application built in Delphi. The server can be installed/used on a remote computer, or a personal version can be downloaded and the Flash app pointed at localhost to use it. However, Flash has issues with using the POST and GET functions on localhost, which makes uploading data files and downloading results files difficult. To get past the difficulty with downloading results files I'm planning to just have the server app serve the results file as an attachment and have the Flash app open the address of the file up in another browser window using external interface. First off, is this likely to cause similar security issues? I.e., will Flash see "localhost" in the external interface call and stop it from working, the same as when I try to use POST/GET functions with localhost? Secondly, for upload this seems just a little trickier. I'm planning on doing something similar, having Flash use external interface to open a PHP script for a file upload. Is this feasible and, again, will Flash still have security issues? Lastly, if anyone knows how to get Flash to execute POST and GET functions with localhost addresses, I'd love to have that information to avoid all this jumping through hoops.

    Read the article

  • Is MVVM killing Silverlight development?

    - by DeanMc
    This is a question I have had rattling around in my head for some time. I had a chat with a guy the other night who told me he would not be using the navigation framework because he could not figure out how it works with MVVM. As much as I tried to explain that patterns should be taken with a pinch of salt, he would not listen. My point is this: patterns are great when they solve some problem. Sometimes only part of the pattern solves a particular problem while the other parts of it cause different problems. The goal of any developer is to build a solid application using a combination of patterns, know-how and foresight. I feel MVVM is becoming the one pattern to rule them all. As it is not directly supported by .Net, some fancy business is needed to make it work. I feel that people are missing the point of the pattern, which is loosely coupled, testable code, and are instead jumping through hoops and missing out on great experiences trying to follow MVVM to the letter. MVVM is great, but I wish it came with a warning or disclaimer for newbies, as my fear is people will shy away from Silverlight development for fear of being smacked with the MVVM stick. EDIT: Can I just add as an edit: I use and agree with MVVM as a pattern, and I know when it is and isn't feasible in my projects. My issue is with the encompassing nature it is taking on, as if it HAS to be used as part of development. It is being used as an integral feature and not a pattern, which is what it is.

    Read the article

  • jquery-infinite-carousel - jump 3 items instead of one

    - by Rob B
    Successfully used jquery-infinite-carousel in the past but for my current project I need it to loop forever AND also jump 3 items at a time. Example of how it works jumping 1 at a time here. jQuery.fn.carousel = function(previous, next, options){ var sliderList = jQuery(this).children()[0]; if (sliderList) { var increment = jQuery(sliderList).children().outerWidth("true"), elmnts = jQuery(sliderList).children(), numElmts = elmnts.length, sizeFirstElmnt = increment, shownInViewport = Math.round(jQuery(this).width() / sizeFirstElmnt), firstElementOnViewPort = 1, isAnimating = false; //console.log("increment = " + increment); //console.log("numElmts = " + numElmts); //console.log("shownInViewport = " + shownInViewport); for (i = 0; i < shownInViewport; i++) { jQuery(sliderList).css('width',(numElmts+shownInViewport)*increment + increment + "px"); jQuery(sliderList).append(jQuery(elmnts[i]).clone()); } jQuery(previous).click(function(event){ if (!isAnimating) { if (firstElementOnViewPort == 1) { jQuery(sliderList).css('left', "-" + numElmts * sizeFirstElmnt + "px"); firstElementOnViewPort = numElmts; console.log("firstElementOnViewPort = " + firstElementOnViewPort); } else { firstElementOnViewPort--; } jQuery(sliderList).animate({ left: "+=" + increment, y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); jQuery(next).click(function(event){ if (!isAnimating) { if (firstElementOnViewPort > numElmts) { firstElementOnViewPort = 2; jQuery(sliderList).css('left', "0px"); } else { firstElementOnViewPort++; } jQuery(sliderList).animate({ left: "-=" + (increment), y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); } };
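
    An untested sketch of one way to adapt the plugin: treat a "page" of three items as the unit of movement, clone one full page at the end so the loop can wrap, and slide by three item widths per click. It assumes the total number of items is a multiple of the page size and that the viewport shows exactly one page.

        jQuery.fn.carousel = function (previous, next, options) {
          var perStep = (options && options.itemsPerStep) || 3;   // hypothetical new option
          var sliderList = jQuery(this).children()[0];
          if (!sliderList) { return; }
          var itemWidth = jQuery(sliderList).children().outerWidth(true),
              increment = itemWidth * perStep,                    // slide a whole page at a time
              elmnts = jQuery(sliderList).children(),
              numElmts = elmnts.length,
              first = 0,
              isAnimating = false;
          // clone one full page so wrapping around is seamless
          jQuery(sliderList).css('width', (numElmts + perStep) * itemWidth + "px");
          for (var i = 0; i < perStep; i++) {
            jQuery(sliderList).append(jQuery(elmnts[i]).clone());
          }
          jQuery(next).click(function () {
            if (isAnimating) { return; }
            isAnimating = true;
            jQuery(sliderList).animate({ left: "-=" + increment }, "swing", function () {
              first += perStep;
              if (first >= numElmts) {          // we are on the cloned page: snap back to the start
                first = 0;
                jQuery(sliderList).css('left', "0px");
              }
              isAnimating = false;
            });
          });
          jQuery(previous).click(function () {
            if (isAnimating) { return; }
            isAnimating = true;
            if (first === 0) {                  // jump (invisibly) to the cloned page, then animate back
              first = numElmts;
              jQuery(sliderList).css('left', "-" + numElmts * itemWidth + "px");
            }
            jQuery(sliderList).animate({ left: "+=" + increment }, "swing", function () {
              first -= perStep;
              isAnimating = false;
            });
          });
        };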

    Read the article

  • Add delay and focus for a dead simple accordion jquery

    - by kuswantin
    I found the code somewhere in the world, and due to its dead simple nature, I found some things I need to fix, like when hovering over the trigger, the content jumps up and down before it settles. I have tried to change the trigger from the title of the accordion block to the whole accordion block, to no avail. I need help to add a delay and only do the accordion when the cursor is focused in the block. I have corrected my CSS to make sure the block is covering the hidden part as well when toggled. Here is my latest modified code: var acTrig = ".accordion .title"; $(acTrig).hover(function() { $(".accordion .content").slideUp("normal"); $(this).next(".content").slideDown("normal"); }); $(acTrig).next(".content").hide(); $(acTrig).next(".content").eq(0).css('display', 'block'); This is the HTML: <div class="accordion clearfix"> <div class="title">some Title 1</div> <div class="content"> some content blah </div> </div> <div class="accordion clearfix"> <div class="title">some Title 2</div> <div class="content"> some content blah </div> </div> <div class="accordion clearfix"> <div class="title">some Title 3</div> <div class="content"> some content blah </div> </div> I think I need a way to stop event bubbling somewhere around the code, but can't get it right. Any help would be very much appreciated. Thanks as always.
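
    A possible tweak (sketch only): delay the slide with a timer so that quickly passing the cursor over a title does not trigger the accordion, and cancel the timer when the cursor leaves before the delay has elapsed. The hoverIntent plugin does essentially the same thing; excluding the hovered block's own content from the slideUp also avoids the up-and-down jump.

        var acTrig = ".accordion .title";
        var hoverDelay = 300;   // milliseconds; tune to taste

        $(acTrig).hover(function () {
          var title = $(this);
          // remember the timer on the element so it can be cancelled on mouseout
          title.data("acTimer", setTimeout(function () {
            $(".accordion .content").not(title.next(".content")).slideUp("normal");
            title.next(".content").slideDown("normal");
          }, hoverDelay));
        }, function () {
          clearTimeout($(this).data("acTimer"));
        });

        $(acTrig).next(".content").hide();
        $(acTrig).next(".content").eq(0).css('display', 'block');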

    Read the article

  • How to exit the current activity to the homescreen (without using the "Home" button)?

    - by steff
    Hi everyone, I am sure this will have been answered but I proved unable to find it. So please excuse my redundancy. What I am trying to do is emulate the "Home" button which takes one back to Android's homescreen. So here is what causes me problems: I have 3 launcher activities. The first one (which is connected to the homescreen icon) is just a (password protected) configuration activity. It will not be used by the user (just admin). One of the other 2 (both accessed via an app widget) is a questionnaire app. I allow jumping back between questions via the Back button or a GUI back button as well. When the questionnaire is finished I sum up the answers given and provide a "Finish" button which should take the user back to the home screen. For the questionnaire app I use a single activity (called ItemActivity) which calls itself (is that recursion as well when using intents?) to jump from one question to another: Questionnaire.serializeToXML(); Intent i = new Intent().setClass(c, ItemActivity.class); if(Questionnaire.instance.getCurrentItemNo() == Questionnaire.instance.getAmountOfItems()) { Questionnaire.instance.setCompleted(true); } else Questionnaire.instance.nextItem(); startActivity(i); The final screen shows something like "Thank you for participating" as well as the formerly described button which should take one back to the homescreen. But I don't really get how to exit the Activity properly. I've e.g. used this.finish(); but this strangely brings up the "Thank you" screen again. So how can I just exit by jumping back to the homescreen? Sorry for the inconvenience. Regards, Steff

    Read the article

  • lots of backbone views - performance issues?

    - by ksol
    tl;dr: I wonder if having lots (100+ for the moment, potentially up to 1000/2000 or more) of Backbone views (one per cell of a table) is too heavy or not. The project I'm working on revolves around a planning view. There is one row per user that covers 6 hours of a day, each hour split into 4 slots of 15 minutes. This planning is used to add "reservations" when clicking on a slot, and should handle hovering of the correct slots, and also handle when it is NOT possible to make a reservation - i.e. prevent the user from clicking on an "unavailable" slot. There are many reasons why a slot can't be clicked on: the user is not available at this time, or the user is in a reservation, or the app needs to "force" a delay slot between two reservations. Reservations (a div) are rendered in a slot (a cell of a table), and by toying with dimensions, cover the right number of slots. All this screen is handled with Backbone. So for each slot I'm hovering over, I need to check whether I can do a reservation here or not. As of this moment, I do this by toying with the data attributes on the slots: when a reservation object is added, the slots covered are "enhanced" with (among others) the reservation object (the Backbone view object). But in some cases, which I don't quite have a grasp on yet, it gets mixed up, and when the reservation view is removed, the slots are not "cleaned up": the previous class is not reset correctly. It is probably something I've done wrong or badly, but this is only going to get heavier; I think I should use another class of Backbone views here, but I'm afraid the number of slots, and therefore of view objects, will be high and cause performance issues. I don't know much about JS perf so I'd like to have some feedback before jumping on that train. Any other advice on how to do this would be quite welcome too. Thanks for your time. If this is not clear enough, tell me, I'll try and rephrase it.
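
    For what it is worth, the usual way to avoid one Backbone view per cell is to let a single view own the whole table and rely on jQuery event delegation, so one handler serves every slot. A minimal sketch (class names and data attributes here are made up for illustration):

        var PlanningView = Backbone.View.extend({
          events: {
            "click .slot":     "onSlotClick",
            "mouseover .slot": "onSlotHover"
          },

          onSlotClick: function (e) {
            var cell = this.$(e.currentTarget);
            if (cell.hasClass("unavailable")) { return; }   // blocked slot: ignore the click
            var user = cell.data("user"),
                slot = cell.data("slot");
            // create the reservation model for (user, slot) here ...
          },

          onSlotHover: function (e) {
            // highlight the run of slots a new reservation would cover ...
          }
        });

    Only the reservation divs themselves then need their own views, which keeps the number of view objects proportional to reservations rather than to slots.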

    Read the article

  • Problem with setjmp/longjmp

    - by user294732
    The code below is just not working. Can anybody point out why? #define STACK_SIZE 1524 static void mt_allocate_stack(struct thread_struct *mythrd) { unsigned int sp = 0; void *stck; stck = (void *)malloc(STACK_SIZE); sp = (unsigned int)&((stck)); sp = sp + STACK_SIZE; while((sp % 8) != 0) sp--; #ifdef linux (mythrd->saved_state[0]).__jmpbuf[JB_BP] = (int)sp; (mythrd->saved_state[0]).__jmpbuf[JB_SP] = (int)sp-500; #endif } void mt_sched() { fprintf(stdout,"\n Inside the mt_sched"); fflush(stdout); if ( current_thread->state == NEW ) { if ( setjmp(current_thread->saved_state) == 0 ) { mt_allocate_stack(current_thread); fprintf(stdout,"\n Jumping to thread = %u",current_thread->thread_id); fflush(stdout); longjmp(current_thread->saved_state, 2); } else { new_fns(); } } } All I am trying to do is run new_fns() on a new stack. But it is showing a segmentation fault at new_fns(). Can anybody point out what's wrong?

    Read the article

  • Alternative or succesor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS, and this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and have some resources available, including the application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning. Move to a different key-value database, such as memcachedb or Tokyo Cabinet. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • Android Compass orientation on unreliable (Low pass filter)

    - by madsleejensen
    Hi all, I'm creating an application where I need to position an ImageView depending on the orientation of the device. I use the values from the magnetic field and accelerometer sensors to calculate the device orientation with SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerValues, magneticFieldValues) SensorManager.getOrientation(rotationMatrix, values); double degrees = Math.toDegrees(values[0]); My problem is that the positioning of the ImageView is very sensitive to changes in the orientation, making the ImageView constantly jump around the screen (because the degrees change). I read that this can be because my device is close to things that can affect the magnetic field readings. But this is not the only reason, it seems. I tried downloading some applications and found that the "3D compass" application remains extremely steady in its readings; I would like the same behavior in my application. I read that I can tweak the "noise" of my readings by adding a "low pass filter", but I have no idea how to implement this (because of my lack of math). I'm hoping someone can help me create a steadier reading on my device, where a little movement of the device won't affect the current orientation. Right now I do a small if (Math.abs(lastReadingDegrees - newReadingDegrees) > 1) { updatePosition() } to filter a bit of the noise. But it's not working very well :)
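
    The "low pass filter" usually mentioned for this is just exponential smoothing: keep a running value and move it a small fraction of the way towards each new reading. A minimal sketch follows (written in JavaScript here only for brevity; the same two lines translate directly into the Java sensor callback). An alpha near 0 is very smooth but slow to follow; near 1 it is responsive but noisy. The wrap-around handling assumes headings in degrees from 0 to 360.

        var alpha = 0.1;            // smoothing factor, tune between 0 and 1
        var filteredDegrees = null;

        function onNewReading(rawDegrees) {
          if (filteredDegrees === null) {
            filteredDegrees = rawDegrees;   // seed the filter with the first sample
          } else {
            // shortest-path difference so the filter does not spin the long way
            // around when the heading crosses the 359 -> 0 boundary
            var diff = ((rawDegrees - filteredDegrees + 540) % 360) - 180;
            filteredDegrees = (filteredDegrees + alpha * diff + 360) % 360;
          }
          return filteredDegrees;
        }

    Feeding the filtered value (rather than the raw one) into the ImageView position should remove most of the jitter without the hard 1-degree threshold.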

    Read the article

  • iPhone Apps, Objective C, a Graph, and Functions vs. Objects (or something like that)

    - by BillEccles
    Gentleones, I've got a UIImageView that I have in a UIView. This particular UIImageView displays a PNG of a graph. It's important for this particular view controller and another view controller to know what the graph "looks like," i.e., the function described by the PNG graph. These other view controllers need to be able to call a "function" of some sort with the X value returning the Y value. Please ignore the fact (from an outsider's perspective) that I could probably generate the graph programmatically. So my questions are: 1 - Where should I put this function? Should it go in one view controller or the other, or should it go in its own class or main or...? 2 - Should it be an object in some way? Or should it be a straight function? (In either case, where's it belong?) Pardon the apparent n00b-iness of the question. It's 'cause I'm honestly a n00b! This OOP stuff is giving my head a spin having been a procedural programmer for 30 years. (And I choose to learn it by jumping into an iPhone app. Talk about baptism by fire!) Thanks in advance,Bill

    Read the article

  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs. Both these jobs were in a field that I am most familiar with (C# / MSSQL), but I have quit both jobs for the same reason: unmanageable code and bad (loose) company structure. There was something in common with both these jobs: small companies (in one I was the only developer). Currently I am in the following position: being given written instructions which are almost impossible to follow (somewhat of a fool's errand). We are given short time constraints, but seldom asked how long work will take, and when we do give an estimate it is always too long and needs to be shorter (and when it ends up taking longer than they need it to take, it's always our fault). There is no time for proper documenting, but we get blamed for not documenting (see previous point). Management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing. So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management and adoption of new technologies. EDIT: Let me add some more fuel to the fire ... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have been doing here. Everything is a giant learning curve. Because of this, about 30% of my time is spent learning what is going on with this new product (whose owner / original developer has left the company), 30% trying to find the relevant documentation that helps the whole thing make sense, 30% actually finding where to make the change, and 10% actually making the change.

    Read the article
