Search Results

Search found 3596 results on 144 pages for 'division by zero'.


  • Sliding collision response

    - by dbostream
    I have been reading plenty of tutorials about sliding collision responses yet I am not able to implement it properly in my project. What I want to do is make a puck slide along the rounded corner boards of a hockey rink. In my latest attempt the puck does slide along the boards but there are some strange velocity behaviors. First of all the puck slows down a lot pretty much right away and then it slides for awhile and stops before exiting the corner. Even if I double the speed I get a similar behavior and the puck does not make it out of the corner. I used some ideas from this document http://www.peroxide.dk/papers/collision/collision.pdf. This is what I have: Update method called from the game loop when it is time to update the puck (I removed some irrelevant parts). I use two states (current, previous) which are used to interpolate the position during rendering. public override void Update(double fixedTimeStep) { /* Acceleration is set to 0 for now. */ Acceleration.Zero(); PreviousState = CurrentState; _collisionRecursionDepth = 0; CurrentState.Position = SlidingCollision(CurrentState.Position, CurrentState.Velocity * fixedTimeStep + 0.5 * Acceleration * fixedTimeStep * fixedTimeStep); /* Should not this be affected by a sliding collision? and not only the position. */ CurrentState.Velocity = CurrentState.Velocity + Acceleration * fixedTimeStep; Heading = Vector2.NormalizeRet(CurrentState.Velocity); } private Vector2 SlidingCollision(Vector2 position, Vector2 velocity) { if(_collisionRecursionDepth > 5) return position; bool collisionFound = false; Vector2 futurePosition = position + velocity; Vector2 intersectionPoint = new Vector2(); Vector2 intersectionPointNormal = new Vector2(); /* I did not include the collision detection code, if a collision is detected the intersection point and normal in that point is returned. */ if(!collisionFound) return futurePosition; /* If no collision was detected it is safe to move to the future position. */ /* It is not exactly the intersection point, but slightly before. */ Vector2 newPosition = intersectionPoint; /* oldVelocity is set to the distance from the newPosition(intersection point) to the position it had moved to had it not collided. */ Vector2 oldVelocity = futurePosition - newPosition; /* Project the distance left to move along the intersection normal. */ Vector2 newVelocity = oldVelocity - intersectionPointNormal * oldVelocity.DotProduct(intersectionPointNormal); if(newVelocity.LengthSq() < 0.001) return newPosition; /* If almost no speed, no need to continue. */ _collisionRecursionDepth++; return SlidingCollision(newPosition, newVelocity); } What am I doing wrong with the velocity? I have been staring at this for very long so I have gone blind. I have tried different values of recursion depth but it does not seem to make it better. Let me know if you need more information. I appreciate any help. EDIT: A combination of Patrick Hughes' and teodron's answers solved the velocity problem (I think), thanks a lot! This is the new code: I decided to use a separate recursion method now too since I don't want to recalculate the acceleration in each recursion. 
public override void Update(double fixedTimeStep) { Acceleration.Zero();// = CalculateAcceleration(fixedTimeStep); PreviousState = new MovingEntityState(CurrentState.Position, CurrentState.Velocity); CurrentState = SlidingCollision(CurrentState, fixedTimeStep); Heading = Vector2.NormalizeRet(CurrentState.Velocity); } private MovingEntityState SlidingCollision(MovingEntityState state, double timeStep) { bool collisionFound = false; /* Calculate the next position given no detected collision. */ Vector2 futurePosition = state.Position + state.Velocity * timeStep; Vector2 intersectionPoint = new Vector2(); Vector2 intersectionPointNormal = new Vector2(); /* I did not include the collision detection code, if a collision is detected the intersection point and normal in that point is returned. */ /* If no collision was detected it is safe to move to the future position. */ if (!collisionFound) return new MovingEntityState(futurePosition, state.Velocity); /* Set new position to the intersection point (slightly before). */ Vector2 newPosition = intersectionPoint; /* Project the new velocity along the intersection normal. */ Vector2 newVelocity = state.Velocity - 1.90 * intersectionPointNormal * state.Velocity.DotProduct(intersectionPointNormal); /* Calculate the time of collision. */ double timeOfCollision = Math.Sqrt((newPosition - state.Position).LengthSq() / (futurePosition - state.Position).LengthSq()); /* Calculate new time step, remaining time of full step after the collision * current time step. */ double newTimeStep = timeStep * (1 - timeOfCollision); return SlidingCollision(new MovingEntityState(newPosition, newVelocity), newTimeStep); } Even though the code above seems to slide the puck correctly, please have a look at it. I have a few questions: if I don't multiply by 1.90 in the newVelocity calculation it doesn't work (I get a stack overflow when the puck enters the corner because the timeStep decreases very slowly - a collision is found early in every recursion), why is that? What does 1.90 really do and why 1.90? Also I have a new problem: the puck does not move parallel to the short side after exiting the curve; to be more exact it moves outside the rink (I am not checking for any collisions with the short side at the moment). When I perform the collision detection I first check that the puck is in the correct quadrant. For example the bottom-right corner is quadrant four, i.e. circleCenter.X < puck.X && circleCenter.Y < puck.Y - is this a problem? Or should the short side of the rink be the one to make the puck go parallel to it and not the last collision in the corner? EDIT2: This is the code I use for collision detection, maybe it has something to do with the fact that I can't make the puck slide (-1.0) but only reflect (-2.0): /* Point is the current position (not the predicted one) and quadrant is 4 for the bottom-right corner for example. */ if (GeometryHelper.PointInCircleQuadrant(circleCenter, circleRadius, state.Position, quadrant)) { /* The line is: from = state.Position, to = futurePosition. So a collision is detected when from is inside the circle and to is outside. */ if (GeometryHelper.LineCircleIntersection2d(state.Position, futurePosition, circleCenter, circleRadius, intersectionPoint, quadrant)) { collisionFound = true; /* Set the intersection point to slightly before the real intersection point (I read somewhere this was good to do because of floating point precision, not sure exactly how much though). 
*/ intersectionPoint = intersectionPoint - Vector2.NormalizeRet(state.Velocity) * 0.001; /* Normal at the intersection point. */ intersectionPointNormal = Vector2.NormalizeRet(circleCenter - intersectionPoint) } } When I set the intersection point, if I for example use 0.1 instead of 0.001 the puck travels further before it gets stuck, but for all values I have tried (including 0 - the real intersection point) it gets stuck somewhere (but I necessarily not get a stack overflow). Can something in this part be the cause of my problem? I can see why I get the stack overflow when using -1.0 when calculating the new velocity vector; but not how to solve it. I traced the time steps used in the recursion (initial time step is always 1/60 ~ 0.01666): Recursion depth Time step next recursive call [Start recursion, time step ~ 0.016666] 0 0,000985806527246773 [No collision, stop recursion] [Start recursion, time step ~ 0.016666] 0 0,0149596704364629 1 0,0144883449376379 2 0,0143155612984837 3 0,014224925727213 4 0,0141673917461608 5 0,0141265435314026 6 0,0140953966184117 7 0,0140704653746625 ...and so on. As you can see the collision is detected early in every recursive call which means the next time step decreases very slowly thus the recursion depth gets very big - stack overflow.
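
    A note on the 1.90 factor, since it is the crux of the question: the conventional response formula is v' = v - (1 + e) * n * (v · n), where n is the unit collision normal and e is a restitution coefficient. With e = 0 only the normal component is removed (a pure slide, the "-1.0" case), while e = 1 reflects it completely (the "-2.0" case), so 1.90 behaves almost like a full reflection with a small energy loss along the normal. The fragment below is only a sketch of that formula, assuming a Vector2 type with a DotProduct method like the one used in the post; it is not the poster's engine code.

        // Sketch only: generic response against a surface with unit normal n.
        // restitution = 0.0 -> slide along the surface; restitution = 1.0 -> full reflection.
        static Vector2 Respond(Vector2 velocity, Vector2 unitNormal, double restitution)
        {
            double alongNormal = velocity.DotProduct(unitNormal);
            return velocity - (1.0 + restitution) * unitNormal * alongNormal;
        }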

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression The UDF used in the blog does a fantastic task – it scans the entire HTML text and removes all the HTML tags. It keeps only valid text data without HTML tags. This is one of the quite commonly requested tasks many developers have to face every day. De-fragmentation of Database at Operating System to Improve Performance The operating system skips the MDF file while defragging the entire filesystem of the operating system. It is absolutely fine and there is no impact of the same on performance. Read the entire blog post for my conversation with our network engineers. Delay Function – WAITFOR clause – Delay Execution of Commands How do you delay execution of commands in SQL Server – of course by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script. Find Length of Text Field To measure the length of TEXT fields the function is DATALENGTH(textfield). LEN will not work for text fields. As of SQL Server 2005, developers should migrate all the text fields to VARCHAR(MAX) as that is the way forward. Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} There are three ways to retrieve the current datetime in SQL SERVER. CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} Explanation and Comparison of NULLIF and ISNULL An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In one way they are opposite to each other. Here is my question to you - how do you create an infinite loop using NULLIF and ISNULL, if this is even possible? 2008 Introduction to SERVERPROPERTY and example SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name etc. SQL Server Start Time We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how you can do the same. Find Current Identity of Table Many times we need to know what the current identity of the column is. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity. Create Check Constraint on Column Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things rather than just creating a constraint to make a table column follow rules. I suggest that constraints are a very useful concept and every SQL developer should pay good attention to this subject. 2009 List Schema Name and Table Name for Database This is one of those blog posts where I straightforwardly display a script - the kind of blog post which I still love to read and write. Clustered Index on Separate Drive From Table Location A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. 
If we put clustered index on it then the order will be forced by that index and the data will be stored in that particular order. Understanding Table Hints with Examples Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. 2010 Data Pages in Buffer Pool – Data Stored in Memory Cache One of my earlier year article, which I still read it many times and point developers to read it again. It is clear from the Resultset that when more than one index is used, datapages related to both or all of the indexes are stored in Memory Cache separately. TRANSACTION, DML and Schema Locks Can you create a situation where you can see Schema Lock? Well, this is a very simple question, however during the interview I notice over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated the situation where we can see the schema lock in database. 2011 Solution – Puzzle – Statistics are not updated but are Created Once In this example I have created following situation: Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Auto Update Statistics and Auto Create Statistics for database is TRUE Now I have requested two things in the example 1) Why this is happening? 2) How to fix this issue? Selecting Domain from Email Address This is a straight to script blog post where I explain how to select only domain name from entire email address. Solution – Generating Zero Without using Any Numbers in T-SQL How to get zero digit without using any digit? This is indeed a very interesting question and the answer is even interesting. Try to come up with answer in next 10 minutes and if you can’t come up with the answer the blog post read this post for solution. 2012 Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function In simple words - SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. DIFFERENCE function returns an integer value. The  integer returned is the number of characters in the SOUNDEX values that are the same. Read Only Files and SQL Server Management Studio (SSMS) I have come across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little unknown feature as well so decided to write a blog about the same. Identifying Column Data Type of uniqueidentifier without Querying System Tables How do I know if any table has a uniqueidentifier column and what is its value without using any DMV or System Catalogues? Only information you know is the table name and you are allowed to return any kind of error if the table does not have uniqueidentifier column. Read the blog post to find the answer. Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue Interesting question – “When I try to connect to SQL Server, it lets me connect just fine as well let me open and explore the database. I noticed that I do not see any user created instances but when my colleague attempts to connect to the server, he is able to explore the database as well see all the user created tables and other objects. Can you help me fix it?” Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video Here is interesting small 60 second video on how to import CSV file into Database. 
ColumnStore Index – Batch Mode vs Row Mode Here is the logic behind when Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for the multicore CPUs and increased memory throughput. Follow up – Usage of $rowguid and $IDENTITY This is an excellent follow up blog post of my earlier blog post where I explain where to use $rowguid and $identity.  If you do not know the difference between them, this is a blog with a script example. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Should developers know how to use office suites?

    - by systempuntoout
    How deep is your knowledge of Office suites? Personally I don't like them; I hate creating and managing Word documents, Excel datasheets etc. etc. I'm not talking about opening a Word document and writing some text, or calculating a sum or a division in Excel; I'm talking about advanced features like revisions, VBA macros and so on. I have a co-worker, actually a talented functional analyst, who doesn't know anything about programming but is kind of a monster guru of the Microsoft Office suite. When he sits at my desk and asks me to open and modify one of his highly complicated Microsoft Excel multicolor multipivotal recursive datasheets, ehm, I feel like a baby in front of a nuclear plant console. It's not a great feeling, if you know what I mean. As a programmer, do you feel guilty about not knowing office suites well enough?

    Read the article

  • SQL SELECT: "Give me all documents where all of the documents procedures are 'work in progress'"

    - by prestonmarshall
    This one really has me stumped. I have a documents table which hold info about the documents, and a procedures table, which is kind of like a revisions table for each document. What I need to do is write a select statement which gives me all of the documents where all of the procedures have the status "work_in_progress". Here's an example procedures table: document_id | status 1 | 'wip' 1 | 'wip' 1 | 'wip' 1 | 'approved' 2 | 'wip' 2 | 'wip' 2 | 'wip' Here, I would want my query to only return document id 2, because all of its statuses are work_in_progress. I DO NOT want document_id 1 since one of its statuses is 'approved'. I believe this is relational division I want, but I'm not sure where to start. This is MySQL 5.0 FYI.
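
    This is the classic relational-division shape: keep a document only when no procedure row for it has a different status, which in SQL is usually phrased with NOT EXISTS or with GROUP BY ... HAVING over a count of non-matching rows. The snippet below is only an in-memory C# LINQ analogue of that grouping logic, with made-up rows, not MySQL 5.0 syntax.

        using System;
        using System.Linq;

        class RelationalDivisionSketch
        {
            static void Main()
            {
                var procedures = new[]
                {
                    new { DocumentId = 1, Status = "wip" },
                    new { DocumentId = 1, Status = "approved" },
                    new { DocumentId = 2, Status = "wip" },
                    new { DocumentId = 2, Status = "wip" },
                };

                var fullyWip = procedures
                    .GroupBy(p => p.DocumentId)
                    .Where(g => g.All(p => p.Status == "wip"))  // no row with another status
                    .Select(g => g.Key);

                Console.WriteLine(string.Join(", ", fullyWip)); // prints: 2
            }
        }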

    Read the article

  • Explanation of casting/conversion int/double in C#

    - by cad
    I coded some calculation stuff (I copied below a really simplified example of what I did) like CASE 2 and got bad results. Refactored the code like CASE 1 and it worked fine. I know there is an implicit cast in CASE 2, but I am not sure of the full reason. Could anyone explain exactly what's happening below? //CASE 1, result 5.5 double auxMedia = (5 + 6); auxMedia = auxMedia / 2; //CASE 2, result 5.0 double auxMedia1 = (5 + 6) / 2; //CASE 3, result 5.5 double auxMedia3 = (5.0 + 6.0) / 2.0; //CASE 4, result 5.5 double auxMedia4 = (5 + 6) / 2.0; My guess is that /2 in CASE 2 is treating (5 + 6) as an int and rounding the division down to 5, which is then cast back to double as 5.0. CASE 3 and CASE 4 also fix the problem.
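
    The guess is essentially right, with one refinement: nothing is cast back and forth; (5 + 6) / 2 is evaluated entirely as int arithmetic (11 / 2 truncates to 5) and only the finished result is implicitly converted to double. A quick way to see it, assuming the lines below sit in an ordinary Main method:

        int sum = 5 + 6;
        double case2 = sum / 2;     // int / int truncates to 5, then 5 becomes 5.0
        double case4 = sum / 2.0;   // one double operand promotes the division: 5.5
        double case1 = sum;         // convert to double first...
        case1 = case1 / 2;          // ...so this divides as double: 5.5
        Console.WriteLine($"{case2} {case4} {case1}");  // 5 5.5 5.5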

    Read the article

  • Seemingly useless debugging environment for Android

    - by mikeY
    I've just started debugging my first three-line-long Android app and I can't seem to use the debug tool like I want to. Here's my code: public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); int a = 1 / 0; } Now I expect the debugger to halt the thread and show me the line number of the statement where the division by zero occurs. No, instead it shows some other method internal to the system for which I have no source. To make matters worse, there is no exception message either. Prior to this app, I created one which would do something when a button was pressed. If any exception was raised, again no useful line number or exception message would be shown. As of right now, there is no way to debug my app. Any ideas? I'm using the latest SDK along with the Eclipse ADT plugin and debugging on a real device (Nexus One).

    Read the article

  • Tricky string transformation (hopefully) in LINQ

    - by Larsenal
    I'm hoping for a concise way to perform the following transformation. I want to transform song lyrics. The input will look something like this: Verse 1 lyrics line 1 Verse 1 lyrics line 2 Verse 1 lyrics line 3 Verse 1 lyrics line 4 Verse 2 lyrics line 1 Verse 2 lyrics line 2 Verse 2 lyrics line 3 Verse 2 lyrics line 4 And I want to transform them so the first line of each verse is grouped together as in: Verse 1 lyrics line 1 Verse 2 lyrics line 1 Verse 1 lyrics line 2 Verse 2 lyrics line 2 Verse 1 lyrics line 3 Verse 2 lyrics line 3 Verse 1 lyrics line 4 Verse 2 lyrics line 4 Lyrics will obviously be unknown, but the blank line marks a division between verses in the input.
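
    One possible shape for this, assuming verses are separated by a single blank line and '\n' line endings (adjust the separators for '\r\n'): split into verses, then take line i of every verse for each i. A minimal, self-contained sketch:

        using System;
        using System.Linq;

        class InterleaveVerses
        {
            static void Main()
            {
                string input = "V1 line 1\nV1 line 2\nV1 line 3\n\nV2 line 1\nV2 line 2\nV2 line 3";

                string[][] verses = input
                    .Split(new[] { "\n\n" }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(v => v.Split('\n'))
                    .ToArray();

                int maxLines = verses.Max(v => v.Length);

                // Group the first lines of all verses together, then the second lines, and so on.
                var interleaved = Enumerable.Range(0, maxLines)
                    .SelectMany(i => verses.Where(v => i < v.Length).Select(v => v[i]));

                Console.WriteLine(string.Join(Environment.NewLine, interleaved));
            }
        }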

    Read the article

  • PHP - Math - Round Number Function

    - by aSeptik
    Hi all! This time I have a math question for you. Assume we have three numbers: one is the Number of Votes, the second is the Total Values and the last is the Units Ratings. units_ratings can be a number from 1 to 10; if ( total_values / units_ratings != number_of_votes ) { //do something to make "number_of_votes" fit the division! } I am asking because I want to know the best (fastest) way to achieve this! Thanks!

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as a part MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves delivers 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article  Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • Storing DOM reference elements in a Javascript array

    - by webzide
    Dear experts, I was trying to dynamically generate DOM elements using JS. I read from Douglas Crockford's book that DOM is very very poorly structured. Anyways, I would like to create a number of DIVISION elements and store the reference into an array so it could be accessed later. Here's the code for(i=0;i<3;i++){ var div=document.body.appendChild(document.createElement("div")); var arr=new Array(); arr.push(div); } Somehow this would not work..... There is only 1 div element created. When I use the arr.length to test the code there is only 1 element in the array. Is there another way to accomplish this? Thanks in advance
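
    The likely culprit is that var arr = new Array() sits inside the loop, so a brand-new empty array replaces the old one on every pass and only the last div reference survives; declaring the array once before the loop and only pushing inside it should leave all three references in place. The same pattern, sketched as a few lines for a C# Main method (with System.Collections.Generic imported), purely as an illustration since the question itself is JavaScript:

        var divs = new List<string>();      // created once, outside the loop
        for (int i = 0; i < 3; i++)
        {
            divs.Add("div #" + i);          // analogue of arr.push(div)
        }
        Console.WriteLine(divs.Count);      // 3, not 1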

    Read the article

  • assembly language programming (prime number)

    - by chris
    Prompt the user for a positive three-digit number, then read it. Let's call it N. Divide into N all integer values from 2 to (N/2)+1 and test to see if the division was even, in which case N is instantly shown to be non-prime. Output a message printing N and saying that it is not prime. If none of those integer values divide evenly (the remainder is never zero), then N is shown to be prime. Output a message printing N and saying that it is prime. Ask the user if he or she wants to test another number; if the user types "n" or "N", quit. If "y" or "Y", jump back and repeat. Comments in your code are essential. Hi. I am kind of in a rush to do this... please help me with it. It would be much appreciated. Thank you.
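
    Not the assembly itself, but a high-level sketch of the trial-division logic the assignment describes, which may help when structuring the loop and the remainder test (on x86 the remainder of DIV typically lands in DX or EDX):

        // Trial division exactly as the assignment states it (divisors 2 .. N/2+1).
        // Fine for the assignment's three-digit inputs; checking up to sqrt(N) would also work.
        static bool IsPrime(int n)
        {
            for (int divisor = 2; divisor <= n / 2 + 1; divisor++)
            {
                if (n % divisor == 0)   // remainder of zero => divides evenly => not prime
                    return false;
            }
            return true;
        }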

    Read the article

  • getting detailed information about structured exceptions

    - by Martin
    My Visual C++ application is compiled with /EHA option, letting me catch structured exceptions (division by zero, access violation, etc). I then translate those exceptions to my own exception class using _set_se_translator(). My goal is to improve our logging of those types of exceptions. I can get the type of exception from the EXCEPTION_RECORD structure, and the exception address. I would like to be able to gather more information, like the source file/location where the exception is thrown, the call stack, etc. Is that possible? I do create an exception minidump on structured exceptions - is there a tool to automatically get the call stack from that?

    Read the article

  • Finding the intersection of two vector equations.

    - by Matthew Mitchell
    I've been trying to solve this and I found an equation that gives the possibility of zero division errors. Not the best thing: v1 = (a,b) v2 = (c,d) d1 = (e,f) d2 = (h,i) l1: v1 + λd1 l2: v2 + µd2 Equation to find the vector intersection of l1 and l2 programmatically by re-arranging for lambda. (a,b) + λ(e,f) = (c,d) + µ(h,i) a + λe = c + µh b + λf = d + µi µh = a + λe - c µi = b + λf - d µ = (a + λe - c)/h µ = (b + λf - d)/i (a + λe - c)/h = (b + λf - d)/i a/h + λe/h - c/h = b/i + λf/i - d/i λe/h - λf/i = (b/i - d/i) - (a/h - c/h) λ(e/h - f/i) = (b - d)/i - (a - c)/h λ = ((b - d)/i - (a - c)/h)/(e/h - f/i) Intersection vector = (a + λe, b + λf) Not sure if it would even work in some cases. I haven't tested it. I need to know how to do this for values as in that example a-i. Thank you.
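
    One way to avoid dividing by h or i at all is to solve the 2x2 system in a single step: the only division is then by the cross product of the two direction vectors, which is zero exactly when the lines are parallel (the genuinely undefined case). A sketch in C#, using the same letters as above and treating a near-zero denominator as "no unique intersection":

        static bool TryIntersect(double a, double b, double e, double f,
                                 double c, double d, double h, double i,
                                 out double x, out double y)
        {
            double denom = e * i - f * h;        // cross product of the two directions
            if (Math.Abs(denom) < 1e-12)         // parallel (or coincident) lines
            {
                x = y = 0;
                return false;
            }
            double lambda = ((c - a) * i - (d - b) * h) / denom;
            x = a + lambda * e;
            y = b + lambda * f;
            return true;
        }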

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not backup the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull out drives out a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article

  • Upon doing a XSL Transform to XML how do I remove white space from a Node's attribute or data?

    - by Randy
    I have a part's list built out in XML and each part is labeled as such: <division> <parts> <part number="123456 " drawing="123456 " cad="y"> <attribute> <header>Header</header> <list>2</list> </attribute> </part> And I need to get the data behind the number and drawing attributes without the white space. I tried xsl:strip-space on the specific elements, and across the board, but that only strips the content in between the tags. I unfortunately have no access to the back-end that's producing the XML, so removing the spaces there doesn't look like an option.

    Read the article

  • How do I use modulus for float/double?

    - by ShrimpCrackers
    I'm creating an RPN calculator for a school project. I'm having trouble with the modulus operator. Since we're using the double data type, modulus won't work on floating point numbers. For example, 0.5 % 0.3 should return 0.2 but I'm getting a division by zero exception. The instruction says to use fmod(). I've looked everywhere for fmod(), including javadocs but I can't find it. I'm starting to think it's a method I'm going to have to create? edit: hmm, strange. I just plugged in those numbers again and it seems to be working fine...but just in case. Do I need to watch out using the mod operator in Java when using floating types? I know something like this can't be done in C++ (I think).
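
    For the record, the % operator is defined for floating-point operands in Java (and in C#), so there is no separate fmod() to hunt for; what can surprise is that the result carries ordinary binary floating-point rounding, so it should not be compared with == against an exact decimal. Java's Math.IEEEremainder is related but uses a different rounding convention. A two-line C# illustration of the same behaviour:

        double r = 0.5 % 0.3;       // the remainder operator works directly on doubles
        Console.WriteLine(r);       // prints 0.2 here, but treat such results as approximate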

    Read the article

  • Function arguments VBA

    - by user1068249
    I have these three functions: When I run the first 2 functions, There's no problem, but when I run the last function (LMTD), It says 'Division by zero' yet when I debug some of the arguments have values, some don't. I know what I have to do, but I want to know why I have to do it, because it makes no sense to me. Tinn-function doesn't have Tut's arguments, so I have to add them to Tinn-function's arguments. Same goes for Tut, that doesn't know all of Tinn's arguments, and LMTD has to have both of Tinn and Tut's arguments. If I do that, it all runs smoothly. Why do I have to do this? Public Function Tinn(Tw, Qw, Qp, Q, deltaT) Tinn = (((Tw * Qw) + (Tut(Q, fd, mix) * Q)) / Qp) + deltaT End Function Public Function Tut(Q, fd, mix) Tut = Tinn(Tw, Qw, Qp, Q, deltaT) - (avgittEffektAiUiLMTD() / ((Q * fd * mix) / 3600)) End Function Public Function LMTD(Tsjo) LMTD = ((Tinn(Tw, Qw, Qp, Q, deltaT) - Tsjo) - (Tut(Q, fd, mix) - Tsjo)) / (WorksheetFunction.Ln ((Tinn(Tw, Qw, Qp, Q, deltaT) - Tsjo) / (Tut(Q, fd, mix) - Tsjo))) End Function

    Read the article

  • Python Beginner: Selective Printing in loops

    - by Jonathan Straus
    Hi there. I'm a very new python user (had only a little prior experience with html/javascript as far as programming goes), and was trying to find some ways to output only intermittent numbers in my loop for a basic bicycle racing simulation (10,000 lines of biker positions would be pretty excessive :P). I tried in this loop several 'reasonable' ways to communicate a condition where a floating point number equals its integer floor (int, floor division) to print out every 100 iterations or so: for i in range (0,10000): i = i + 1 t = t + t_step #t is initialized at 0 while t_step is set at .01 acceleration_rider1 = (power_rider1 / (70 * velocity_rider1)) - (force_drag1 / 70) velocity_rider1 = velocity_rider1 + (acceleration_rider1 * t_step) position_rider1 = position_rider1 + (velocity_rider1 * t_step) force_drag1 = area_rider1 * (velocity_rider1 ** 2) acceleration_rider2 = (power_rider2 / (70 * velocity_rider1)) - (force_drag2 / 70) velocity_rider2 = velocity_rider2 + (acceleration_rider2 * t_step) position_rider2 = position_rider2 + (velocity_rider2 * t_step) force_drag2 = area_rider1 * (velocity_rider2 ** 2) if t == int(t): #TRIED t == t // 1 AND OTHER VARIANTS THAT DON'T WORK HERE:( print t, "biker 1", position_rider1, "m", "\t", "biker 2", position_rider2, "m"
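
    Comparing t to int(t) is fragile because t accumulates the rounding error of repeatedly adding 0.01, which is not exactly representable in binary; the usual fix is to test the integer loop counter instead, e.g. if i % 100 == 0: inside the Python loop above. The same idea, sketched in C#:

        double t = 0.0;
        const double tStep = 0.01;
        for (int i = 0; i < 10000; i++)
        {
            t += tStep;                 // float equality against int(t) is unreliable here
            if (i % 100 == 0)           // the integer counter is exact, so test it instead
                Console.WriteLine($"i = {i}, t = {t:F2}");
        }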

    Read the article

  • How should I build a privacy drop-down (select) menu?

    - by animuson
    I'm trying to build something similar to Facebook's privacy selection menu, except without the 'custom' option. It will only list a few options such as 'show to all', 'show to friends only', or 'completely hidden'. Right now I'm thinking of using simple JavaScript to change a hidden input field to the new value they click on, so if they clicked on the division for 'show to friends only' it would change the corresponding field, say 'email_privacy', to 1. Is there a better way to do this or am I pretty much on track? P.S. I am not planning on using a select element, I was planning on building a custom drop-down menu using CSS since select elements are so highly non-customizable. I'm doing it this way to save space, rather than having this massive selection menu at the right which takes up a bunch of space. Note: I'm not really interested in using jQuery, that's just extra libraries and crap that I don't want to load. I can do it in JavaScript just as easily so I might as well use that.

    Read the article

  • Is there a list of language only character regions for UTF-8 somewhere?

    - by Brehtt
    I'm trying to analyze some UTF-8 encoded documents in a way that recognizes different language characters. For my approach to work I need to ignore non-language characters, such as control characters, mathematical symbols etc. Just trying to dissect the basic Latin section of the UTF standard has resulted in multiple regions, with characters like the division symbol being right in the middle of a range of valid Latin characters. Is there a list somewhere that identifies these regions? Or better yet, a Regex that defines the regions or something in C# that can identify the different characters?
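
    In .NET the Unicode general categories are exposed directly, which tends to be more robust than hand-maintained code-point ranges: char.IsLetter (or the \p{L} regex class) keeps letters from any script while dropping math symbols, punctuation and control characters. One caveat: char.IsLetter works per UTF-16 code unit, so characters outside the Basic Multilingual Plane need surrogate-aware handling. A small sketch:

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class LetterFilter
        {
            static void Main()
            {
                string text = "a ÷ b = π, naïve café";   // letters, a math symbol, punctuation

                // Keep only characters whose Unicode general category is a letter.
                string lettersOnly = new string(text.Where(char.IsLetter).ToArray());

                // Same idea with a regex Unicode category class: drop non-letters, keep spaces.
                string stripped = Regex.Replace(text, @"[^\p{L}\s]", "");

                Console.WriteLine(lettersOnly);   // ÷, = and , are gone; π and é survive
                Console.WriteLine(stripped);
            }
        }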

    Read the article

  • SQL SELECT across two tables

    - by Brett Spurrier
    Hi there, I am a little confused as to how to approach this SQL query. I have two tables (with an equal number of records), and I would like to return a column which is the division between the two. In other words, here is my not-working-correctly query: SELECT( (SELECT v FROM Table1) / (SELECT DotProduct FROM Table2) ); How would I do this? All I want is a column where each row equals the same row in Table1 divided by the same row in Table2. The resulting table should have the same number of rows, but I am getting something with a lot more rows than the original two tables. I am at a complete loss. Any advice?
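
    Part of the problem is that SQL has no built-in notion of "the same row" across two tables unless they share a key to join on (or a generated row number, in engines that provide one); combining two keyless SELECTs tends to multiply rows rather than pair them. Purely as an in-memory illustration of the intended row-by-row pairing, here is the division expressed with LINQ's Zip, using hypothetical values rather than a SQL answer:

        using System;
        using System.Linq;

        class RowwiseDivision
        {
            static void Main()
            {
                double[] v          = { 10.0, 4.0, 9.0 };   // stand-in for Table1.v
                double[] dotProduct = {  2.0, 4.0, 3.0 };   // stand-in for Table2.DotProduct

                var ratios = v.Zip(dotProduct, (x, y) => x / y);   // pairs rows by position

                Console.WriteLine(string.Join(", ", ratios));      // 5, 1, 3
            }
        }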

    Read the article

  • Weird behavior of an ASP.NET MVC application - track of errors

    - by Alex
    Before migrating from ASP.NET WebForms I had a very good way to monitor all my application errors in the Events Log (Administrative Tools). But now after moving to asp.net MVC, all I get is the same mistake occurring every minute (something about Site Master). I know it's not right, because there are other mistakes, but they are not displayed. I purposefully put in division by zero operation, and it didn't track it. I had to implement the OnException method of a controller, and send e-mails with error details which is very inconvenient. How can I solve this problem?
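
    One common way to get unhandled MVC exceptions back into the Windows event log, instead of overriding OnException in every controller, is a global Application_Error handler in Global.asax (libraries such as ELMAH package the same idea). The sketch below assumes an event source that has already been registered on the machine, and is not necessarily what fixed the author's setup:

        using System;
        using System.Diagnostics;
        using System.Web;

        public class MvcApplication : HttpApplication
        {
            protected void Application_Error(object sender, EventArgs e)
            {
                Exception ex = Server.GetLastError();
                if (ex != null)
                {
                    // "MyMvcApp" is a hypothetical, pre-registered event source name.
                    EventLog.WriteEntry("MyMvcApp", ex.ToString(), EventLogEntryType.Error);
                }
            }
        }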

    Read the article

  • Why might different computers calculate different arithmetic results in VB.NET?

    - by Eyal
    I have some software written in VB.NET that performs a lot of calculations, mostly extracting jpegs to bitmaps and computing calculations on the pixels like convolutions and matrix multiplication. Different computers are giving me different results despite having identical inputs. What might be the reason? Edit: I can't provide the algorithm because it's proprietary but I can provide all the relevant operations: ULong \ ULong (Turuncating division) Bitmap.Load("filename.bmp') (Load a bitmap into memory) Bitmap.GetPixel(Integer, Integer) (Get a pixel's brightness) Double + Double Double * Double Math.Sqrt(Double) Math.PI Math.Cos(Double) ULong - ULong ULong * ULong ULong << ULong List.OrderBy(Of Double)(Func) Hmm... Is it possible that OrderBy is using a non-stable QuickSort and that QuickSort is using a random pivot? Edit: Just tested, nope. The sort is stable.

    Read the article
