Search Results

Search found 2898 results on 116 pages for 'sum of digits'.


  • Python - Blackjack

    - by user335932
    def showCards():
        #SUM
        sum = playerCards[0] + playerCards[1]
        #Print cards
        print "Player's Hand: " + str(playerCards) + " : " + "sum"
        print "Dealer's Hand: " + str(compCards[0]) + " : " + "sum"

    compCards = [Deal(), Deal()]
    playerCards = [Deal(), Deal()]

    How can I add up the integer elements of a list containing two values? Under #SUM the error is that it can't combine lists like ints...

    Read the article

  • How to convert binary, OCT, HEX to calculate in Java?

    - by user316751
    Write an application that inputs one number consisting of FIVE digits from the user, separates the number into its individual digits and prints the digits separated from one another by three spaces each. For example, if the user types in the number 12345, the program should print 1 2 3 4 5 The following screen dump of result is for your reference. Input a digit: 12345 Digits in 12345 = 1 2 3 4 5 How to convert binary, OCT, HEX to calculate the question?
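    The digit-separation part does not need any base conversion: integer division and modulo by powers of ten pull the five digits out directly. A minimal sketch in Java (the class name is just illustrative):

        import java.util.Scanner;

        public class SeparateDigits {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.print("Input a digit: ");
                int number = in.nextInt();                  // e.g. 12345

                StringBuilder out = new StringBuilder();
                for (int divisor = 10000; divisor >= 1; divisor /= 10) {
                    out.append(number / divisor % 10);      // next digit, left to right
                    if (divisor > 1) {
                        out.append("   ");                  // three spaces between digits
                    }
                }
                System.out.println("Digits in " + number + " = " + out);
            }
        }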

    Read the article

  • OBIA on Teradata - Part 2 Teradata DB Utilization for ETL

    - by Mohan Ramanuja
    Techniques to Monitor Queries and ETL Load CPU and Disk I/O

    Select username, processor, sum(cputime), sum(diskio) from dbc.ampusage
    where processor = '1-0' group by 1,2 order by 2,3 desc;

        UserName    Vproc    Sum(CpuTime)    Sum(DiskIO)
        AC00916     10       6.71            24975

    List Hardware Errors
    There is a possibility that the system might have adequate disk space but be out of free cylinders. In order to monitor hardware errors, the following query was used:

    Select * from dbc.Software_Event_Log where Text like '%restart%' order by thedate, thetime;

    For active users, usage of CPU and analysis of bad CPU to I/O ratios:

    Select * from DBC.AMPUSAGE where username = 'CRMSTGC_DEV_ID' AND SUBSTR(ACCOUNTNAME,6,3) = '006';

    Usage By I/O

    Select AccountName, UserName, sum(CpuTime), sum(DiskIO) from DBC.AMPUSAGE group by AccountName, UserName Order by Sum(DiskIO) desc;

        AccountName       UserName          Sum(CpuTime)    Sum(DiskIO)
        $M1$10062209      AB89487           374628.612      7821847
        $M1$10062210      AB89487           186692.244      2799412
        $M1$10062213      COC_ETL_ID        119531.068      331100426
        $M1$10062200      AB63472           118973.316      109881984
        $M1$10062204      AB63472           110825.356      94666986
        $M1$10062201      AB63472           110797.976      75016994
        $M1$10062202      AC06936           100924.448      407839702
        $M1$10062204      AB67963           0               4
        $M1$10062207      AB91990           0               2
        $M1$10062208      AB63461           0               24
        $M1$10062211      AB84332           0               6
        $M1$10062214      AB65484           0               8
        $M1$10062205      AB77529           0               58
        $M1$10062210      AC04768           0               36
        $M1$10062206      AB54940           0               22

    Usage By CPU

    Select AccountName, UserName, sum(CpuTime), sum(DiskIO) from DBC.AMPUSAGE group by AccountName, UserName Order by Sum(CpuTime) desc;

        AccountName                     UserName          Sum(CpuTime)    Sum(DiskIO)
        $M1$10062209                    AB89487           374628.612      7821847
        $M1$10062210                    AB89487           186692.244      2799412
        $M1$10062213                    COC_ETL_ID        119531.068      331100426
        $M1$10062200                    AB63472           118973.316      109881984
        $M1$10062204                    AB63472           110825.356      94666986
        $M1$10062201                    AB63472           110797.976      75016994
        $M2$100622105813004760047LOAD   T23_ETLPROC_ENT   0               6
        $M1$10062215                    AA37720           0               180
        $M1$10062209                    AB81670           0               6

    Select count(distinct vproc) from dbc.ampusage;

        432

    select * from dbc.dbcinfo;

        AccountName     UserName           CpuTime   DiskIO   CpuTimeNorm         Vproc   VprocType   Model
        $M1$10062205    CRM_STGC_DEV_ID    0.32      1764     12.7423999023438    0       AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.28      1730     11.1495999145508    3       AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.304     1736     12.1052799072266    4       AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.248     1731     9.87535992431641    7       AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.332     1731     13.2202398986816    8       AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.284     1712     11.3088799133301    11      AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.24      1757     9.55679992675781    12      AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.292     1737     11.6274399108887    15      AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.268     1753     10.6717599182129    16      AMP         2580
        $M1$10062205    CRM_STGC_DEV_ID    0.276     1732     10.9903199157715    19      AMP         2580

    select * from dbc.dbcinfo;

        InfoKey                  InfoData
        LANGUAGE SUPPORT MODE    Standard
        RELEASE                  12.00.03.03
        VERSION                  12.00.03.01a

    Read the article

  • How to execute a Ruby file in Java, capable of calling functions from the Java program and receiving primitive-type results?

    - by Omega
    I do not fully understand what I am asking (lol!), in the sense of whether it is even possible, that is. If it isn't, sorry. Suppose I have a Java program. It has a Main and a JavaCalculator class. JavaCalculator has some basic functions like

        public int sum(int a, int b) { return a + b; }

    Now suppose I have a Ruby file called MyProgram.rb. MyProgram.rb may contain anything you could expect from a Ruby program. Let us assume it contains the following:

        class RubyMain
          def initialize
            print "The sum of 5 with 3 is #{sum(5,3)}"
          end
          def sum(a, b)
            # <---------- Something will happen here
          end
        end
        rubyMain = RubyMain.new

    Good. Now then, you might already suspect what I want to do: I want to run my Java program, and I want it to execute the Ruby file MyProgram.rb. When the Ruby program executes, it will create an instance of JavaCalculator, execute the sum function it has, get the value, and then print it. The Ruby file has been executed successfully, and the Java program closes. Note: the "create an instance of JavaCalculator" part is not entirely necessary. I would be satisfied with just running a sum function from, say, the Main class. My question: is such a thing possible? Can I run a Java program which internally executes a Ruby file which is capable of commanding the Java program to do certain things and getting results? In the above example, the Ruby file asks the Java program to do a sum for it and give back the result. This may sound ridiculous. I am new to this kind of thing (if it is possible, that is). WHY AM I ASKING THIS? I have a Java program, which is some kind of game engine. However, my target audience is a bunch of Ruby coders. I don't want to make them learn Java at all. So I figured that perhaps the Java program could simply offer the functionality (capacity to create windows, display sprites, play sounds...) and then my audience can simply code the logic in Ruby, which basically just asks my Java engine to do things like displaying sprites or playing sounds. That's when I thought about asking this.
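    One common way to get this is to embed a Ruby interpreter inside the JVM with JRuby. A rough sketch, assuming the jruby (or jruby-complete) jar is on the classpath; the class names other than ScriptingContainer are illustrative:

        import org.jruby.embed.ScriptingContainer;

        public class Main {
            // The Java-side functionality the Ruby script is allowed to call.
            public static class JavaCalculator {
                public int sum(int a, int b) { return a + b; }
            }

            public static void main(String[] args) {
                ScriptingContainer container = new ScriptingContainer();

                // Expose a Java object to Ruby as a global variable.
                container.put("$calc", new JavaCalculator());

                // Run Ruby code; inside it, $calc.sum(5, 3) calls back into Java.
                container.runScriptlet(
                    "result = $calc.sum(5, 3)\n" +
                    "print \"The sum of 5 with 3 is #{result}\"");
            }
        }

    runScriptlet can also be pointed at a file such as MyProgram.rb instead of an inline string, so the Ruby-side logic stays in its own .rb files while the engine functionality lives in Java.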

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
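    The conversion approach described above is compact: treat IV, IX, XL, XC, CD, and CM as "digits" too, then repeatedly find the highest remaining factor that fits, translate it, subtract it, and compile the pieces. The article's own code is C#/.NET and shown as screenshots; here is a sketch of the same greedy idea in Java:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class ArabicRomanConversions {
            // Roman "digits" and their values, highest first, including IV, IX, XL, ...
            private static final Map<Integer, String> FACTORS = new LinkedHashMap<>();
            static {
                int[]    values = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
                String[] digits = {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};
                for (int i = 0; i < values.length; i++) FACTORS.put(values[i], digits[i]);
            }

            public static String toRoman(int arabic) {
                StringBuilder roman = new StringBuilder();
                for (Map.Entry<Integer, String> factor : FACTORS.entrySet()) {
                    // Find the highest remaining factor fitting into the value,
                    // translate it, subtract it, and repeat with the remainder.
                    while (arabic >= factor.getKey()) {
                        roman.append(factor.getValue());
                        arabic -= factor.getKey();
                    }
                }
                return roman.toString();
            }

            public static void main(String[] args) {
                System.out.println(toRoman(1954)); // MCMLIV = M + CM + L + IV
            }
        }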

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview Report. The objective is to show test status for the current version while including user story status of the current and prior versions.  Why? Because we don’t copy completed user stories into the next release. We only want one instance of a user story for the product because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here’s how you do it: 1. Download a copy of the report RDL file as a backup 2. Open the report by clicking the edit down arrow and selecting “Edit in Report Builder” 3. Right click on the dsOverview Dataset and select Dataset Properties 4. Update the following SQL per the comments in the code: Customization 1 of 3 … -- Get the list deliverable workitems that have Test Cases linked DECLARE @TestCases Table (DeliverableID int, TestCaseID int); INSERT @TestCases     SELECT h.ID, flh.TargetWorkItemID     FROM @Hierarchy h         JOIN FactWorkItemLinkHistory flh             ON flh.SourceWorkItemID = h.ID                 AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK                 AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)                 AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK         JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]                  AND wi.[System_WorkItemType] = @TestCase             AND wi.ProjectNodeGUID  = @ProjectGuid              --  Customization 1 of 3: only include test status information when test case area path = root. Added the following 2 statements              AND wi.AreaPath = '{the root area path of the team project}'  …          Customization 2 of 3 … -- Get the Bugs linked to the deliverable workitems directly DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int) INSERT @Bugs     SELECT h.ID,         SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,         SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,         SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,         SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed     FROM @Hierarchy h         JOIN FactWorkItemLinkHistory flh             ON flh.SourceWorkItemID = h.ID             AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK         JOIN [CurrentWorkItemView] wi             ON wi.[System_WorkItemType] = @Bug             AND wi.[System_Id] = flh.TargetWorkItemID             AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)             AND wi.[ProjectNodeGUID] = @ProjectGuid              --  Customization 2 of 3: only include test status information when test case area path = root. Added the following statement              AND wi.AreaPath = '{the root area path of the team project}'       GROUP BY h.ID … Customization 2 of 3 … -- Add the Bugs linked to the Test Cases which are linked to the deliverable workitems -- Walks the links from the user stories to test cases (via the tested by link), and then to -- bugs that are linked to the test case. We don't need to join to the test case in the work -- item history view. 
-- --    [WIT:User Story/Requirement] --> [Link:Tested By]--> [Link:any type] --> [WIT:Bug] INSERT @Bugs SELECT tc.DeliverableID,     SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,     SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,     SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,     SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed FROM @TestCases tc     JOIN FactWorkItemLinkHistory flh         ON flh.SourceWorkItemID = tc.TestCaseID         AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)         AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK     JOIN [CurrentWorkItemView] wi         ON wi.[System_Id] = flh.TargetWorkItemID         AND wi.[System_WorkItemType] = @Bug         AND wi.[ProjectNodeGUID] = @ProjectGuid         --  Customization 3 of 3: only include test status information when test case area path = root. Added the following statement         AND wi.AreaPath = '{the root area path of the team project}'     GROUP BY tc.DeliverableID … 5. Save the report and you’re all set. Note: you may need to re-apply custom parameter changes like pre-selected sprints.

    Read the article

  • Project hours worked to be the sum of task hours worked

    - by silverkid
    I have a SharePoint list called Project. This list has a column called Hours Worked. I also have a list called Tasks; this list also has a column called Hours Worked. The Tasks list also has a lookup field where we select the project ID from the Project list, so each project can have many tasks. Task items are created by individual users, and I have to create a mechanism so that Hours Worked in the Project list is always the sum of the Hours Worked on that project's tasks. How can I achieve this?

    Read the article

  • How to measure sum of collected memory of Young Generation?

    - by Marcel
    Hi, I'd like to measure memory allocation data from my java application, i.e. the sum of the size of all objects that were allocated. Since object allocation is done in young generation this seems to be the right place. I know jconsole and I know the JMX beans but I just can't find the right variable... Right at the moment we are parsing the gc log output file but that's quite hard. Ideally we'd like to measure it via JMX... How can I get this value? Thanks, Marcel
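    There is no standard JMX attribute that directly reports "total bytes allocated". On HotSpot/OpenJDK, though, the platform ThreadMXBean is actually a com.sun.management.ThreadMXBean and can report allocated bytes per thread. A hedged sketch (HotSpot-specific, and it only counts threads that are still alive, so it is an approximation of total allocation):

        import java.lang.management.ManagementFactory;
        import com.sun.management.ThreadMXBean;   // HotSpot-specific subinterface

        public class AllocationMeter {
            public static void main(String[] args) {
                ThreadMXBean threads = (ThreadMXBean) ManagementFactory.getThreadMXBean();

                if (!threads.isThreadAllocatedMemorySupported()) {
                    System.err.println("Allocation accounting not supported on this JVM");
                    return;
                }
                threads.setThreadAllocatedMemoryEnabled(true);

                long total = 0;
                for (long id : threads.getAllThreadIds()) {
                    long bytes = threads.getThreadAllocatedBytes(id); // -1 for dead threads
                    if (bytes > 0) total += bytes;
                }
                System.out.println("Bytes allocated by live threads so far: " + total);
            }
        }

    The same bean is reachable remotely (e.g. from jconsole) as java.lang:type=Threading, which keeps this in line with the "measure it via JMX" requirement.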

    Read the article

  • C - how to get fscanf() to determine whether what it read is only digits, and no characters

    - by hatorade
    Imagine I have a CSV where each value is an integer, so the first value is the INTEGER 100. I want fscanf() to read this value and either tell me it's an integer ONLY, or not. So it would pass 100 but fail on 100t. What I've been trying to get to work is "%d,", where the comma is the delimiter of my CSV, so the whole call is fscanf(fp, "%d,", &count). Unfortunately, this fails to fail on '100t,', works on '100' and works on 't', so it just isn't distinguishing between 100 and 100t (all of these numbers are followed by commas, of course).

    Read the article

  • How do I sum up weighted arrays in PHP?

    - by christian studer
    How do I multiply the values of a multi-dimensional array with weights and sum up the results into a new array, in PHP or in general? The boring way looks like this:

        $weights = array(0.25, 0.4, 0.2, 0.15);
        $values = array(
            array(5, 10, 15),
            array(20, 25, 30),
            array(35, 40, 45),
            array(50, 55, 60)
        );
        $result = array();
        for ($i = 0; $i < count($values[0]); ++$i) {
            $result[$i] = 0;
            foreach ($weights as $index => $thisWeight)
                $result[$i] += $thisWeight * $values[$index][$i];
        }

    Is there a more elegant solution?

    Read the article

  • How to sum up values of an array in assembly?

    - by Pablo Fallas
    I have been trying to create a program which can sum up all the values of an "array" in assembly. I have done the following:

                ORG 1000H
        TABLE   DB DUP(2,4,6,8,10,12,14,16,18,20)
        FIN     DB ?
        TOTAL   DB ?
        MAX     DB 13
                ORG 2000H
                MOV AL, 0
                MOV CL, OFFSET FIN-OFFSET TABLE
                MOV BX, OFFSET TABLE
        LOOP:   ADD AL, [BX]
                INC BX
                DEC CL
                JNZ LOOP
                HLT
                END

    BTW, I am using msx88 to compile this code, but I get an error saying that the code 0 has not been recognized. Any advice?

    Read the article

  • In SQL, what does Group By mean without Count(*), or Sum(), Max(), avg(), ..., and what are some use

    - by Jian Lin
    In SQL, if we use Group By without Count(*) or Sum(), etc., then the result is as follows:

        mysql> select * from sentGifts;
        +--------+------------+--------+------+---------------------+--------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID |
        +--------+------------+--------+------+---------------------+--------+
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |
        |      2 | 2010-04-24 |    123 | 4568 | 2010-04-24 01:56:04 |    100 |
        |      3 | 2010-04-24 |    123 | NULL | NULL                |      1 |
        |      4 | 2010-04-24 |   NULL |  111 | 2010-04-24 03:10:42 |      2 |
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |
        |      6 | 2010-04-24 |     11 |  222 | 2010-04-24 03:54:49 |      6 |
        |      7 | 2010-04-24 |      1 |    2 | 2010-04-24 03:58:45 |      6 |
        +--------+------------+--------+------+---------------------+--------+
        7 rows in set (0.00 sec)

        mysql> select *, count(*) from sentGifts group by whenSent;
        +--------+------------+--------+------+---------------------+--------+----------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID | count(*) |
        +--------+------------+--------+------+---------------------+--------+----------+
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |        1 |
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |        6 |
        +--------+------------+--------+------+---------------------+--------+----------+
        2 rows in set (0.00 sec)

        mysql> select * from sentGifts group by whenSent;
        +--------+------------+--------+------+---------------------+--------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID |
        +--------+------------+--------+------+---------------------+--------+
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |
        +--------+------------+--------+------+---------------------+--------+
        2 rows in set (0.00 sec)

    Only 1 row is returned per "group". What does it mean when there is no Count(*), etc. when using Group By, and what are its uses? Thanks.

    Read the article

  • Want to calculate the sum of the count rendered by group by option..

    - by Vijay
    I have a table with columns such as id, tid, companyid, ttype, etc. The id may be the same for many companyids but is unique within a companyid, and tid is always unique. I want to calculate the total number of transactions entered in the table; a single transaction may be inserted in more than one row. For example:

        id  tid  companyid  ttype
        1   1    1          xxx
        1   2    1          may be null
        2   3    1          yyy
        2   4    1          may be null
        2   5    1          may be null

    The above entries should be counted as only 2 transactions, and this may be repeated for many companyids. So how do I calculate the total number of transactions entered in the table? I tried

        select sum(count(*)) from transaction group by id, companyId;

    but that doesn't work, and

        select count(*) from transaction group by id;

    won't work because the id may be repeated for different companyids.

    Read the article

  • Given a number N, find the number of ways to write it as a sum of two or more consecutive integers

    - by hilal
    Here is the problem: given a number N, find the number of ways to write it as a sum of two or more consecutive integers. For example, 15 = 7+8 = 1+2+3+4+5 = 4+5+6. I solved it with math like this:

        a + (a + 1) + (a + 2) + (a + 3) + ... + (a + k) = N
        (k + 1)*a + (1 + 2 + 3 + ... + k) = N
        (k + 1)*a + k*(k + 1)/2 = N
        (k + 1)*(2*a + k)/2 = N

    Then by checking how N divides between (k + 1) and (2*a + k), I can find the answer in O(N) time. Here is my question: how can you solve this with dynamic programming, and what is the complexity (O)? P.S.: excuse me if it is a duplicate question; I searched but could not find one.
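    For reference, the arithmetic derivation above turns directly into code: for each run length L = k + 1, the equation L * (2a + k) = 2N must have a positive integer solution for a. Since the smallest possible sum of L consecutive positive integers is L*(L+1)/2, the loop only needs roughly sqrt(2N) iterations. A sketch in Java of that approach (not the DP formulation being asked about):

        public class ConsecutiveSums {
            // Count the ways to write n as a sum of two or more consecutive positive integers.
            static int countWays(int n) {
                int ways = 0;
                for (int len = 2; (long) len * (len + 1) / 2 <= n; len++) {
                    if (2 * n % len != 0) continue;       // len must divide 2n
                    int twoA = 2 * n / len - (len - 1);   // this is 2a from the derivation
                    if (twoA > 0 && twoA % 2 == 0) ways++;
                }
                return ways;
            }

            public static void main(String[] args) {
                System.out.println(countWays(15)); // 3: 7+8, 1+2+3+4+5, 4+5+6
            }
        }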

    Read the article

  • Count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers such that all numbers are co-prime to 'C'

    - by rajneesh2k10
    I came across the solution of a problem which involve dynamic-programming approach, solved using a three dimensional matrix. Link to actual problem is: http://community.topcoder.com/stat?c=problem_statement&pm=12189&rd=15177 Solution to this problem is here under MuddyRoad2: http://apps.topcoder.com/wiki/display/tc/SRM+555 In the last paragraph of explanation, author describes a dynamic programming approach to count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers (not necessarily different), such that every number is co-prime to 3 and the order in which these numbers appear does matter. I am not able to grasp that approach. Can anyone help me understand how DP is acting here. I can't understand what is a state here and how it is derived from the previous state.
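    As a toy illustration of what a state can look like in this kind of count (this is not the actual TopCoder solution, which handles much larger inputs): let ways[a][b] be the number of ordered ways to write a as a sum of b positive integers that are each coprime to 3. Every state is reached from ways[a - x][b - 1] for each allowed last part x, which is exactly the "derive the current state from previous states" step. A small sketch, assuming A and B are small enough for an O(A^2 * B) loop:

        public class CoprimeCompositions {
            // ways[a][b] = ordered ways to write a as a sum of b positive integers,
            // each coprime to 3 (i.e. not divisible by 3).
            static long countWays(int A, int B) {
                long[][] ways = new long[A + 1][B + 1];
                ways[0][0] = 1;                            // empty sum
                for (int b = 1; b <= B; b++) {
                    for (int a = 1; a <= A; a++) {
                        for (int x = 1; x <= a; x++) {     // candidate last part
                            if (x % 3 == 0) continue;      // must be coprime to 3
                            ways[a][b] += ways[a - x][b - 1];
                        }
                    }
                }
                return ways[A][B];
            }

            public static void main(String[] args) {
                // 4 as an ordered sum of 2 parts coprime to 3: only 2 + 2 qualifies.
                System.out.println(countWays(4, 2)); // 1
            }
        }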

    Read the article

  • Shortest command to calculate the sum of a column of output on Unix?

    - by Andrew
    I'm sure there is a quick and easy way to calculate the sum of a column of values on Unix systems (using something like awk or xargs perhaps), but writing a shell script to parse the rows line by line is the only thing that comes to mind at the moment. For example, what's the simplest way to modify the command below to compute and display the total for the SEGSZ column (70300)?

        ipcs -mb | head -6
        IPC status from /dev/kmem as of Mon Nov 17 08:58:17 2008
        T      ID     KEY        MODE        OWNER     GROUP   SEGSZ
        Shared Memory:
        m      0  0x411c322e --rw-rw-rw-      root      root     348
        m      1  0x4e0c0002 --rw-rw-rw-      root      root   61760
        m      2  0x412013f5 --rw-rw-rw-      root      root    8192

    Read the article

  • How to turn this simple 10 digit hex number back into 8 digits?

    - by Babil
    The algorithm to convert an input 8-digit hex number into a 10-digit one is the following. Given that the 8-digit number is '12 34 56 78':

        x1 = 1 * 16^8 * 2^3
        x2 = 2 * 16^7 * 2^2
        x3 = 3 * 16^6 * 2^1
        x4 = 4 * 16^4 * 2^4
        x5 = 5 * 16^3 * 2^3
        x6 = 6 * 16^2 * 2^2
        x7 = 7 * 16^1 * 2^1
        x8 = 8 * 16^0 * 2^0

    The final 10-digit hex is x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 = '08 86 42 98 E8'. The problem is: how to go back to the 8-digit hex from a given 10-digit hex (for example, from 08 86 42 98 E8 back to 12 34 56 78)? Some sample inputs and outputs:

        input          output
        11 11 11 11    08 42 10 84 21
        22 22 33 33    10 84 21 8C 63
        AB CD 12 34    52 D8 D0 88 64
        45 78 96 32    21 4E 84 98 62
        FF FF FF FF    7B DE F7 BD EF
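    Looking at the formula, each x_i is just the i-th hex digit shifted into its own 5-bit slot (x8 at bit 0, x7 at bit 5, ..., x1 at bit 35), so the 10-digit value is simply the eight nibbles stored 5 bits apart instead of 4. Going back is then just a matter of reading out the 5-bit groups. A sketch in Java:

        public class NibbleCodec {
            // Forward: spread each 4-bit nibble of the 8-digit hex value into a 5-bit slot.
            static long encode(long eightDigits) {
                long out = 0;
                for (int i = 0; i < 8; i++) {
                    long nibble = (eightDigits >> (4 * i)) & 0xF;
                    out |= nibble << (5 * i);
                }
                return out;
            }

            // Inverse: pull each 5-bit group back out and pack it into a 4-bit position.
            static long decode(long tenDigits) {
                long out = 0;
                for (int i = 0; i < 8; i++) {
                    long group = (tenDigits >> (5 * i)) & 0x1F; // always < 16 for valid input
                    out |= group << (4 * i);
                }
                return out;
            }

            public static void main(String[] args) {
                System.out.printf("%010X%n", encode(0x12345678L));   // 08864298E8
                System.out.printf("%08X%n", decode(0x08864298E8L));  // 12345678
            }
        }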

    Read the article

  • In SQL find the combination of rows whose sum add up to a specific amount (or amt in other table)

    - by SamH
    Table_1: D_ID integer, Deposit_amt integer
    Table_2: Total_ID, Total_amt integer

    Is it possible to write a select statement to find all the sets of rows in Table_1 whose Deposit_amt values sum to a Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 summed to 100, the rows with D_ID 2, 3, 42 summed to 100, etc. Help appreciated; let me know if I need to clarify. I am asking this because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible lists of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
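    Outside of SQL, this is the classic subset-sum problem, which is why it blows up: every subset of deposits is a candidate. A brute-force sketch in Java that prints every combination of rows matching one total (it assumes non-negative amounts, and is only workable for small lists):

        import java.util.ArrayList;
        import java.util.List;

        public class DepositCombinations {
            // Print every combination of deposit indices whose amounts sum to `remaining`.
            static void find(int[] deposits, int index, int remaining, List<Integer> chosen) {
                if (remaining == 0) {
                    System.out.println("Matches total: rows " + chosen);
                    return;
                }
                if (remaining < 0 || index == deposits.length) return; // prune (amounts are non-negative)
                chosen.add(index);                                     // take deposits[index]
                find(deposits, index + 1, remaining - deposits[index], chosen);
                chosen.remove(chosen.size() - 1);                      // or skip it
                find(deposits, index + 1, remaining, chosen);
            }

            public static void main(String[] args) {
                int[] deposits = {40, 60, 25, 75, 100};
                find(deposits, 0, 100, new ArrayList<>());
                // prints rows [0, 1], [2, 3], [4]  (40+60, 25+75, 100)
            }
        }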

    Read the article

  • How to use the sum the value of 2 totals in different table (Reporting Services)?

    - by dewacorp.alliances
    Hi there In report design, I have 2 tables (Current and Proposed) the structure like this: Current Parameter | Value | Rate | Total Value ... Proposed Parameter | Value | Rate | Total Value ... Each bottom of the table (Table Footer), I have something called: "Total: " which is a sum of Total field. I called these textboxes are txtbxCurrent and txtbxProposed and the format is in currency already. This thing is running well. But now I need to get a total of these txtbxCurrent and txtbxProposed. How do I do this? Can I take the value of this or not? BTW .. I am using Ms SQL Server 2005 (ReportViewer - client) Also here my dataset looks like: RecID | Type | Parameter | Value | Rate | Total 1, CURRENT, 'Param1', 100, 0.1, 10 1, CURRENT, 'Param2', 200, 0.2, 10 1, PROPOSED, 'Param1', 100, 0.2, 20 1, PROPOSED, 'Param2', 200, 0.2, 20 Thanks

    Read the article

  • What do these C operators mean?

    - by Melkhiah66
    I'm reading the book "Programming Challenges: The Programming Contest Training Manual" and am implementing a problem where I do not understand the use of the operators c1 and the comparison if (n&1). Could someone help me understand what they mean? This is the example code:

        #include <stdio.h>

        #define MAX_N 300
        #define MAX_D 150

        long cache[MAX_N/2][2];

        void make_cache(int n, int d, int mode)
        {
            long tmp[MAX_D];
            int i, count;
            for (i = 0; i < MAX_D; i++) tmp[i] = 0;
            tmp[0] = 1; count = 0;
            while (count <= n) {
                count++;
                for (i = (count & 1); i <= d; i += 2) {
                    if (i) tmp[i] = tmp[i-1] + tmp[i+1];
                    else if (!mode) tmp[0] = tmp[1];
                    else tmp[0] = 0;
                }
                if ((count & 1) == (d & 1))
                    cache[count>>1][mode] = tmp[d];
            }
        }

        int main()
        {
            int n, d, i;
            long sum;
            while (1) {
                scanf("%d %d", &n, &d);
                if (n & 1) sum = 0;
                else if (d == 1) sum = 1;
                else if (n < (d << 1)) sum = 0;
                else if (n == (d << 1)) sum = 1;
                else {
                    make_cache(n, d, 0);
                    make_cache(n, d, 1);
                    sum = 0;
                    for (i = 0; i <= (n >> 1); i++)
                        sum += cache[i][0] * cache[(n >> 1) - i][1];
                }
                printf("%ld\n", sum);
            }
            return 0;
        }
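    The expressions in question are plain C bitwise operators, and they behave the same way in Java: n & 1 isolates the lowest bit (so it tests odd/even), n >> 1 shifts right by one (integer division by 2), and d << 1 shifts left by one (multiplication by 2). A tiny illustration:

        public class BitOps {
            public static void main(String[] args) {
                int n = 13;                         // binary 1101
                System.out.println(n & 1);          // 1  -> lowest bit set, so n is odd
                System.out.println(n >> 1);         // 6  -> 1101 shifted right is 110
                System.out.println(n << 1);         // 26 -> 1101 shifted left is 11010
                System.out.println((n & 1) == 1);   // true, same test as "if (n & 1)" in C
            }
        }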

    Read the article

  • Programming help Loop adding

    - by Deonna
    I know this is probably really simple but I'm not sure what I'm doing wrong... The assignment states: "For the second program for this lab, you are to have the user enter an integer value in the range of 10 to 50. You are to verify that the user enters a value in that range, and continue to prompt him until he does give you a value in that range. After the user has successfully entered a value in that range, you are to display the sum of all the integers from 1 to the value entered." I have this so far:

        #include <iostream.h>

        int main()
        {
            int num, sum;
            cout << "do-while Loop Example 2" << endl << endl;
            do {
                cout << "Enter a value from 10 to 50: ";
                cin >> num;
                if (num < 10 || num > 50)
                    cout << "Out of range; Please try again..." << endl;
            } while (num < 10 || num > 50);
            {
                int i;
                int sum = 0;
                for (num = 1; num <= 50; num++)
                    sum = sum + num;
            }
            cout << endl << "The sum is " << sum << endl;
            return 0;
        }

    I'm just not sure exactly what I'm doing wrong... I keep getting the wrong sum for the total...

    Read the article

  • How to refactor this duplicated LINQ code?

    - by benrick
    I am trying to figure out how to refactor this LINQ code nicely. This code and other similar code repeats within the same file as well as in other files. Sometime the data being manipulated is identical and sometimes the data changes and the logic remains the same. Here is an example of duplicated logic operating on different fields of different objects. public IEnumerable<FooDataItem> GetDataItemsByColor(IEnumerable<BarDto> dtos) { double totalNumber = dtos.Where(x => x.Color != null).Sum(p => p.Number); return from stat in dtos where stat.Color != null group stat by stat.Color into gr orderby gr.Sum(p => p.Number) descending select new FooDataItem { Color = gr.Key, NumberTotal = gr.Sum(p => p.Number), NumberPercentage = gr.Sum(p => p.Number) / totalNumber }; } public IEnumerable<FooDataItem> GetDataItemsByName(IEnumerable<BarDto> dtos) { double totalData = dtos.Where(x => x.Name != null).Sum(v => v.Data); return from stat in dtos where stat.Name != null group stat by stat.Name into gr orderby gr.Sum(v => v.Data) descending select new FooDataItem { Name = gr.Key, DataTotal = gr.Sum(v => v.Data), DataPercentage = gr.Sum(v => v.Data) / totalData }; } Anyone have a good way of refactoring this?

    Read the article
