Search Results

Search found 6716 results on 269 pages for 'distributed algorithm'.

Page 94/269 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Echo MySQL results in a loop?

    - by Roy D. Porter
    I am using turn.js to make a book. Every div within the 'deathnote' div becomes a new page:

        <div id="deathnote">                                                <!-- starts book -->
          <div style="background-image:url(images/coverpage.jpg);"></div>   <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>       <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>       <!-- creates new page -->
        </div>                                                              <!-- ends book -->

    What I am trying to do is get three 'content' divs (content being a name and a cause of death) onto one page, and then generate a new page. So here is what I want:

        <div id="deathnote">                                                <!-- starts book -->
          <div style="background-image:url(images/coverpage.jpg);"></div>   <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);"></div>       <!-- creates new page -->
          <div style="background-image:url(images/paper.jpg);">             <!-- creates new page but leaves it open -->
            <div> CONTENT </div>
            <div> CONTENT </div>
            <div> CONTENT </div>
          </div>                                                            <!-- ends the page -->
        </div>                                                              <!-- ends book -->

    Seems simple enough, however the content is data from a MySQL DB, so I have to echo it in using PHP. Here is what I have so far:

        <div id="deathnote">
          <div style="background-image:url(images/coverpage.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
          <div style="background-image:url(images/paper.jpg);"></div>
        <?php
        $pagecount = 0;
        $db = new mysqli('localhost', 'username', 'passw', 'DB');
        if ($db->connect_errno > 0) {
            die('Unable to connect to database [' . $db->connect_error . ']');
        }
        $sql = <<<SQL
        SELECT * FROM `TABLE`
        SQL;
        if (!$result = $db->query($sql)) {
            die('There was an error running the query [' . $db->error . ']');
        }
        // The connecting script above works; the part I am having trouble with is below.
        $pagecount = 0;
        while ($row = $result->fetch_assoc()) {          // gets the row (and makes sure it isn't nothing)
            echo '<div style="background-image:url(images/paper.jpg);">';   // opens a new page
            while ($pagecount !== 3) {                   // keeps count of how many content divs are on the page
                while ($row = $result->fetch_assoc()) {
                    // one content div
                    echo '<div class="content"><div class="name">' . $row['victim'] . '</div><div class="cod">' . $row['cod'] . '</div></div>';
                    $pagecount++;                        // up the page count
                }
            }
            $pagecount = 0;                              // put it back to 0
            echo '</div>';                               // end page
        }
        $db->close();
        ?>
        <div style="background-image:url(images/backpage.jpg);"></div>      <!-- back page -->
        </div>

    At the moment I seem to be causing an infinite loop, so the page won't load. The problem resides within the while loops. Any help is greatly appreciated. Thanks in advance. :)
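
    A minimal sketch of one way to restructure the loop, assuming the goal is simply "start a new page div every third row": fetch each row exactly once and open/close the page div based on a running index, rather than nesting fetch loops inside a count loop (the inner fetch loop can exhaust the result set, so $pagecount can jump past 3 and the middle while never exits, and the row fetched by the outer while is never echoed). The sketch is in Python for brevity; rows stands in for the fetched result set and the HTML strings are placeholders.

        def render_pages(rows, per_page=3):
            """Group rows into page divs, three content divs per page."""
            html = []
            for i, row in enumerate(rows):
                if i % per_page == 0:                                   # every third row: open a new page
                    html.append('<div style="background-image:url(images/paper.jpg);">')
                html.append('<div class="content"><div class="name">%s</div>'
                            '<div class="cod">%s</div></div>' % (row['victim'], row['cod']))
                if i % per_page == per_page - 1 or i == len(rows) - 1:  # last slot on the page, or last row overall
                    html.append('</div>')
            return '\n'.join(html)

        sample = [{'victim': 'A', 'cod': 'x'}, {'victim': 'B', 'cod': 'y'},
                  {'victim': 'C', 'cod': 'z'}, {'victim': 'D', 'cod': 'w'}]
        print(render_pages(sample))

    In PHP the same shape works with a single counter incremented inside one while ($row = $result->fetch_assoc()) loop.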

    Read the article

  • Fast way to manually mod a number

    - by Nikolai Mushegian
    I need to be able to calculate (a^b) % c for very large values of a and b (values that individually push the limits of their type and cause overflow errors when you try to calculate a^b). For small enough numbers, the identity (a^b) % c = ((a % c)^b) % c works, but if c is too large this doesn't really help. I wrote a loop to do the mod operation manually, one multiplication by a at a time:

        private static ulong no_Overflow_Mod(ulong num_base, ulong num_exponent, ulong mod)
        {
            ulong answer = 1;
            for (ulong x = 0; x < num_exponent; x++)
            {
                answer = (answer * num_base) % mod;
            }
            return answer;
        }

    but this takes a very long time. Is there any simple and fast way to do this operation without actually having to take a to the power of b AND without using time-consuming loops? If all else fails, I can make a bool array to represent a huge data type and figure out how to do this with bitwise operators, but there has to be a better way.
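
    The standard trick for this is exponentiation by squaring (modular exponentiation), which replaces the b-iteration loop with about log2(b) iterations: square the base at each step and multiply it into the result only when the corresponding bit of the exponent is set, reducing modulo c after every multiplication. A minimal sketch in Python (whose built-in pow(a, b, c) already does this); note that in a fixed-width integer type the intermediate products can still overflow when c is close to the type's maximum, which is where a wider type or a separate mulmod routine comes in.

        def pow_mod(base, exponent, mod):
            """Compute (base ** exponent) % mod by square-and-multiply."""
            result = 1
            base %= mod
            while exponent > 0:
                if exponent & 1:                 # low bit set: fold this power of the base into the result
                    result = (result * base) % mod
                base = (base * base) % mod       # square the base for the next bit
                exponent >>= 1                   # move on to the next bit of the exponent
            return result

        assert pow_mod(7, 1234567, 99991) == pow(7, 1234567, 99991)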

    Read the article

  • Where can I find soft-multiply and divide algorithms?

    - by srking
    I'm working on a micro-controller without hardware multiply and divide. I need to cook up software algorithms for these basic operations that strike a nice balance between compact size and efficiency. My C compiler port will employ these algos, not the C developers themselves. My google-fu is so far turning up mostly noise on this topic. Can anyone point me to something informative? I can use add/sub and shift instructions. Table-lookup-based algos might also work for me, but I'm a bit worried about cramming so much into the compiler's back-end... um, so to speak. Thanks!
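
    For reference, the two classic routines are shift-and-add multiplication and restoring division; both need only add/subtract, shift, and compare, and both run in a fixed number of iterations equal to the operand width, which makes them easy to size up for a compiler runtime. A minimal sketch of the unsigned versions in Python (the real implementation would be the target's assembly, and the 16-bit width here is just an assumption for illustration):

        def soft_mul(a, b, width=16):
            """Unsigned shift-and-add multiply, truncated to `width` bits."""
            mask = (1 << width) - 1
            result = 0
            while b:
                if b & 1:                       # low multiplier bit set: add the shifted multiplicand
                    result = (result + a) & mask
                a = (a << 1) & mask             # shift multiplicand left by one
                b >>= 1                         # consume one multiplier bit
            return result

        def soft_divmod(n, d, width=16):
            """Unsigned restoring division: one compare-and-subtract per bit."""
            q = r = 0
            for i in range(width - 1, -1, -1):
                r = (r << 1) | ((n >> i) & 1)   # bring down the next dividend bit
                if r >= d:                      # the divisor fits: subtract and set the quotient bit
                    r -= d
                    q |= 1 << i
            return q, r

        assert soft_mul(37, 41) == (37 * 41) & 0xFFFF
        assert soft_divmod(1000, 7) == divmod(1000, 7)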

    Read the article

  • How would you write a program to find the shortest pangram in a list of words?

    - by jonathanasdf
    Given a list of words which contains the letters a-z at least once, how would you write a program to find the shortest pangram counted by number of characters (not counting spaces) as a combination of the words? Since I am not sure whether short answers exist, this is not code golf, but rather just a discussion of how you would approach this. However, if you think you can manage to write a short program that would do this, then go ahead, and this might turn into code golf :)
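
    Viewed as an optimisation problem this is weighted set cover (the universe is the 26 letters, each word covers its letter set at a cost equal to its length), which is NP-hard in general, so an exact shortest pangram needs search such as branch-and-bound or dynamic programming over the 2^26 subsets of missing letters, while a greedy heuristic gives a decent but not necessarily shortest answer. A sketch of the greedy version, with a made-up word list just for the demo:

        ALPHABET = set('abcdefghijklmnopqrstuvwxyz')

        def greedy_pangram(words):
            """Repeatedly take the word covering the most still-missing letters per character of length."""
            missing = set(ALPHABET)
            chosen = []
            while missing:
                def score(w):
                    return len(missing & set(w.lower())) / len(w)
                best = max(words, key=score)
                if score(best) == 0:                      # no word covers any remaining letter
                    raise ValueError('cannot cover: ' + ''.join(sorted(missing)))
                chosen.append(best)
                missing -= set(best.lower())
            return chosen

        demo = ['quick', 'brown', 'fox', 'jumps', 'over', 'lazy', 'dog', 'the', 'vexing']
        result = greedy_pangram(demo)
        print(result, sum(len(w) for w in result))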

    Read the article

  • A balanced binary search tree which is also a heap

    - by saeedn
    I'm looking for a data structure in which each element has two keys. Ordered by one of them, the structure is a BST; ordered by the other, it is a heap. With a little searching, I found a structure called a treap. It uses the heap property with a random distribution on the heap keys to keep the BST balanced. What I want is a balanced BST which is also a heap. The BST in a treap can become unbalanced if I insert elements whose heap keys are in an order of my choice. Is there such a data structure?

    Read the article

  • How to find the longest common substring using suffix trees?

    - by user384706
    According to the wiki, the longest common substring problem can be solved using a suffix tree. From the wiki: "The longest common substrings of a set of strings can be found by building a generalised suffix tree for the strings, and then finding the deepest internal nodes which have leaf nodes from all the strings in the subtree below it." I don't get this. Example: if I have ABCDE and XABCZ, then I build the generalised suffix tree (tree figure omitted here; some branches from XABCZ were already left out due to space). The longest common substring is ABC, but I can not see how the wiki's description helps here: ABC is not the deepest internal node with leaf nodes. Any help to understand how this works?
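
    One point that may help: "deepest" here refers to string depth (the total length of the edge labels from the root down to the node), not the node's height in the drawing. In the example, the internal node whose path from the root spells ABC has suffix leaves from both ABCDE (via the suffix ABCDE) and XABCZ (via the suffix ABCZ) beneath it, and no internal node with leaves from both strings has a longer path label, so its path label ABC is the answer. For cross-checking small examples without building a suffix tree, a plain dynamic-programming version (not the method the wiki describes) is handy:

        def longest_common_substring(s, t):
            """O(len(s) * len(t)) DP over ending positions; returns one longest common substring."""
            best, best_end = 0, 0
            prev = [0] * (len(t) + 1)
            for i in range(1, len(s) + 1):
                cur = [0] * (len(t) + 1)
                for j in range(1, len(t) + 1):
                    if s[i - 1] == t[j - 1]:
                        cur[j] = prev[j - 1] + 1          # extend the common suffix ending at (i, j)
                        if cur[j] > best:
                            best, best_end = cur[j], i
                prev = cur
            return s[best_end - best:best_end]

        assert longest_common_substring('ABCDE', 'XABCZ') == 'ABC'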

    Read the article

  • How to find the top links in a website?

    - by Anil
    I want to know which links within a site are the best, whether ranked by PageRank or by popularity. For example, http://www.pragprog.com is a site; I want to find out which are the most relevant links this site has. The links should not be external, outward-pointing links; they should belong to the same site. Can Google or any similar service tell me such information?

    Read the article

  • Reverse regular expressions to generate data

    - by Anton Gogolev
    In one of the StackOverflow Podcasts (the one where the guys were discussing data generation for testing DBs -- either #11 or #12), Jeff mentioned something like "reverse regular expressions", which are used exactly for that purpose: given a regex, produce a string which will eventually match said regex. What is the correct term for this whole concept? Is this a well-known concept?

    Read the article

  • Is writing shorter code/algorithms more efficient (performance-wise)?

    - by Carlos
    After coming across the code golf trivia around the site, it is obvious people try to find ways to write code and algorithms as short as they possibly can in terms of characters, lines and total size, even if that means writing something like:

        n=input()
        while n>1:n=(n/2,n*3+1)[n%2];print n

    So as a beginner I start to wonder whether size actually matters :D. It is obviously a very subjective question, highly dependent on the actual code being used, but what is the rule of thumb in the real world? In the case that size won't matter, how come we don't focus more on performance rather than size?

    Read the article

  • Given an even number of vertices, how to find an optimum set of pairs based on proximity?

    - by Alex Z
    The problem: we have a set of n vertices in 3D Euclidean space, where n is even. We want to pair them up based on their proximity. In other words, we'd like to find a set of vertex pairs where the vertices in each pair are as close together as possible, while sacrificing as little as possible of the proximity of the other pairs in doing so. I am not looking for the most optimal solution (if it even strictly exists/can be computed), just a reasonable one that can be computed relatively quickly. A relatively awful brute-force approach is to choose a vertex, loop through the rest to find its nearest neighbour, and repeat until none are left. Of course, as we near the end of the list the closest remaining vertex could be very far away, but it is the only choice, so this approach can fail badly on the last requirement above.
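
    One reasonable middle ground, under the assumption that an approximate answer is acceptable: instead of picking an arbitrary vertex and grabbing its nearest neighbour, sort all candidate pairs by distance and greedily accept the globally closest pair whose endpoints are both still free. This avoids the worst behaviour of the approach described above, though the truly optimal pairing is a minimum-weight perfect matching (the blossom algorithm, or a library routine such as networkx.min_weight_matching). A sketch:

        from itertools import combinations
        import math

        def greedy_pairing(points):
            """Pair up points by repeatedly taking the globally closest unpaired pair (heuristic)."""
            edges = sorted(combinations(range(len(points)), 2),
                           key=lambda ij: math.dist(points[ij[0]], points[ij[1]]))
            used, pairs = set(), []
            for i, j in edges:
                if i not in used and j not in used:       # both endpoints still unpaired
                    used.update((i, j))
                    pairs.append((i, j))
            return pairs

        pts = [(0, 0, 0), (1, 0, 0), (10, 0, 0), (11, 0, 0)]
        print(greedy_pairing(pts))   # [(0, 1), (2, 3)]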

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-inputted words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange" it should match an entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or a series of wildcard characters, like say "or__ge", which would also match "orange". The key requirements are:

    * This should be as fast as possible.
    * It should use the smallest amount of memory achievable.

    If the size of the word list were small I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
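
    A trie (prefix tree) fits both requirements reasonably well: memory is shared across common prefixes, an exact lookup touches one node per letter, and a wildcard only has to branch into the children that actually exist at that depth instead of scanning the whole dictionary. A minimal sketch, assuming '_' is the wildcard character as in the example:

        class TrieNode:
            __slots__ = ('children', 'is_word')
            def __init__(self):
                self.children = {}
                self.is_word = False

        class Trie:
            def __init__(self, words=()):
                self.root = TrieNode()
                for w in words:
                    self.insert(w)

            def insert(self, word):
                node = self.root
                for ch in word:
                    node = node.children.setdefault(ch, TrieNode())
                node.is_word = True

            def matches(self, pattern, node=None, i=0):
                """True if some stored word matches `pattern`; '_' matches any single letter."""
                node = node or self.root
                if i == len(pattern):
                    return node.is_word
                ch = pattern[i]
                if ch == '_':                              # wildcard: try every existing branch
                    return any(self.matches(pattern, child, i + 1)
                               for child in node.children.values())
                child = node.children.get(ch)
                return child is not None and self.matches(pattern, child, i + 1)

        t = Trie(['orange', 'orator', 'apple'])
        print(t.matches('or__ge'), t.matches('or__go'))    # True False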

    Read the article

  • Question about mergesort

    - by davit-datuashvili
    I have written merge sort code; here it is:

        public class mergesort {
            public static int a[];

            public static void merges(int work[], int low, int high) {
                if (low == high)
                    return;
                else {
                    int mid = (low + high) / 2;
                    merges(work, low, mid);
                    merges(work, mid + 1, high);
                    merge(work, low, mid + 1, high);
                }
            }

            public static void main(String[] args) {
                int a[] = new int[]{64, 21, 33, 70, 12, 85, 44, 99, 36, 108};
                merges(a, 0, a.length - 1);
                for (int i = 0; i < a.length; i++) {
                    System.out.println(a[i]);
                }
            }

            public static void merge(int work[], int low, int high, int upper) {
                int j = 0;
                int l = low;
                int mid = high - 1;
                int n = upper - l + 1;
                while (low <= mid && high <= upper)
                    if (a[low] < a[high]) work[j++] = a[low++];
                    else work[j++] = a[high++];
                while (low <= mid) work[j++] = a[low++];
                while (high <= upper) work[j++] = a[high++];
                for (j = 0; j < n; j++) a[l + j] = work[j];
            }
        }

    But it does not work; after compiling and running it I get this error:

        java.lang.NullPointerException
            at mergesort.merge(mergesort.java:45)
            at mergesort.merges(mergesort.java:12)
            at mergesort.merges(mergesort.java:10)
            at mergesort.merges(mergesort.java:10)
            at mergesort.merges(mergesort.java:10)
            at mergesort.main(mergesort.java:27)
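
    The immediate cause of the NullPointerException is the static field a: main declares a local int a[] that shadows it, so the static a that merge() reads from is still null when merge() runs. Beyond that, merge() reads from the static a but writes into the work parameter, so the source and scratch arrays are tangled together. A minimal sketch of the same top-down merge sort with the roles kept separate (written in Python rather than the question's Java, so the index bookkeeping stays short):

        def merge_sort(a, lo=0, hi=None, work=None):
            """Sort a[lo..hi] in place, merging through a scratch buffer `work`."""
            if hi is None:
                hi, work = len(a) - 1, [0] * len(a)
            if lo >= hi:
                return
            mid = (lo + hi) // 2
            merge_sort(a, lo, mid, work)
            merge_sort(a, mid + 1, hi, work)
            i, j, k = lo, mid + 1, lo                 # merge a[lo..mid] and a[mid+1..hi] into work
            while i <= mid and j <= hi:
                if a[i] <= a[j]:
                    work[k] = a[i]; i += 1
                else:
                    work[k] = a[j]; j += 1
                k += 1
            while i <= mid:
                work[k] = a[i]; i += 1; k += 1
            while j <= hi:
                work[k] = a[j]; j += 1; k += 1
            a[lo:hi + 1] = work[lo:hi + 1]            # copy the merged run back

        data = [64, 21, 33, 70, 12, 85, 44, 99, 36, 108]
        merge_sort(data)
        print(data)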

    Read the article

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open source high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit. Plain LRU wouldn't work, as there will be remote batch/spider processes hitting the service regularly. Plain LFU wouldn't work because content will age. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that could handle those two situations. Perhaps some simple strategy with timestamps and sources. There have to be many programmers who have faced this situation before. That's why the "good enough" bit.

    Read the article

  • Explanation needed for sum of primes below n

    - by Bala Krishnan
    Today I solved Project Euler problem 10, and it took 7 hours for my Python program to show the result. But on that forum a person named lassevk posted a solution that took only 4 seconds. I can't post this question in that forum because it is not a discussion forum, so bear that in mind if you want to mark this question as non-constructive.

        marked = [0] * 2000000
        value = 3
        s = 2
        while value < 2000000:
            if marked[value] == 0:
                s += value
                i = value
                while i < 2000000:
                    marked[i] = 1
                    i += value
            value += 2
        print s

    If anyone understands this code, please explain it as simply as possible. Link to the Problem 10 question.
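
    What makes the posted code fast is that it is a Sieve of Eratosthenes restricted to odd numbers: s starts at 2 (the only even prime), value walks through the odd numbers, any value that has never been marked must be prime (no smaller prime divides it), so it is added to s, and then every multiple of it below 2,000,000 is marked so it is skipped later. An annotated sketch of the same idea in Python 3 syntax; the only real change is starting the crossing-off at value * value, a standard optimisation:

        def sum_primes_below(limit):
            """Sum of all primes below `limit`, via a Sieve of Eratosthenes over odd numbers."""
            marked = [False] * limit                      # marked[n] becomes True once n is known composite
            total = 2                                     # 2 is the only even prime
            for value in range(3, limit, 2):              # odd candidates only
                if not marked[value]:                     # never marked => prime
                    total += value
                    for multiple in range(value * value, limit, value):
                        marked[multiple] = True           # cross off every multiple of this prime
            return total

        print(sum_primes_below(2_000_000))                # same answer as the snippet above, in seconds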

    Read the article

  • How can I generate an "unlimited" world?

    - by snowlord
    I would like to create a game with an endless (in reality an extremely large) world in which the player can move about. Whether or not I will ever get around to implementing the game is one matter, but I find the idea interesting and would like some input on how to do it. The point is to have a world where all data is generated randomly on-demand, but in a deterministic way.

    Currently I focus on a large 2D map from which it should be possible to display any part without knowledge about the surrounding parts. I have implemented a prototype by writing a function that gives a random-looking, but deterministic, integer given the x and y of a pixel on the map (see my recent question about this function). Using this function I populate the map with "random" values, and then I smooth the map using a simple filter based on the surrounding pixels. This makes the map dependent on a few pixels outside its edge, but that's not a big problem. The final result is something that at least looks like a map (especially with a good altitude color map).

    Given this, one could maybe first generate a coarser map which is used to generate bigger differences in altitude to create mountain ranges and seas. Anyway, that was my idea, but I am sure that there exist ways to do this already, and I also believe that, given the specification, many of you can come up with better ideas. EDIT: Forgot the link to my question.
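
    A common way to remove even the dependence on neighbouring pixels is value noise: hash the integer lattice coordinates (plus a world seed) into a deterministic pseudo-random value and interpolate between the surrounding lattice points, so any pixel's altitude is a pure function of its coordinates and can be computed in isolation; summing several of these at different scales ("octaves") gives the coarse-to-fine mountain/sea structure described above. A minimal sketch, with mixing constants and the seed chosen arbitrarily for illustration:

        import math

        def lattice_value(x, y, seed=20100522):
            """Deterministic pseudo-random value in [0, 1) for the integer lattice point (x, y)."""
            h = (x * 374761393 + y * 668265263 + seed * 2246822519) & 0xFFFFFFFF
            h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
            h ^= h >> 16
            return h / 2**32

        def height(px, py, scale=16.0, seed=20100522):
            """Bilinearly interpolated value noise: the same (px, py) always yields the same height."""
            x, y = px / scale, py / scale
            x0, y0 = math.floor(x), math.floor(y)
            fx, fy = x - x0, y - y0
            top = lattice_value(x0, y0, seed) * (1 - fx) + lattice_value(x0 + 1, y0, seed) * fx
            bot = lattice_value(x0, y0 + 1, seed) * (1 - fx) + lattice_value(x0 + 1, y0 + 1, seed) * fx
            return top * (1 - fy) + bot * fy

        print(height(1000003, -250000) == height(1000003, -250000))   # True: chunks never need their neighbours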

    Read the article

  • Find a repeated number out of 3 boxes

    - by james1
    I have 3 boxes, and each box contains 10 pieces of numbered paper (1 - 10), but there is one number that is the same in all 3 boxes. E.g. box1 has number 4, box2 has number 4 and box3 also has number 4. How do I find that repeated number in Java in the most efficient/fastest way possible?
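
    A sketch of the usual approach (in Python rather than Java, but a HashSet does the same job): build a hash set from one box and intersect it with the others, which is linear in the total number of papers and needs only one pass over each box. The example boxes below are invented for the demo:

        def common_number(box1, box2, box3):
            """Return a number present in all three boxes, or None if there isn't one."""
            common = set(box1) & set(box2) & set(box3)    # hash-set intersection, O(n) overall
            return next(iter(common), None)

        print(common_number([3, 7, 4, 9, 1], [8, 4, 2, 6, 5], [10, 4, 11, 12, 13]))   # 4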

    Read the article

  • JavaScript random number scrambler

    - by stjowa
    Hi, I need a Javascript random number scrambler for my website. Seems simple, but I can not figure out how to do it. Can anyone help me out? I have the following array of numbers: 1 2 3 4 5 6 7 8 9 I would like to be able to have these numbers scrambled randomly. Like the following: 3 6 4 2 9 5 1 8 7 or 4 1 7 3 5 9 2 6 8 So, specifically, I would like a function that takes in an array of numbers (1 - n) and then returns that same array of numbers - scrambled randomly with different calls to the function. Maybe a noob function, but can't seem to figure it out. Thanks!
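
    The usual way to do this is the Fisher-Yates shuffle: walk the array from the last index down, swapping each element with a randomly chosen element at or before it, which gives every ordering the same probability. A sketch in Python (the question asks for JavaScript, but the loop translates directly using Math.floor(Math.random() * (i + 1)) for the random index):

        import random

        def scramble(numbers):
            """Return a new list with the same numbers in a random order (Fisher-Yates)."""
            result = list(numbers)                   # copy so the caller's array is left untouched
            for i in range(len(result) - 1, 0, -1):
                j = random.randint(0, i)             # random index from the not-yet-shuffled prefix
                result[i], result[j] = result[j], result[i]
            return result

        print(scramble([1, 2, 3, 4, 5, 6, 7, 8, 9]))
        print(scramble([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # different order on different calls (with high probability)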

    Read the article

  • Find common nodes from two linked lists using recursion

    - by Dan
    I have to write a method that returns a linked list with all the nodes that are common to two linked lists, using recursion and no loops. For example, if the first list is 2 - 5 - 7 - 10 and the second list is 2 - 4 - 8 - 10, the list that would be returned is 2 - 10. I am getting nowhere with this. What I have been thinking of is to check each value of the first list against each value of the second list recursively, but then the second list gets cut by one node every time, and I cannot compare the next value in the first list with the whole second list again. I hope this makes sense... Can anyone help?
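
    If both lists are sorted in ascending order, as they are in the example, the recursion does not need to re-scan the second list for every node of the first: compare the two heads, emit a node when they match, and otherwise recurse past whichever head is smaller, since that value cannot appear later in the other list. A sketch in Python (the same three cases translate directly to Java); it assumes ascending order:

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        def common(a, b):
            """Nodes whose values appear in both sorted lists, using recursion only (no loops)."""
            if a is None or b is None:
                return None
            if a.value == b.value:                     # shared value: keep it and advance both lists
                return Node(a.value, common(a.next, b.next))
            if a.value < b.value:                      # a's head is too small to appear in b
                return common(a.next, b)
            return common(a, b.next)                   # b's head is too small to appear in a

        def build(values):
            return Node(values[0], build(values[1:])) if values else None

        def to_list(node):
            return [] if node is None else [node.value] + to_list(node.next)

        print(to_list(common(build([2, 5, 7, 10]), build([2, 4, 8, 10]))))   # [2, 10]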

    Read the article
