Search Results

Search found 8557 results on 343 pages for 'infinite loop'.


  • Python, dictionaries, and chi-square contingency table

    - by rohanbk
    I have a file which contains several lines in the following format (word, time that the word occurred in, and frequency of documents containing the given word within the given instance in time): #inputfile <word, time, frequency> apple, 1, 3 banana, 1, 2 apple, 2, 1 banana, 2, 4 orange, 3, 1 I have the Python class below that I used to create 2-D dictionaries to store the above file using <word, time> as the key, and frequency as the value: class Ddict(dict): ''' 2D dictionary class ''' def __init__(self, default=None): self.default = default def __getitem__(self, key): if not self.has_key(key): self[key] = self.default() return dict.__getitem__(self, key) wordtime=Ddict(dict) # Store each inputfile entry with a <word,time> key timeword=Ddict(dict) # Store each inputfile entry with a <time,word> key # Loop over every line of the inputfile for line in open('inputfile'): word,time,count=line.split(',') # If <word,time> already a key, increment count try: wordtime[word][time]+=count # Otherwise, create the key except KeyError: wordtime[word][time]=count # If <time,word> already a key, increment count try: timeword[time][word]+=count # Otherwise, create the key except KeyError: timeword[time][word]=count The question that I have pertains to calculating certain things while iterating over the entries in this 2D dictionary. For each word 'w' at each time 't', calculate: The number of documents with word 'w' within time 't'. (a) The number of documents without word 'w' within time 't'. (b) The number of documents with word 'w' outside time 't'. (c) The number of documents without word 'w' outside time 't'. (d) Each of the items above represents one of the cells of a chi-square contingency table for each word and time. Can all of these be calculated within a single loop or do they need to be done one at a time? Ideally, I would like the output to be what's below, where a,b,c,d are all the items calculated above: print "%s, %s, %s, %s" %(a,b,c,d)
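    One way to get all four cells in a single pass over the entries is to precompute the marginal totals first and derive b, c and d from them. The sketch below is a minimal illustration and assumes, purely for the example, that summing the per-word frequencies gives the per-time and overall document totals; if the real totals come from somewhere else, substitute them there:

        from collections import defaultdict

        counts = defaultdict(lambda: defaultdict(int))   # counts[word][time] = docs with `word` at `time`

        with open('inputfile') as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith('#'):      # skip the header/comment line
                    continue
                word, time, freq = [field.strip() for field in line.split(',')]
                counts[word][time] += int(freq)

        word_totals = {w: sum(times.values()) for w, times in counts.items()}   # docs with w at any time
        time_totals = defaultdict(int)                                          # docs at time t, any word
        for w, times in counts.items():
            for t, n in times.items():
                time_totals[t] += n
        grand_total = sum(word_totals.values())

        for w, times in counts.items():
            for t, a in times.items():
                b = time_totals[t] - a           # without w, within t
                c = word_totals[w] - a           # with w, outside t
                d = grand_total - a - b - c      # without w, outside t
                print("%s, %s, %s, %s" % (a, b, c, d))

    The same marginal-totals idea works unchanged with the Ddict-based structures; only the two summation passes need to be added before the printing loop.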

    Read the article

  • X++ Coming Out Of QueryRun In Fetch Method

    - by will
    I can't seem to find the resolution for this. I have modified the Fetch method in a report, so that if the queryRun is changed, and the new ID is fetched, then the while loop starts over and a new page appears and 2 elements are executed. This part works fine, the next part does not, in each ID there are several Records which I am using Element.Execute(); and element.Send(); to process. What happens is, the first ID is selected, the element (body) of the reports is executed and the element is sent as expected, however the while loop does not go onto the next ID? Here is the code; public boolean fetch() { APMPriorityId oldVanId, newVanId; LogisticsControlTable lLogisticsControlTable; int64 cnt, counter; ; queryRun = new QueryRun(this); if (!queryRun.prompt() || !element.prompt()) { return false; } while (queryRun.next()) { if (queryRun.changed(tableNum(LogisticsControlTable))) { lLogisticsControlTable = queryRun.get(tableNum(LogisticsControlTable)); if (lLogisticsControlTable) { info(lLogisticsControlTable.APMPriorityId); cnt = 0; oldVanId = newVanId; newVanId = lLogisticsControlTable.APMPriorityId; if(newVanId) { element.newPage(); element.execute(1); element.execute(2); } } if (lLogisticsControlTable.APMPriorityId) select count(recId) from lLogisticsControlTable where lLogisticsControlTable.APMPriorityId == newVanId; counter = lLogisticsControlTable.RecId; while select lLogisticsControlTable where lLogisticsControlTable.APMPriorityId == newVanId { cnt++; if(lLogisticsControlTable.APMPriorityId == newVanId && cnt <= counter) { element.execute(3); element.send(lLogisticsControlTable); } } } } return true; }

    Read the article

  • Getting GPS data?

    - by svebee
    Inside public class IAmHere extends Activity implements LocationListener { i have @Override public void onLocationChanged(Location location) { // TODO Auto-generated method stub } @Override public void onProviderDisabled(String provider) { // TODO Auto-generated method stub } @Override public void onProviderEnabled(String provider) { // TODO Auto-generated method stub } @Override public void onStatusChanged(String provider, int status, Bundle extras) { // TODO Auto-generated method stub } and inside public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.iamhere); i have LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE); List<String> providers = lm.getProviders(true); /* Loop over the array backwards, and if you get an accurate location, then break out the loop*/ Location l = null; for (int i=providers.size()-1; i>=0; i--) { l = lm.getLastKnownLocation(providers.get(i)); if (l != null) break; } double[] gps = new double[2]; if (l != null) { gps[0] = l.getLatitude(); gps[1] = l.getLongitude(); } gpsString = (TextView)findViewById(R.id.gpsString); String Data = ""; String koordinata1 = Double.toString(gps[0]); String koordinata2 = Double.toString(gps[1]); Data = Data + koordinata1 + " | " + koordinata2 + "\n"; gpsString.setText(String.valueOf(Data)); but seems it's not working? Why? I mean even emulator doesn't want to send GPS data - When I click "send" via UI or console, nothing happens...? Thank you.

    Read the article

  • Signals and Variables in VHDL - Problem

    - by Morano88
    I have a signal and this signal is a bitvector. The length of the bitvector depends on an input n; it is not fixed. In order to find the length, I have to do some computations. Can I define a signal after defining the variables? It is giving me errors when I do that. It works fine if I keep the signal before the variables, but I don't want that: the length of Z depends on the computations of the variables. What is the solution? library IEEE; use IEEE.STD_LOGIC_1164.ALL; use IEEE.STD_LOGIC_ARITH.ALL; use IEEE.STD_LOGIC_UNSIGNED.ALL; entity BSD_Full_Comp is Generic (n:integer:=8); Port(X, Y : inout std_logic_vector(n-1 downto 0); FZ : out std_logic_vector(1 downto 0)); end BSD_Full_Comp; architecture struct of BSD_Full_Comp is Component BSD_BitComparator Port ( Ai_1 : inout STD_LOGIC; Ai_0 : inout STD_LOGIC; Bi_1 : inout STD_LOGIC; Bi_0 : inout STD_LOGIC; S1 : out STD_LOGIC; S0 : out STD_LOGIC ); END Component; Signal Z : std_logic_vector(2*n-3 downto 0); begin ass : process Variable length : integer := n; Variable pow : integer :=0 ; Variable ZS : integer :=0; begin while length /= 0 loop length := length/2; pow := pow+1; end loop; length := 2 ** pow; ZS := length - n; wait; end process; end struct;

    Read the article

  • What would be the safest way to store objects of classes derived from a common interface in a common

    - by Svenstaro
    I'd like to manage a bunch of objects of classes derived from a shared interface class in a common container. To illustrate the problem, let's say I'm building a game which will contain different actors. Let's call the interface IActor and derive Enemy and Civilian from it. Now, the idea is to have my game main loop be able to do this: // somewhere during init std::vector<IActor> ActorList; Enemy EvilGuy; Civilian CoolGuy; ActorList.push_back(EvilGuy); ActorList.push_back(CoolGuy); and // main loop while(!done) { BOOST_FOREACH(IActor CurrentActor, ActorList) { CurrentActor.Update(); CurrentActor.Draw(); } } ... or something along those lines. This example obviously won't work but that is pretty much the reason I'm asking here. I'd like to know: What would be the best, safest, highest-level way to manage those objects in a common heterogeneous container? I know about a variety of approaches (Boost::Any, void*, handler class with boost::shared_ptr, Boost.Pointer Container, dynamic_cast) but I can't decide which would be the way to go here. Also I'd like to emphasize that I want to stay away as far as possible from manual memory management or nested pointers. Help much appreciated :).

    Read the article

  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables: create_table "friendships", :force => true do |t| t.integer "user1_id" t.integer "user2_id" t.boolean "hasaccepted" t.datetime "created_at" t.datetime "updated_at" end and create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "phone" t.boolean "gender" t.datetime "created_at" t.datetime "updated_at" t.string "firstname" t.string "lastname" t.date "birthday" end I need to show the user a list of Friendrequests, so I use this method in my controller: def getfriendrequests respond_to do |format| case params[:id] when "to_me" @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false }) when "from_me" @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false }) end format.xml { render :xml => @friendrequests } format.json { render :json => @friendrequests } end end I do nearly everything using AJAX, so to fetch the First and Last name of the user with UID user2_id (the to_me param comes later, don't worry right now), I need a for loop which make multiple AJAX calls. This sucks and costs much bandwidth. So I'd rather like that getfriendrequests also returns the First and Last name of the corresponding users, so, e.g. the JSON response would not be: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } } ] but rather: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } } ] I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks

    Read the article

  • Jquery - custom countdown

    - by matthewsteiner
    So I found this countdown at http://davidwalsh.name/jquery-countdown-plugin, I altered it a little bit: jQuery.fn.countDown = function(settings,to) { settings = jQuery.extend({ duration: 1000, startNumber: $(this).text(), endNumber: 0, callBack: function() { } }, settings); return this.each(function() { //where do we start? if(!to && to != settings.endNumber) { to = settings.startNumber; } //set the countdown to the starting number $(this).text(to); //loopage $(this).animate({ 'fontSize': settings.endFontSize },settings.duration,'',function() { if(to > settings.endNumber + 1) { $(this).text(to - 1).countDown(settings,to - 1); } else { settings.callBack(this); } }); }); }; Then I have this code: $(document).ready(function(){ $('.countdown').countDown({ callBack: function(me){ $(me).text('THIS IS THE TEXT'); } }); }); I don't mind taking everything out of the "animate" loop; I'd prefer that since nothing needs to be animated. (I don't need the font size to change). So everything's working to a point. I have a span with class countdown and whatever is in it when the page is refreshed goes down second by second. However, I need it to be formatted in M:S format. So, my two questions: 1) What can I use instead of animate to take care of the loop yet maintain the callback 2) How (where in the code should I) can I play with the time format? Thanks.
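    The time-format half of the question is just arithmetic: split the remaining second count into minutes and seconds. A tiny Python sketch of that logic (it ports directly to JavaScript with Math.floor and the % operator); for the looping half, a plain 1000 ms timer such as setInterval is the usual stand-in for animate when nothing actually needs animating:

        def format_mmss(total_seconds):
            # Render a raw second count as M:SS, e.g. 95 -> "1:35".
            minutes, seconds = divmod(int(total_seconds), 60)
            return "%d:%02d" % (minutes, seconds)

        assert format_mmss(95) == "1:35"
        assert format_mmss(600) == "10:00"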

    Read the article

  • Using multiple sockets, is non-blocking or blocking with select better?

    - by JPhi1618
    Lets say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive because it has other processing to do. Aside from using asynchronous sockets, I see two options: Make all sockets non-blocking. In a loop, call recv on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available and if I happen to get some data, then keep it. Leave the sockets as blocking. Add all sockets to a fd_set and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the appropriate number of readable sockets with FD_ISSET() and only call recv on the readable sockets. The first option will create a lot more calls to the recv function. The second method is a bigger pain from a programming perspective because of all the FD_SET and FD_ISSET looping. Which method (or another method) is preferred? Is avoiding the overhead on letting recv fail on a non-blocking socket worth the hassle of calling select()? I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal. Only knowledgeable replies please!
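    For what it's worth, the select() variant is usually the cleaner of the two: it burns no recv() calls on sockets that have nothing to say, and the FD_SET/FD_ISSET bookkeeping can be wrapped once and forgotten. A minimal sketch of that pattern in Python as a stand-in for the Winsock version; a zero timeout makes the poll non-blocking so the rest of the update loop keeps running:

        import select

        def poll_clients(client_socks, handle, timeout=0.0):
            # One pass of the select()-based approach: only sockets reported as
            # readable get a recv() call, so no call is wasted on an idle socket.
            readable, _, _ = select.select(client_socks, [], [], timeout)
            for sock in readable:
                data = sock.recv(4096)
                if data:
                    handle(sock, data)
                else:                      # an empty read means the peer closed
                    client_socks.remove(sock)
                    sock.close()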

    Read the article

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate and I wrote the simplest Person Entity and I was trying to insert 2000 of them. I know I'm using deprecated methods, I will try to figure out what are the new ones later. First, here is the class Person: @Entity public class Person { private int id; private String name; @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = "person") @TableGenerator(name = "person", table = "sequences", allocationSize = 1) public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Then I wrote a small App class that insert 2000 entities with a loop: public class App { private static AnnotationConfiguration config; public static void insertPerson() { SessionFactory factory = config.buildSessionFactory(); Session session = factory.getCurrentSession(); session.beginTransaction(); Person aPerson = new Person(); aPerson.setName("John"); session.save(aPerson); session.getTransaction().commit(); } public static void main(String[] args) { config = new AnnotationConfiguration(); config.addAnnotatedClass(Person.class); config.configure("hibernate.cfg.xml"); //is the default already new SchemaExport(config).create(true, true); //print and execute for (int i = 0; i < 2000; i++) { insertPerson(); } } } What I get after a while is: Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections Now I know that probably if I put the transaction outside the loop it would work, but mine was a test to see what happens when executing multiple transactions. And since there is only one open at each time, it should work. I tried to add session.close() after the commit, but I got Exception in thread "main" org.hibernate.SessionException: Session was already closed So how to solve the problem?

    Read the article

  • CS 50- Pset 1 Mario Program

    - by boametaphysica
    the problem set asks us to create a half pyramid using hashes. Here is a link to an image of how it should look- I get the idea and have written the program until printing the spaces (which I have replaced by "_" just so that I can test the first half of it. However, when I try to run my program, it doesn't go beyond the do-while loop. In other words, it keeps asking me for the height of the pyramid and does not seem to run the for loop at all. I've tried multiple approaches but this problem seems to persist. Any help would be appreciated! Below is my code- # include <cs50.h> # include <stdio.h> int main(void) { int height; do { printf("Enter the height of the pyramid: "); height = GetInt(); } while (height > 0 || height < 24); for (int rows = 1; rows <= height, rows++) { for (int spaces = height - rows; spaces > 0; spaces--) { printf("_"); } } return 0; } Running this program yields the following output- Enter the height of the pyramid: 11 Enter the height of the pyramid: 1231 Enter the height of the pyramid: aawfaf Retry: 12 Enter the height of the pyramid:
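    Two things in the posted code are worth flagging. The do-while condition height > 0 || height < 24 is true for every integer, so the prompt repeats forever no matter what is typed; repeating only on invalid input means inverting it to something like height < 1 || height > 23. The inner for line also uses a comma where its second semicolon should be (rows <= height, rows++). A sketch of the intended flow in Python (the exact 1 to 23 range and the hash counts are assumptions about the pset's shape):

        def read_height():
            # Re-prompt only while the answer is out of range.
            while True:
                try:
                    height = int(input("Enter the height of the pyramid: "))
                except ValueError:
                    continue
                if 1 <= height <= 23:
                    return height

        def draw_half_pyramid(height):
            for row in range(1, height + 1):
                print(" " * (height - row) + "#" * row)

        if __name__ == "__main__":
            draw_half_pyramid(read_height())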

    Read the article

  • How can I display multiple django modelformset forms in a grouped fieldsets?

    - by JT
    I have a problem with needing to provide multiple model backed forms on the same page. I understand how to do this with single forms, i.e. just create both the forms call them something different then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each 'form' must be rendered together in the appropriate fieldset. For example I want my template to produce something like this: <fieldset> <label for="id_base-0-desc">Home Base Description:</label> <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" /> <label for="id_likes-0-icecream">Want ice cream?</label> <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" /> </fieldset> <fieldset> <label for="id_base-1-desc">Home Base Description:</label> <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" /> <label for="id_likes-1-icecream">Want ice cream?</label> <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" /> </fieldset> I am using a loop like this to process the results (after form validation) base_models = base_formset.save(commit=False) like_models = like_formset.save(commit=False) for base_model, likes_model in map(None, base_models, likes_models): which works as I'd expect (I'm using map because the # of forms can be different). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I layout all the base models together then all the likes models after wards, but it doesn't meet the layout requirements. EDIT: Updated the problem statement to be more clear about what exactly I'm processing (I'm processing models not forms in the for loop)
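    One way to keep the per-fieldset pairing is to zip the two formsets' forms in the view and hand the pairs to the template, mirroring the map(None, ...) already used on the model side. The sketch below is written against a modern Django layout; the model names and field lists are placeholders, not taken from the question:

        from itertools import zip_longest              # izip_longest on Python 2
        from django.forms import modelformset_factory
        from django.shortcuts import render
        from myapp.models import HomeBase, Likes       # hypothetical models

        BaseFormSet = modelformset_factory(HomeBase, fields=['desc'])
        LikesFormSet = modelformset_factory(Likes, fields=['icecream'])

        def edit_bases(request):
            base_formset = BaseFormSet(request.POST or None, prefix='base')
            likes_formset = LikesFormSet(request.POST or None, prefix='likes')
            if request.method == 'POST' and base_formset.is_valid() and likes_formset.is_valid():
                base_formset.save()
                likes_formset.save()
            # Pair form 0 with form 0, form 1 with form 1, padding with None when
            # the formsets differ in length (the same idea as map(None, ...) above).
            paired_forms = zip_longest(base_formset.forms, likes_formset.forms)
            return render(request, 'bases.html', {
                'base_formset': base_formset,           # management forms still rendered once
                'likes_formset': likes_formset,
                'paired_forms': paired_forms,
            })

    In the template, render {{ base_formset.management_form }} and {{ likes_formset.management_form }} once, then loop over paired_forms and emit one fieldset per (base_form, likes_form) pair.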

    Read the article

  • Looping through a SimpleXML object, or turning the whole thing into an array.

    - by Coffee Cup
    I'm trying to work out how to iterate though a returned SimpleXML object. I'm using a toolkit called Tarzan AWS, which connects to Amazon Web Services (SimpleDB, S3, EC2, etc). I'm specifically using SimpleDB. I can put data into the Amazon SimpleDB service, and I can get it back. I just don't know how to handle the SimpleXML object that is returned. The Tarzan AWS documentation says this: Look at the response to navigate through the headers and body of the response. Note that this is an object, not an array, and that the body is a SimpleXML object. Here's a sample of the returned SimpleXML object: [body] = SimpleXMLElement Object ( [QueryWithAttributesResult] = SimpleXMLElement Object ( [Item] = Array ( [0] = SimpleXMLElement Object ( [Name] = message12413344443260 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = john ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is a message. ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334444 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413344443260 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.1 ) ) ) [1] = SimpleXMLElement Object ( [Name] = message12413346907303 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = fred ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is another message ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334690 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413346907303 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.2 ) ) ) ) So what code do I need to get through each of the object items? I'd like to loop through each of them and handle it like a returned mySQL query. For example, I can query SimpleDB and then loop though the SimpleXML so I can display the results on the page. Alternatively, how do you turn the whole shebang into an array? I'm new to SimpleXML, so I apologise if my questions aren't specific enough.
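    The traversal itself is two nested loops: one over the Item elements, one over each Item's Attribute name/value pairs, collecting them into an associative array keyed by attribute name. Sketched below in Python with xml.etree as a stand-in for the PHP foreach (the same shape in the SimpleXML object is $response->body->QueryWithAttributesResult->Item and each Item's Attribute list); the XML literal is an abbreviated version of the dump above:

        import xml.etree.ElementTree as ET

        xml_text = """
        <QueryWithAttributesResult>
          <Item>
            <Name>message12413344443260</Name>
            <Attribute><Name>user</Name><Value>john</Value></Attribute>
            <Attribute><Name>message</Name><Value>This is a message.</Value></Attribute>
          </Item>
        </QueryWithAttributesResult>
        """

        root = ET.fromstring(xml_text.strip())
        rows = []
        for item in root.findall('Item'):              # outer loop: one row per Item
            row = {'Name': item.findtext('Name')}
            for attr in item.findall('Attribute'):     # inner loop: name/value pairs
                row[attr.findtext('Name')] = attr.findtext('Value')
            rows.append(row)

        print(rows)
        # [{'Name': 'message12413344443260', 'user': 'john', 'message': 'This is a message.'}]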

    Read the article

  • How do I break down MySQL query results into categories, each with a specific number of rows?

    - by Mel
    Hello, Problem: I want to list n number of games from each genre (order not important). The following MySQL query resides inside a ColdFusion function. It is meant to list all games under a platform (for example, list all PS3 games; list all Xbox 360 games; etc...). The variable for PlatformID is passed through the URL. I have 9 genres, and I would like to list 10 games from each genre. SELECT games.GameID AS GameID, games.GameReleaseDate AS rDate, titles.TitleName AS tName, titles.TitleShortDescription AS sDesc, genres.GenreName AS gName, platforms.PlatformID, platforms.PlatformName AS pName, platforms.PlatformAbbreviation AS pAbbr FROM (((games join titles on((games.TitleID = titles.TitleID))) join genres on((genres.GenreID = games.GenreID))) join platforms on((platforms.PlatformID = games.PlatformID))) WHERE (games.PlatformID = '#ARGUMENTS.PlatformID#') ORDER BY GenreName ASC, GameReleaseDate DESC Once the query results come back I group them in ColdFusion as follows: <cfoutput query="ListGames" group="gName"> (first loop which lists genres) #ListGames.gName# <cfoutput> (nested loop which lists games) #ListGames.tName# </cfoutput> </cfoutput> The problem is that I only want 10 games from each genre to be listed. If I place a "limit" of 50 in the SQL, I will get ~ 50 games of the same genre (depending on how many games of that genre there are). The second issue is that I don't want the overhead of querying the database for all games when each person will only look at a few. What is the correct way to do this? Many thanks!
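    If the cap is applied in application code, the pattern is simple because the ORDER BY already delivers each genre's rows together: walk the result set grouped by genre and keep only the first ten rows of each group, which is what the grouped cfoutput does minus the cap. Below is a Python stand-in for that ColdFusion loop; on the SQL side, a per-genre limit usually needs either a correlated subquery or, on servers that support them, window functions such as ROW_NUMBER() OVER (PARTITION BY ...):

        from itertools import groupby, islice
        from operator import itemgetter

        def top_n_per_genre(rows, n=10):
            # rows must already be ordered by genre (then release date), exactly as
            # the ORDER BY above returns them; yield at most n rows per genre.
            for _genre, group in groupby(rows, key=itemgetter('gName')):
                yield from islice(group, n)

        # Tiny illustration with made-up rows:
        rows = [{'gName': 'Action', 'tName': 'Game %d' % i} for i in range(15)] \
             + [{'gName': 'RPG', 'tName': 'Game %d' % i} for i in range(4)]
        print(sum(1 for _ in top_n_per_genre(rows)))   # 14: capped at 10 Action plus 4 RPG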

    Read the article

  • Background worker not working right

    - by vbNewbie
    I have created a background worker to go and run a pretty long task that includes creating more threads which will read from a file of urls and crawl each. I tried following it through debugging and found that the background process ends prematurely for no apparent reason. Is there something wrong in the logic of my code that is causing this. I will try and paste as much as possible to make sense. While Not myreader.EndOfData Try currentRow = myreader.ReadFields() Dim currentField As String For Each currentField In currentRow itemCount = itemCount + 1 searchItem = currentField generateSearchFromFile(currentField) processQuerySearch() Next Catch ex As Microsoft.VisualBasic.FileIO.MalformedLineException Console.WriteLine(ex.Message.ToString) End Try End While This first bit of code is the loop to input from file and this is what the background worker does. The next bit of code is where the background worker creates threads to work all the 'landingPages'. After about 10 threads are created the background worker exits this sub and skips the file input loop and exits the program. Try For Each landingPage As String In landingPages pgbar.Timer1.Stop() If VisitedPages.Contains(landingPage) Then Continue For Else Dim thread = New Thread(AddressOf processQuery) count = count + 1 thread.Name = "Worm" & count thread.Start(landingPage) If numThread >= 10 Then For Each thread In ThreadList thread.Join() Next numThread = 0 Continue For Else numThread = numThread + 1 SyncLock ThreadList ThreadList.Add(thread) End SyncLock End If End If Next

    Read the article

  • Diffie-Hellman -- Primitive root mod n -- cryptography question.

    - by somewhat confused
    In the below snippet, please explain starting with the first "for" loop what is happening and why. Why is 0 added, why is 1 added in the second loop. What is going on in the "if" statement under bigi. Finally explain the modPow method. Thank you in advance for meaningful replies. public static boolean isPrimitive(BigInteger m, BigInteger n) { BigInteger bigi, vectorint; Vector<BigInteger> v = new Vector<BigInteger>(m.intValue()); int i; for (i=0;i<m.intValue();i++) v.add(new BigInteger("0")); for (i=1;i<m.intValue();i++) { bigi = new BigInteger("" + i); if (m.gcd(bigi).intValue() == 1) v.setElementAt(new BigInteger("1"), n.modPow(bigi,m).intValue()); } for (i=0;i<m.intValue();i++) { bigi = new BigInteger("" + i); if (m.gcd(bigi).intValue() == 1) { vectorint = v.elementAt(bigi.intValue()); if ( vectorint.intValue() == 0) i = m.intValue() + 1; } } if (i == m.intValue() + 2) return false; else return true; }
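    In outline: the first loop just fills the vector with zeroes. The second loop, for every exponent i that is coprime to m, computes n^i mod m and marks that residue as reachable; n.modPow(bigi, m) is modular exponentiation (square-and-multiply), i.e. n raised to the power i and reduced mod m without ever forming the full power. The third loop then checks that every residue coprime to m was marked; the i = m + 1 assignment is just an early exit, and the final i == m + 2 test detects whether that exit was taken. In other words, the routine answers "do the powers of n reach every unit mod m?", which is the definition of n being a primitive root. The same logic in a short Python sketch, where the three-argument pow(n, i, m) plays the role of modPow:

        from math import gcd

        def is_primitive(m, n):
            # Mark every residue n**i % m for exponents i coprime to m, then check
            # that every unit mod m (every i with gcd(i, m) == 1) was marked.
            hit = [0] * m
            for i in range(1, m):
                if gcd(i, m) == 1:
                    hit[pow(n, i, m)] = 1      # v.setElementAt(1, n.modPow(bigi, m)) in the Java
            return all(hit[i] for i in range(m) if gcd(i, m) == 1)

        print(is_primitive(7, 3))   # True: powers of 3 reach all of 1..6 mod 7
        print(is_primitive(7, 2))   # False: powers of 2 mod 7 only reach {1, 2, 4}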

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (ie. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it. Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, ie. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file? Sub TestBatchUpdate() CurrentDb.Execute "create table testy (x int, y int)" Dim rs As New ADODB.Recordset rs.CursorLocation = adUseServer rs.Open "testy", CurrentProject.AccessConnection, _ adOpenStatic, adLockBatchOptimistic, adCmdTableDirect Dim n, v n = Array(0, 1) v = Array(50, 55) Debug.Print "starting loop", Time For i = 1 To 10000 rs.AddNew n, v Next i Debug.Print "done loop", Time rs.UpdateBatch Debug.Print "done update", Time CurrentDb.Execute "drop table testy" End Sub I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • IList<Item> Collection Class accessing database

    - by Mike
    Hi, I have a database with Users. Users have Items. These Items can change actively. How do you access the items in a collection type format? For the user, I fill all the user properties at the time of instantiation. If I load the user's items at the time of the instantiation, and the items change, they will have old data. I was thinking, maybe I need an ItemCollection class and have that a field/property apart of the user class, that way to traverse all the user's items I could use a foreach loop. So, my question is, what is the best practice/best way of accessing the items from a database using some sort of collection? On accessing the particular Item, it needs to get the latest database information, and when the user does do a foreach loop, the latest item information must be available. I.e. What I'm trying to do Console.WriteLine(User.Items[3].ID); returns 5. //this updates the item information and saves it to the database. User.Items[3].ID = 13; //Add a new item to the database. User.Items.Add(new Item { id = 17}); foreach (Item item in User.Items) { //this would traverse all items in the database. //not some cached copy at the time of instantiation of the user. }

    Read the article

  • Need a code snippet for backward paging...

    - by Ali
    Hi guys, I'm in a bit of a fix here. I know how easy it is to build simple pagination links for dynamic pages whereby you can navigate between partial sets of records from SQL queries. However the situation I have is as below: Consider that I wish to paginate between records listed in a flat file - I have no problem with the retrieval and even the pagination assuming that the flat file is a csv file with the first field as an id and new records on new lines. However I need to make a pagination system which paginates backwards, i.e. I want the LAST entry in the file to appear first, and so forth. Since I don't have the power of SQL to help me here I'm kinda stuck - all I have is a fixed sequence which needs to be paginated. Also note that the id mentioned as the first field is not necessarily numeric, so forget about sorting by numerics here. I basically need a way to loop through the file but backwards and paginate it as such. How can I do that - I'm working in PHP - I just need the code to loop through and paginate, i.e. how to tell which is the offset and which is the current page etc.
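    The only bookkeeping needed is turning the requested page number into an offset measured from the end of the file; the ids never have to be sorted or even looked at. A small Python sketch of that arithmetic (the file name and page size are illustrative); in PHP the same thing falls out of array_slice() plus array_reverse() on the array of lines:

        def page_backwards(records, page, per_page=20):
            # Page 1 is the last per_page records shown newest-first, page 2 the
            # per_page records before those, and so on.
            total = len(records)
            end = max(total - (page - 1) * per_page, 0)    # offset from the end of the file
            start = max(end - per_page, 0)
            page_count = (total + per_page - 1) // per_page
            return list(reversed(records[start:end])), page_count

        with open('data.csv') as fh:                       # hypothetical flat file
            lines = [line.rstrip('\n') for line in fh if line.strip()]

        rows, page_count = page_backwards(lines, page=1)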

    Read the article

  • maintaining continuous count in php

    - by LiveEn
    I have a small problem maintaining a count for the position. I have written a function that will select all the users within a page and position them in order. Eg: Mike Position 1 Steve Position 2.............. .... Jacob Position 30 but the problem I have is that when I move to the second page, the count starts from the beginning again. Eg: Jenny should be number 31 but the list goes, Jenny Position 1 Tanya Position 2....... Below is my function function nrk($duty,$page,$position) { $url="http://www.test.com/people.php?q=$duty&start=$page"; $ch=curl_init(); curl_setopt($ch,CURLOPT_URL,$url); $result=curl_exec($ch); $dom = new DOMDocument(); @$dom->loadHTML($result); $xpath=new DOMXPath($dom); $elements = $xpath->evaluate("//div"); foreach ($elements as $element) { $name = $element->getElementsByTagName("name")->item(0)->nodeValue; $position=$position+1; echo $name." Position:".$position."<br>"; } return $position; } Below is the for loop where I try to loop through the page count for ($page=0;$page<=$pageNumb;$page=$page + 10) { nrk($duty,$page,$position); } I don't want to maintain an array key value in the foreach because I drop certain names...
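    Note that nrk() already takes the running $position and returns the updated value; the count restarts because the for loop throws that return value away. Feeding it back in, $position = nrk($duty, $page, $position);, keeps the numbering continuous across pages. The same carry-the-counter pattern in a tiny Python sketch:

        def number_entries(pages):
            # Keep one running position across every fetched page instead of
            # restarting at 1 for each batch.
            position = 0
            for page in pages:                 # each `page` is one fetched batch of names
                for name in page:
                    position += 1
                    print("%s Position: %d" % (name, position))
            return position

        number_entries([["Mike", "Steve"], ["Jenny", "Tanya"]])
        # Mike Position: 1, Steve Position: 2, Jenny Position: 3, Tanya Position: 4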

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long running python script. The script does some spatial analysis on raster GIS data set using the gdal module. The script currently uses three files, the main script which loops over the raster pixels called find_pixel_pairs.py, a simple cache in lrucache.py and some misc classes in utils.py. I have profiled the code on a moderate sized dataset. pstats returns: p.sort_stats('cumulative').print_stats(20) Thu May 6 19:16:50 2010 phes.profile 355483738 function calls in 11644.421 CPU seconds Ordered by: cumulative time List reduced from 86 to 20 due to restriction <20> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.008 0.008 11644.421 11644.421 <string>:1(<module>) 1 11064.926 11064.926 11644.413 11644.413 find_pixel_pairs.py:49(phes) 340135349 544.143 0.000 572.481 0.000 utils.py:173(extent_iterator) 8831020 18.492 0.000 18.492 0.000 {range} 231922 3.414 0.000 8.128 0.000 utils.py:152(get_block_in_bands) 142739 1.303 0.000 4.173 0.000 utils.py:97(search_extent_rect) 745181 1.936 0.000 2.500 0.000 find_pixel_pairs.py:40(is_no_data) 285478 1.801 0.000 2.271 0.000 utils.py:98(intify) 231922 1.198 0.000 2.013 0.000 utils.py:116(block_to_pixel_extent) 695766 1.990 0.000 1.990 0.000 lrucache.py:42(get) 1213166 1.265 0.000 1.265 0.000 {min} 1031737 1.034 0.000 1.034 0.000 {isinstance} 142740 0.563 0.000 0.909 0.000 utils.py:122(find_block_extent) 463844 0.611 0.000 0.611 0.000 utils.py:112(block_to_pixel_coord) 745274 0.565 0.000 0.565 0.000 {method 'append' of 'list' objects} 285478 0.346 0.000 0.346 0.000 {max} 285480 0.346 0.000 0.346 0.000 utils.py:109(pixel_coord_to_block_coord) 324 0.002 0.000 0.188 0.001 utils.py:27(__init__) 324 0.016 0.000 0.186 0.001 gdal.py:848(ReadAsArray) 1 0.000 0.000 0.160 0.160 utils.py:50(__init__) The top two calls contain the main loop - the entire analyis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
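    The missing time is visible in the table itself: the tottime column for find_pixel_pairs.py:49(phes) is 11064.926 s, which is time spent executing statements in phes() proper rather than in anything it calls, and the cumulative sort hides that. To get per-line numbers inside phes(), a line profiler is the usual next step, for example the third-party line_profiler package (decorate the function with @profile and run the script under kernprof -l -v). A stdlib-only look at the same data, assuming the stats file is named phes.profile as above:

        import pstats

        p = pstats.Stats('phes.profile')
        # 'tottime' is each function's own internal time, excluding callees; sorting
        # by it shows that phes() itself holds roughly 11,065 of the 11,644 seconds.
        p.sort_stats('tottime').print_stats(10)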

    Read the article

  • How do I start up an NSRunLoop, and ensure that it has an NSAutoreleasePool that gets emptied?

    - by Nick Forge
    I have a "sync" task that relies on several "sub-tasks", which include asynchronous network operations, but which all require access to a single NSManagedObjectContext. Due to the threading requirements of NSManagedObjectContexts, I need every one of these sub-tasks to execute on the same thread. Due to the amount of processing being done in some of these tasks, I need them to be on a background thread. At the moment, I'm launching a new thread by doing this in my singleton SyncEngine object's -init method: [self performSelectorInBackground:@selector(initializeSyncThread) withObject:nil]; The -initializeSyncThread method looks like this: - (void)initializeSyncThread { self.syncThread = [NSThread currentThread]; self.managedObjectContext = [(MyAppDelegate *)[UIApplication sharedApplication].delegate createManagedObjectContext]; NSRunLoop *runLoop = [NSRunLoop currentRunLoop]; [runLoop run]; } Is this the correct way to start up the NSRunLoop for this thread? Is there a better way to do it? The run loop only needs to handle 'performSelector' sources, and it (and its thread) should be around for the lifetime of the process. When it comes to setting up an NSAutoreleasePool, should I do this by using Run Loop Observers to create the autorelease pool and drain it after every run-through?

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string. import numpy as np x = np.random.randn(100000) for i in range(100): ",".join(map(str, x)) This loop takes about 20 seconds to complete (total, not each cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder, is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
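    Two commonly suggested alternatives push the per-element float-to-string conversion out of the Python-level map(): let numpy convert the whole array at once with astype(str), or let np.savetxt format the row into a buffer. A small timing harness to compare them on the target machine (note that the three methods do not produce byte-identical digit formatting, which matters if the output must match str() exactly):

        import io
        import timeit
        import numpy as np

        x = np.random.randn(100000)

        def join_map():                      # the original approach
            return ",".join(map(str, x))

        def join_astype():                   # bulk float-to-str conversion inside numpy
            return ",".join(x.astype(str))

        def savetxt_buffer():                # np.savetxt formats the whole row at once
            buf = io.StringIO()
            np.savetxt(buf, x.reshape(1, -1), delimiter=",", fmt="%.18g")
            return buf.getvalue()

        for fn in (join_map, join_astype, savetxt_buffer):
            print(fn.__name__, timeit.timeit(fn, number=10))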

    Read the article

  • Masspay and MySql

    - by Mike
    Hi, I am testing Paypal's masspay using their 'MassPay NVP example' and I having difficulty trying to amend the code so inputs data from my MySql database. Basically I have user table in MySql which contains email address, status of payment (paid,unpaid) and balance. CREATE TABLE `users` ( `user_id` int(10) unsigned NOT NULL auto_increment, `email` varchar(100) collate latin1_general_ci NOT NULL, `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid', `balance` int(10) NOT NULL default '0', PRIMARY KEY (`user_id`) ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci Data : 1 [email protected] paid 100 2 [email protected] unpaid 11 3 [email protected] unpaid 20 4 [email protected] unpaid 1 5 [email protected] unpaid 20 6 [email protected] unpaid 15 I then have created a query which selects users with an unpaid balance of $10 and above : $conn = db_connect(); $query=$conn->query("SELECT * from users WHERE balance >='10' AND status = ('unpaid')"); What I would like to is for each record returned from the query for it to populate the code below: Now the code which I believe I need to amend is as follows: for($i = 0; $i < 3; $i++) { $receiverData = array( 'receiverEmail' => "[email protected]", 'amount' => "example_amount",); $receiversArray[$i] = $receiverData; } However I just can't get it to work, I have tried using mysqli_fetch_array and then replaced "[email protected]" with $row['email'] and "example_amount" with row['balance'] in various methods of coding but it doesn't work. Also I need it to loop to however many rows that were retrieved from the query as <3 in the for loop above. So the end result I am looking for is for the $nvpStr string to pass with something like this: $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15"; Thanks

    Read the article

  • Returned JSON from Twitter and displaying tweets using FlexSlider

    - by Trey Copeland
    After sending a request to the Twitter API using geocode, I'm getting back a JSON response with a list of tweets. I then decode that into a PHP array using json_decode() and use a foreach loop to output what I need. I'm using FlexSlider to show the tweets in a vertical fashion after wrapping them in a list. So what I want is for it to only show 10 tweets at a time and scroll through them infinitely like an escalator. Here's my loop to output the tweets: foreach ($tweets["results"] as $result) { $str = preg_replace('/[^\00-\255]+/u', '', $result["text"]); echo '<ul class="slides">'; echo '<li><a href="http://twitter.com/' . $result["from_user"] . '"><img src=' . $result["profile_image_url"] . '></a>' . $str . '</li><br /><br />'; echo '</ul>'; } My jQuery looks like this as of right now as I'm trying to play around with things: $(window).load(function() { $('.flexslider').flexslider({ slideDirection: "vertical", start: function(slider) { //$('.flexslider .slides > li gt(10)').hide(); }, after: function(slider) { // current.sl } }); }); Non-working demo here - http://macklabmedia.com/tweet/

    Read the article
