Search Results

Search found 1650 results on 66 pages for 'indexes'.

  • How to iteratively generate k elements subsets from a set of size n in java?

    - by Bea Metitiri
    Hi, I'm working on a puzzle that involves analyzing all size-k subsets and figuring out which one is optimal. I wrote a solution that works when the number of subsets is small, but it runs out of memory for larger problems. Now I'm trying to translate an iterative function written in Python to Java, so that I can analyze each subset as it's created and keep only the value that represents how optimal it is, not the entire set, so that I won't run out of memory. Here is what I have so far, and it doesn't seem to finish even for very small problems:

        public static LinkedList<LinkedList<Integer>> getSets(int k, LinkedList<Integer> set) {
            int N = set.size();
            int maxsets = nCr(N, k);
            LinkedList<LinkedList<Integer>> toRet = new LinkedList<LinkedList<Integer>>();
            int remains, thresh;
            LinkedList<Integer> newset;
            for (int i = 0; i < maxsets; i++) {
                remains = k;
                newset = new LinkedList<Integer>();
                for (int val = 1; val <= N; val++) {
                    if (remains == 0) break;
                    thresh = nCr(N - val, remains - 1);
                    if (i < thresh) {
                        newset.add(set.get(val - 1));
                        remains--;
                    } else {
                        i -= thresh;
                    }
                }
                toRet.add(newset);
            }
            return toRet;
        }

    Can anybody help me debug this function or suggest another algorithm for iteratively generating size-k subsets? EDIT: I finally got this function working. I had to create a new variable, initialized to i, for the comparison against thresh, because Python handles for-loop indexes differently (in Java, mutating the loop counter inside the body corrupts the outer loop).
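    For reference, a minimal sketch of the fix the EDIT describes, assuming the same nCr helper as above (and that nCr(0, 0) == 1): decode each index into a subset using a scratch copy of the counter, so the loop variable itself is never modified.

        public static LinkedList<LinkedList<Integer>> getSetsFixed(int k, LinkedList<Integer> set) {
            int N = set.size();
            int maxsets = nCr(N, k);
            LinkedList<LinkedList<Integer>> toRet = new LinkedList<LinkedList<Integer>>();
            for (int i = 0; i < maxsets; i++) {
                int rank = i;                       // scratch copy; the loop counter stays untouched
                int remains = k;
                LinkedList<Integer> newset = new LinkedList<Integer>();
                for (int val = 1; val <= N && remains > 0; val++) {
                    int thresh = nCr(N - val, remains - 1);
                    if (rank < thresh) {
                        newset.add(set.get(val - 1));   // element val belongs to subset number i
                        remains--;
                    } else {
                        rank -= thresh;                 // skip the block of subsets that contain val
                    }
                }
                toRet.add(newset);
            }
            return toRet;
        }

    To actually stay within memory, the body of the outer loop would evaluate newset and discard it instead of accumulating everything in toRet.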

  • "Find all tiles connected to this one" project

    - by Omega
    Remember MS Paint? The bucket tool? If you clicked on a pixel with it, all pixels connected to that pixel that are the same were affected. The theory, I suppose, is to check whether any pixel adjacent to the selected one is the same type as the selected one; if so, check that pixel's neighbors in turn, and so on. I want to implement something similar in VB.NET. Basically I have a 2D array which represents the map. Let's assume there are only two types of tile: 0 and 1. Now, I have pretty much everything ready: I have my 2D map, and I can tell which tile is clicked and which array indexes represent that tile. Now for the "painting" process. Whenever I think about it, I can't figure out a convenient way to execute such an iteration. Can someone help me choose a correct design/way/tip to achieve this?
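    What this describes is the classic flood fill. A minimal sketch, written here in Java for illustration (the queue-based structure ports directly to VB.NET): instead of recursing, keep a queue of tiles still to visit, which avoids stack overflows on large connected regions.

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class FloodFill {
            // Repaint every tile connected to (startRow, startCol) that shares its type.
            public static void fill(int[][] map, int startRow, int startCol, int newType) {
                int oldType = map[startRow][startCol];
                if (oldType == newType) return;        // nothing to do; also prevents an endless loop
                Deque<int[]> queue = new ArrayDeque<int[]>();
                map[startRow][startCol] = newType;     // mark before enqueueing so tiles aren't revisited
                queue.add(new int[] { startRow, startCol });
                int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                while (!queue.isEmpty()) {
                    int[] cell = queue.poll();
                    for (int[] d : dirs) {
                        int r = cell[0] + d[0], c = cell[1] + d[1];
                        if (r >= 0 && r < map.length && c >= 0 && c < map[r].length
                                && map[r][c] == oldType) {
                            map[r][c] = newType;
                            queue.add(new int[] { r, c });
                        }
                    }
                }
            }
        }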

  • Pickled my dictionary from ZODB but I got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs'. So I opened the root dictionary of 'database_1.fs' and pickle.dump'ed it into a text file, then pickle.load'ed it into a dictionary variable; in the end I updated the root dictionary of the other 'database_2.fs' with that dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs'. They are still copies of each other.

        def openstorage(store):
            # opens the database
            data = {}
            data['file'] = FileStorage(store)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):
            # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle
        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)
        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)
        >>> len(root1)
        215
        >>> len(root2)
        0
        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))   # now item is a dictionary
        root2.update(item)
        closestorage(op1)
        closestorage(op2)
        # After I open both of the databases I get the same keys in both,
        # but database_2.fs is smaller than database_1.fs in size, I mean.
        >>> len(root2) == len(root1) == 215   # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.

  • Return value of a JQuery autocomplete using an array of objects as its source

    - by user2920430
    In a jQuery autocomplete which uses an array of objects as its source, can I display the label in the INPUT and later access the value? The default behavior is that the value is displayed in the INPUT after selection. In this case the values represent indexes to unique keys in rows in a table.

        <!doctype html>
        <html lang="en">
        <head>
            <meta charset="utf-8">
            <title>autocomplete demo</title>
            <link rel="stylesheet" href="http://code.jquery.com/ui/1.10.3/themes/smoothness/jquery-ui.css">
            <script src="http://code.jquery.com/jquery-1.9.1.js"></script>
            <script src="http://code.jquery.com/ui/1.10.3/jquery-ui.js"></script>
        </head>
        <body>
            <label for="autocomplete">Select a programming language: </label>
            <input id="autocomplete">
            <script>
            $( "#autocomplete" ).autocomplete({
                source: [
                    { label: "c++", value: 1 },
                    { label: "java", value: 2 },
                    { label: "javascript", value: 3 }
                ]
            });
            </script>
        </body>
        </html>

  • Iterate over defined elements of a JS array

    - by sibidiba
    I'm using a JS array to map IDs to actual elements, i.e. a key-value store. I would like to iterate over all elements. I tried several methods, but all have their caveats:

        for (var item in map) {...}

    This iterates over all properties of the array, so it will also include functions and extensions to Array.prototype. For example, someone dropping in the Prototype library in the future will break existing code.

        var length = map.length;
        for (var i = 0; i < length; i++) {
            var item = map[i];
            ...
        }

    does work, but just like

        $.each(map, function(index, item) {...});

    it iterates over the whole range of indexes 0..max(id), which has horrible drawbacks:

        var x = [];
        x[1] = 1;
        x[10] = 10;
        $.each(x, function(i, v) { console.log(i + ": " + v); });

        0: undefined
        1: 1
        2: undefined
        3: undefined
        4: undefined
        5: undefined
        6: undefined
        7: undefined
        8: undefined
        9: undefined
        10: 10

    Of course my IDs won't resemble a continuous sequence either. Moreover there can be huge gaps between them, so skipping undefined in the latter case is unacceptable for performance reasons. How is it possible to safely iterate over only the defined elements of an array (in a way that works in all browsers and IE)?

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find-as-you-type) and open them for compare/analysis. I don't want it to scan the content, just a filename index for quick lookup. I do this when trying to determine if a class exists in a given tree (we maintain directory trees for each release, which have a lot of files), and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow, and doing 'find . > foo.txt' and then searching that file is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories into Eclipse every time. Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)
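    Absent a ready-made tool, the mechanism itself is small enough to sketch: walk each tree once, keep only the relative paths in memory, and filter that list as the user types. A minimal, hypothetical Java version (class and method names are mine, not from any existing tool):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        // One-time scan of a directory tree into an in-memory filename index,
        // then cheap substring lookups against it (no re-reading of remote volumes).
        public class FileNameIndex {
            private final List<String> paths = new ArrayList<String>();

            public FileNameIndex(File root) {
                scan(root, "");
            }

            private void scan(File dir, String prefix) {
                File[] entries = dir.listFiles();
                if (entries == null) return;           // unreadable directory
                for (File f : entries) {
                    String rel = prefix + f.getName();
                    if (f.isDirectory()) {
                        scan(f, rel + "/");
                    } else {
                        paths.add(rel);
                    }
                }
            }

            // Find-as-you-type: return every indexed path containing the fragment.
            public List<String> find(String fragment) {
                List<String> hits = new ArrayList<String>();
                for (String p : paths) {
                    if (p.contains(fragment)) hits.add(p);
                }
                return hits;
            }
        }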

  • Why would this query cause a Merge Cartesian Join in Oracle

    - by decompiled
    I have a query that was recently required to be modified. Here's the original:

        SELECT RTRIM (position) AS "POSITION",
               ...  -- other fields
        FROM schema.table x
        WHERE hours > 0
          AND pay = 'RGW'
          AND NOT EXISTS (
              SELECT position
              FROM schema.table2 y
              WHERE y.position = x.position
          )

    Here's the new version:

        SELECT RTRIM (position) AS "POSITION",
               ...  -- other fields
        FROM schema.table x
        WHERE hours > 0
          AND pay = 'RGW'
          AND NOT EXISTS (
              SELECT position
              FROM schema.table2 y
              WHERE y.date = get_fiscal_year_start_date (SYSDATE)
                AND y.position = x.position
          )

    The UDF get_fiscal_year_start_date() returns the fiscal year start date of the date parameter. The first query runs fine, but the second creates a merge Cartesian join. I looked at the indexes on the tables and found that position and date were both indexed. My question for you, Stack Overflow, is: why would the addition of 'y.date = get_fiscal_year_start_date (SYSDATE)' cause a merge Cartesian join in Oracle 10g?

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is:

        Open connection to DB
        for record in all_record:
            if record contains certain data:
                if record is NOT in table A:                       # see (1)
                    insert record information into tables A and B  # see (2)
        Close connection to DB

    where (1) is a "select field from table where field=XXX" and (2) is 2 inserts. This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5M will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert, so feel free to point out the obvious. I thought about:

    - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or if this affects performance)
    - Using INSERT X WHERE NOT EXISTS Y
    - LOAD DATA INFILE (but that would require I create a (possibly) large temp file)
    - I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated.

    mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3

  • Update table instantly or “Bulk” Update in database later? And is it advisable?

    - by Mestika
    Hi, I have a question regarding a semi-constant update in a database. In short, it concerns a checkout function on a web page; each time the checkout function is invoked it does five steps. I want to try to optimize this function, and I have my eye on a step where I update a table each time a checkout is performed: I take the information retrieved from the shopping cart and then update the table in question. I do have some indexes on the table; the gain from those is greater than the cost of keeping them, so this is a cost I'm willing to take. Now, my question is: could it, performance-wise, be better to not update the table instantly, but instead collect all the checkout items and save them somewhere (maybe in a file), and then at a specific time (or several times) each day take that file and update the table with the new information? Then I started thinking about whether there is a possibility to use some sort of bulk update that takes a file, hashmap, array (or?) and updates from it. And I'm using IBM DB2 version 9.7. Mestika
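    If the rows do end up buffered in application code, one way to apply them in bulk is a JDBC batch. A minimal sketch, assuming a Java front end and entirely made-up table/column names (checkout_log, item_id, qty) purely for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class CheckoutFlusher {
            // Hypothetical buffered row: one checked-out item and its quantity.
            public static class CheckoutItem {
                final long itemId;
                final int qty;
                CheckoutItem(long itemId, int qty) { this.itemId = itemId; this.qty = qty; }
            }

            // Apply all buffered checkouts in one transaction and one JDBC batch.
            public static void flush(Connection conn, List<CheckoutItem> buffered) throws SQLException {
                conn.setAutoCommit(false);
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE checkout_log SET qty = qty + ? WHERE item_id = ?");
                try {
                    for (CheckoutItem item : buffered) {
                        ps.setInt(1, item.qty);
                        ps.setLong(2, item.itemId);
                        ps.addBatch();              // queue the statement instead of executing it
                    }
                    ps.executeBatch();              // one round trip for the whole batch
                    conn.commit();
                } finally {
                    ps.close();
                }
            }
        }

    Whether this beats per-checkout updates depends on contention and on how stale the table may be, so it is a trade-off to measure rather than a given.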

  • Strange mod_rewrite problem; Website works partially

    - by Camran
    I have Ubuntu 9.10 Server... I need to get mod_rewrite working; the mod_rewrite module IS LOADED. On my server the httpd.conf is empty; instead everything (almost) is in a file called apache2.conf. Anyway, I have also read that I have to change AllowOverride None to AllowOverride All in some file... My httpd.conf is empty as you know, but I have a folder called sites-enabled which contains a 000-default file. This is where I have set:

        AllowOverride All

    Now my goal, as I stated in the last question, is to turn this link:

        http://mydomain.com/ad.php?ad_id=Bmw_nice_M3_497379462

    into this:

        http://mydomain.com/Bmw_nice_M3_497379462

    So, as I got an answer in the last question, I inserted this into the .htaccess file:

        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/ad\.php
        RewriteRule ^(.*)$ ad.php?ad_id=$1 [L]

    Now, this works (not fully) when entering the URL manually in the address bar, but my website isn't working now for some reason. It is like the website is locked down or something, and unless I change AllowOverride back to None it will act like that. Any ideas why? Another note: the links inside the rewritten URL don't work properly (some images are shown, while others are not)...

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and I saw that an update statement we run periodically (every second) was quite slow: it took around 400ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable(
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:

    - Index1: (LrbProcessed, LrbCount)
    - Index2: (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this I thought that the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2; this time the query was very fast. It seems to me that Index2 should be faster to update: the order of the data shouldn't change, since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!

  • Get parameter values from method at run time

    - by Landin Martens
    I have the current method example:

        public void MethodName(string param1, int param2)
        {
            object[] obj = new object[] { (object) param1, (object) param2 };
            // Code that uses this array to invoke dynamic methods
        }

    Is there a dynamic way (I am guessing using reflection) to get the currently executing method's parameter values and place them in an object array? I have read that you can get parameter information using MethodBase and MethodInfo, but those only have information about the parameter, not the value itself, which is what I need. So, for example, if I pass "test" and 1 as method parameters, can I get an object array with two indexes { "test", 1 } without coding for the specific parameters? I would really rather not use a third-party API, but if it comes with source code then I will accept that as an answer, as long as it's not a huge API and there is no simple way to do it without it. I am sure there must be a way, maybe using the stack, who knows. You guys are the experts, and that is why I come here. Thank you in advance, I can't wait to see how this is done. EDIT: It may not be clear, so here is some extra information. This code example is just that, an example to show what I want. It would be too bloated and big to show the actual code where it is needed, but the question is how to get the array without manually creating one. I need to somehow get the values and place them in an array without coding the specific parameters.

  • Z-index vs Accessibility

    - by MetalAdam
    Here's a simplification of my code that I'm having problems with, in regards to layering.

        <ul id="main_menu">
            <li>Option 1
                <ul id="submenu1">
                    <li>link</li>
                    <li>link</li>
                    <li>link</li>
                </ul>
            </li>
            <li>Option 2
                <ul id="submenu2">
                    <li>link</li>
                    <li>link</li>
                    <li>link</li>
                </ul>
            </li>
        </ul>

    My issue is that submenu2 seems to be above Option 1. I have tried to give them appropriate z-indexes, but they don't seem to work... I'm assuming because submenu2 is a child of Option 2, and has no relevance to Option 1. Any idea of any workaround that would help resolve my issue? I'm using large graphics for most of these links, so the overlapping is quite obvious.

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession/Session and with the JDBC batch size. I have removed any generator classes to let MySQL handle the id generation, but it is still performing quite slowly. Each of those tables is represented by a Java class and mapped in Hibernate accordingly. Currently, when it comes time to write the data out, I loop through the objects and do a session.save(Object), or session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the max JDBC batch size (50). Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship all based on two indexes, run_id (int) and line (int). Another direction would be: how do I get Hibernate to do a multi-row insert?
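    For comparison, a minimal sketch of the flush/clear batching pattern described above, assuming hibernate.jdbc.batch_size is set to 50 in the configuration and using one of the entity classes named in the question:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.Transaction;

        public class BatchInserter {
            static final int BATCH_SIZE = 50;   // should match hibernate.jdbc.batch_size

            // Save all lines in one transaction, flushing and clearing every
            // BATCH_SIZE rows so the first-level cache doesn't grow unbounded.
            public static void saveLines(Session session, List<InputLine> lines) {
                Transaction tx = session.beginTransaction();
                int count = 0;
                for (InputLine line : lines) {
                    session.save(line);
                    if (++count % BATCH_SIZE == 0) {
                        session.flush();    // push the queued inserts to JDBC as a batch
                        session.clear();    // detach the saved objects from the session cache
                    }
                }
                tx.commit();
            }
        }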

  • Create an index only on certain rows in mysql

    - by dhruvbird
    So, I have this funny requirement of creating an index on a table only for a certain set of rows. This is what my table looks like:

        USER: userid, friendid, created, blah0, blah1, ..., blahN

    Now, I'd like to create an index on (userid, friendid, created), but only on those rows where userid = friendid. The reason is that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index. Another option would be to create a table (query table) which is populated on insert/update of this table, and create a trigger to do so, but again I am guessing an index on that table would mean that the data would be stored twice. How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index? I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because otherwise I could have created a clustered index on these 3 fields in the query table). I am basically looking for something like this:

        ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid = friendid

  • CodeIgniter subfolders and URI routing

    - by shummel7845
    I've read the manual on URI routing and views, and something is not clicking with me. In my views folder I have a subfolder called products. In there is a file called product_view. In my controller I have:

        function index()
        {
            $data['title'] = 'Product Overview';
            $data['main_content'] = 'products/product_view';
            $this->load->view('templates/main.php', $data);
        }

    The template loads a header view, a footer view and a navigation view, plus the view as a main content variable. In my URI routing I have:

        $route['products/product-overview'] = 'products/product_view';

    This causes a 404 error when I try to go to domain.com/products/product-overview. Do I need to do something with my .htaccess? If so, what? Here is my .htaccess:

        Options +FollowSymLinks
        Options -Indexes
        DirectoryIndex index.php
        RewriteEngine on
        RewriteCond $1 !^(index\.php|resources|images|css|js|robots\.txt|favicon\.ico)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L,QSA]

    I'd appreciate some specific help, as the documentation isn't specific on how to address this. I've done a little searching in the forums and didn't see anything, but I'm posting this while I keep looking.

  • How to create a datastore.Text object out of an array of dynamically created Strings?

    - by Adrogans
    I am creating a Google App Engine server for a project where I receive a large quantity of data via an HTTP POST request. The data is separated into lines, with 200 characters per line. The number of lines can go into the hundreds, so tens of thousands of characters total. What I want to do is concatenate all of those lines into a single Text object, since Strings have a maximum length of 500 characters but the Text object can be as large as 1MB. Here is what I have thought of so far:

        public void doPost(HttpServletRequest req, HttpServletResponse resp) {
            ...
            String[] audioSampleData = new String[numberOfLines];
            for (int i = 0; i < numberOfLines; i++) {
                audioSampleData[i] = req.getReader().readLine();
            }
            com.google.appengine.api.datastore.Text textAudioSampleData =
                new Text(audioSampleData[0] + audioSampleData[1] + ...);
            ...
        }

    But as you can see, I don't know how to do this without knowing the number of lines beforehand. Is there a way for me to iterate through the String indexes within the Text constructor? I can't seem to find anything on that. Of note is that the Text object can't be modified after being created, and it must have a String as the parameter for the constructor. (Documentation here) Is there any way to do this? I need all of the data from the String array in one Text object. Many thanks!
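    A minimal sketch of one common approach, assuming the POST body really is plain lines of text as described: read until the stream is exhausted and accumulate into a StringBuilder, so no line count is needed up front.

        import java.io.BufferedReader;
        import java.io.IOException;
        import javax.servlet.http.HttpServletRequest;
        import com.google.appengine.api.datastore.Text;

        public class BodyReader {
            // Accumulate an unknown number of lines, then build the Text once.
            public static Text readBodyAsText(HttpServletRequest req) throws IOException {
                BufferedReader reader = req.getReader();
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    sb.append(line);   // add sb.append('\n') here if line breaks must survive
                }
                return new Text(sb.toString());
            }
        }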

  • Help optimizing a query with 16 subqueries

    - by Webnet
    I have indexes/primaries on all appropriate ID fields for each type. I'm wondering, though, how I could make this more efficient. It takes a while to load the page with only 15,000 rows, and that'll quickly grow to 500k. The $whereSql variable simply has a few more parameters for the main ebay_archive_listing table. NOTE: This is all done in a single query because I have ASC/DESC sorting for each subquery value. NOTE: I've converted some of the subqueries to INNER JOINs.

        SELECT
            product_master.product_id,
            (
                SELECT COUNT(listing_id)
                FROM ebay_archive_product_listing_assoc '.$listingCountJoin.'
                WHERE ebay_archive_product_listing_assoc.product_id = product_master.product_id
            ) as listing_count,
            sku,
            type_id,
            (
                SELECT AVG(ebay_archive_listing.current_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
            ) as average_bid_price,
            (
                SELECT AVG(ebay_archive_listing.buy_it_now_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.buy_it_now_price > 0
            ) as average_buyout_price,
            (
                SELECT MIN(ebay_archive_listing.current_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
            ) as lowest_bid_price,
            (
                SELECT MAX(ebay_archive_listing.current_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
            ) as highest_bid_price,
            (
                SELECT MIN(ebay_archive_listing.buy_it_now_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
            ) as lowest_buyout_price,
            (
                SELECT MAX(ebay_archive_listing.buy_it_now_price)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
            ) as highest_buyout_price,
            round(((
                SELECT COUNT(ebay_archive_listing.id)
                FROM ebay_archive_listing
                INNER JOIN ebay_archive_product_listing_assoc ON (
                    ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                    AND ebay_archive_product_listing_assoc.product_id = product_master.product_id
                )
                WHERE '.$whereSql.' AND ebay_archive_listing.status_id = 2
            ) / (
                SELECT COUNT(listing_id)
                FROM ebay_archive_product_listing_assoc '.$listingCountJoin.'
                WHERE ebay_archive_product_listing_assoc.product_id = product_master.product_id
            ) * 100), 1) as sold_percent
        FROM product_master '.$joinSql.'
        WHERE product_master.product_id IN (
            SELECT product_id
            FROM ebay_archive_product_listing_assoc
            INNER JOIN ebay_archive_listing ON (
                ebay_archive_listing.id = ebay_archive_product_listing_assoc.listing_id
                AND '.$whereSql.'
            )
        )

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations. Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. P.S.: We do not currently have the budget or expertise for a data warehouse. The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads (sending a lead an email, getting a lead's phone number, listing my leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?

  • Why does this while terminate before receiving a value? (java)

    - by David
    Here's the relevant code snippet.

        public static Territory[] assignTerri (Territory[] board, String[] colors) {
            for (int i = 0; i < board.length; i++) {
                // so a problem is that Territory.translate is void. fix this.
                System.out.print ("What team controls ") ;
                Territory.translate (i) ;
                System.out.println (" ?") ;
                boolean a = false ;
                while (a = false) {
                    String s = getIns () ;
                    if ((checkColor (s, colors))) {
                        board[i].team = (returnIndex (s, colors)) ;
                        a = true ;
                    }
                    else
                        System.out.println ("error try again") ;
                }
                System.out.print ("How many unites are on ") ;
                Territory.translate (i) ;
                System.out.println (" ?") ;
                int n = getInt () ;
                board[i].population = n ;
            }
            return board ;
        }

    As an additional piece of information, checkColor just checks that its first argument, a string, is one of the strings in the indexes of its second argument, an array. It seems to me that the method should get a string from the keyboard, and only when that string checks out should a become true and the while be allowed to terminate. The output I get, though, is this:

        What team controls Alaska ?
        How many unites are on Alaska ?

    (there is space at the end to type in an input) This would seem to suggest that the while terminates before an input is ever typed in, since the first line of text is printed before the while, while the second line of text comes after it, outside of it. Why is this happening?
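    As a side note on the loop condition itself: in Java, = is assignment and == is comparison, and an assignment expression evaluates to the value assigned. A tiny self-contained demo of the difference (hypothetical, just to illustrate the semantics):

        public class WhileDemo {
            public static void main(String[] args) {
                boolean a = false;
                // Assignment, not comparison: (a = false) stores false into a,
                // and the whole expression evaluates to false, so the body never runs.
                while (a = false) {
                    System.out.println("never printed");
                }
                // The idiomatic test: loop until a becomes true.
                int tries = 0;
                while (!a) {
                    a = (++tries == 3);   // stand-in for "the input checked out"
                }
                System.out.println("entered the loop " + tries + " times");
            }
        }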

  • jquery plugin: creation

    - by user1542535
    The output I am expecting is an unordered list, which I am creating with jQuery from input in a JSON file (this works fine when I don't create it as a plugin). I am very new to the concept of building a plugin. I've tried to create one, but it doesn't output my unordered list. JSON file structure:

        {
            "Categories": [
                {
                    "cat_id": "1",
                    "name": "Main Menu1",
                    "sub_categories": [
                        {
                            "cat_id": "10",
                            "name": " Sub Menu11",
                            "sub_level_one_link": "http:\/\/one.com"
                        },

    My JS file:

        // create plugin
        jQuery.fn.emrMenu = function (options) {
            myoptions = jQuery.extend({ url: "error" }, options);
            if (myoptions.url == "error") {
                alert("Error: No data recieved");
                return false;
            }
            $(this).html(myoptions.url);
            return this.each(function () {
                // alert(myoptions.url + this.id);
                $.getJSON(myoptions.url, function (data) {
                    $.each(data.Categories, function (i, category) {
                        alert("test1");
                        // get all sub menu items in list indexes
                        var submenudata = '';
                        $.each(category.sub_categories, function (i, sub_categories) {
                            submenudata += "<li><a href='" + sub_categories.sub_level_one_link + "' <span>" + sub_categories.name + "</span></a></li>";
                        });
                        var menudata = "<li id='" + category.cat_id + "' class='has-sub '><a href='#'><span>" + category.name + "</span></a><ul>" + submenudata + "</ul></li>";
                        // stringify unordered list and bind to div
                        var menu = "<ul>" + menudata + "</ul>";
                        // $(menu).appendTo("#"this.id);
                    });
                });
                // alert(this.id);
            });
        }

    And I am calling the plugin with:

        <script>
        $(document).ready(function() {
            $('#menu_n').emrMenu({ url: "menu_data.json" });
        });
        </script>

    I am pretty confused at this point; any help is greatly appreciated. Cheers!

  • How to set up Mod_WSGI for Python on Ubuntu

    - by AutomatedTester
    Hi, I am trying to set up mod_wsgi on my Ubuntu box. I found steps saying I needed to do the following (from http://ubuntuforums.org/showthread.php?t=833766):

        sudo apt-get install libapache2-mod-wsgi
        sudo a2enmod mod-wsgi
        sudo /etc/init.d/apache2 restart
        sudo gedit /etc/apache2/sites-available/default

    and update the Directory section:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews ExecCGI
            AddHandler cgi-script .cgi
            AddHandler wsgi-script .wsgi
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        sudo /etc/init.d/apache2 restart

    Then I created test.wsgi with:

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Step 2 fails because it says it can't find mod-wsgi, even though apt-get found it. If I carry on with the steps, the Python app just shows as plain text in a browser. Any ideas what I have done wrong? EDIT: Results for questions asked:

        automatedtester@ubuntu:~$ dpkg -l libapache2-mod-wsgi
        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
        ||/ Name                 Version   Description
        +++-====================-=========-============================================
        ii  libapache2-mod-wsgi  2.5-1     Python WSGI adapter module for Apache

        automatedtester@ubuntu:~$ dpkg -s libapache2-mod-wsgi
        Package: libapache2-mod-wsgi
        Status: install ok installed
        Priority: optional
        Section: python
        Installed-Size: 376
        Maintainer: Ubuntu MOTU Developers <[email protected]>
        Architecture: i386
        Source: mod-wsgi
        Version: 2.5-1
        Depends: apache2, apache2.2-common, libc6 (>= 2.4), libpython2.6 (>= 2.6), python (>= 2.5), python (<< 2.7)
        Suggests: apache2-mpm-worker | apache2-mpm-event
        Conffiles:
         /etc/apache2/mods-available/wsgi.load 06d2b4d2c95b28720f324bd650b7cbd6
         /etc/apache2/mods-available/wsgi.conf 408487581dfe024e8475d2fbf993a15c
        Description: Python WSGI adapter module for Apache
         The mod_wsgi adapter is an Apache module that provides a WSGI (Web Server
         Gateway Interface, a standard interface between web server software and web
         applications written in Python) compliant interface for hosting Python based
         web applications within Apache. The adapter provides significantly better
         performance than using existing WSGI adapters for mod_python or CGI.
        Original-Maintainer: Debian Python Modules Team <[email protected]>
        Homepage: http://www.modwsgi.org/

        automatedtester@ubuntu:~$ sudo a2enmod libapache2-mod-wsgi
        ERROR: Module libapache2-mod-wsgi does not exist!
        automatedtester@ubuntu:~$ sudo a2enmod mod-wsgi
        ERROR: Module mod-wsgi does not exist!

    FURTHER EDIT FOR RMYates:

        automatedtester@ubuntu:~$ apache2ctl -t -D DUMP_MODULES
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_worker_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgid_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         python_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline: Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure; I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away... Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf
        # change data_directory to /home/postgres/main
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope. The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc   # strangely, it is not called perldoc
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back. Recreate the structure in PostgreSQL as follows:

    1. pgadmin3 (switch to it)
    2. Click the Execute arbitrary SQL queries icon
    3. Open climate-pg-ddl.sql
    4. Search for TABLE ", replace with TABLE climate." (insert the schema name climate)
    5. Search for on ", replace with on climate." (insert the schema name climate)
    6. Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi. At this point I am stumped.

    - Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
    - How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)?
    - How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources: A fair bit of information was needed to get this far:

    - https://help.ubuntu.com/community/PostgreSQL
    - http://articles.sitepoint.com/article/site-mysql-postgresql-1
    - http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
    - http://pgfoundry.org/frs/shownotes.php?release_id=810
    - http://sqlfairy.sourceforge.net/

    Thank you!

  • Apache2 name based virtual host always redirect 301

    - by Francesco
    I've got a server (running Debian Squeeze) with Apache 2.2, and there are 4 sites running on it. I'm using name-based virtual hosts because I've got a single IP. The initial configuration was made with Webmin, and probably something has been messed up. firstdomain.com is my default domain and is working correctly; seconddomain.com is another site that is working. Now I want to add lastdomain.tk as a new site, so I've made this config file:

        root@webamp:/etc/apache2# cat sites-available/lastdomain.tk.conf
        <VirtualHost *:80>
            DocumentRoot /home/server/Condivisione/RAID/lastdomain.tk
            ServerName www.alazanes.tk
            ServerAlias alazanes.tk
        </VirtualHost>

    I've added it to sites-enabled and restarted Apache. The problem is that if I go to lastdomain.tk (or www.lastdomain.tk) I'm redirected to firstdomain.com with a 301 redirect. Both lastdomain.tk and www.lastdomain.tk are DNS A records pointing to my IP address. The strange thing is that if I change the DocumentRoot of lastdomain.tk to DocumentRoot /home/server/Condivisione/RAID/Sito_SecondDomain, I correctly see the seconddomain.com content without being redirected (lastdomain.tk is shown in the address bar). These are the other configurations I'm using:

        root@webamp:/root# source /etc/apache2/envvars ; /usr/sbin/apache2 -S
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443    webamp.firstdomain.com (/etc/apache2/sites-enabled/ssl.bbteam:1)
        *:80     is a NameVirtualHost
                 default server firstdomain.com (/etc/apache2/sites-enabled/000-default:7)
                 port 80 namevhost firstdomain.com (/etc/apache2/sites-enabled/000-default:7)
                 port 80 namevhost www.lastdomain.tk (/etc/apache2/sites-enabled/lastdomain.tk.conf:1)
                 ## other domains ##
                 port 80 namevhost seconddomain.com (/etc/apache2/sites-enabled/seconddomain.com.conf:1)
        Syntax OK

    The content of the default config file is:

        root@webamp:/etc/apache2# cat sites-available/default
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName firstdomain.com
            ServerAlias www.firstdomain.com direct.firstdomain.com
            DocumentRoot /home/server/Condivisione/RAID/Sito_Web_Apache_su_80
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    The content of the second domain's config file is:

        root@webamp:/etc/apache2# cat sites-available/seconddomain.com.conf
        <VirtualHost *:80>
            DocumentRoot /home/server/Condivisione/RAID/Sito_SecondDomain
            ServerName seconddomain.com
            ServerAlias www.seconddomain.com direct.seconddomain.com
            #redirect 301 / http://www.seconddomain.com/
            <Directory "/home/server/Condivisione/RAID/Sito_SecondDomain">
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    Probably a file permission problem?

        root@webamp:/root# ls -lh /home/server/Condivisione/RAID/
        total 7.1M
        drwxrwxr-x 15 www-data server 4.0K Jun  5 13:29 Sito_SecondDomain
        drwxrwxrwx 23 server   server 4.0K Jun  7 16:22 Sito_Web_Apache_su_80
        drwxrwxr-x 17 www-data server 4.0K Jun  8 09:56 alazanes.tk

    Does someone have an idea of what is happening? Thanks, Francesco
