Search Results

Search found 11543 results on 462 pages for 'partition wise join'.

Page 431/462

  • Strange thread behavior in Perl

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script #!/usr/bin/perl # adapted from prime-pthread, courtesy of Tom Christiansen use strict; use warnings; use threads; use Thread::Queue; sub check_prime { my ($upstream,$cur_prime) = @_; my $child; my $downstream = Thread::Queue->new; while (my $num = $upstream->dequeue) { next unless ($num % $cur_prime); if ($child) { $downstream->enqueue($num); } else { $child = threads->create(\&check_prime, $downstream, $num); if ($child) { print "This is thread ",$child->tid,". Found prime: $num\n"; } else { warn "Sorry. Ran out of threads.\n"; last; } } } if ($child) { $downstream->enqueue(undef); $child->join; } } my $stream = Thread::Queue->new(3..shift,undef); check_prime($stream,2); When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited to the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resultant code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?
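
    A plausible explanation, offered as an assumption rather than a verified diagnosis of this ActiveState build: every ithread reserves its own C stack, and on Win32 the default reservation is taken from the executable header (typically several megabytes), so a 32-bit process runs out of address space after roughly a hundred threads; 118 threads at around 16 MB each is close to 1.9 GB. Shrinking the reservation, which is what the stack_size import does, lets far more threads fit. A sketch of a less extreme value than 1 (sizes are illustrative):

        # hedged sketch: request a small, explicit per-thread stack instead
        # of the executable's default reservation
        use threads (stack_size => 64 * 1024);

        # or per thread, at creation time:
        my $child = threads->create(
            { stack_size => 64 * 1024 },
            \&check_prime, $downstream, $num,
        );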

    Read the article

  • [jQuery] Sort contents alphabetically

    - by James
    So I am appending the following after an AJAX call, and this AJAX call may happen several time, returning several data items. And I am trying to use Tinysort [http://plugins.jquery.com/project/TinySort] to sort the list everytime, so the new items added are integrated nicely and sorted alphabetically. It unfortunately doesn't seem to be working. Any ideas? I mean, the data itself is being correctly appended, but unfortunately the sorting isn't occurring. var artists = []; $.each(data.artists, function(k, v) { artists.push('<section id="artist:' + v.name + '" class="artist"><div class="span-9"><img alt="' + v.name + '" width="34" height="34" class="photo" src="' + v.photo + '" /><strong>' + v.name + '</strong><br/><span>' + v.events + ' upcoming gig'); if (v.events != 1) { artists.push('s'); } artists.push('</span></div><div class="span-2 align-right last">Last</div><div class="clear"></div></section>'); }); $('div.artists p').remove(); $('div.artists div.next').remove(); $('div.artists').append(artists.join('')).append('<div class="next"><a href="#">Next</a></div>'); $('div.artists section').tsort('section[id]', {orderby: 'id'}); Thanks!
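
    I cannot say why the TinySort call itself fails (plugin version and markup details matter), but as a fallback here is a hedged, plugin-free sketch: after each append, pull the <section> elements out, sort them by their id attribute (which begins with the artist name) and put them back in that order, keeping the Next link at the bottom:

        // plugin-free sketch: re-inserting an existing element moves it,
        // so this reorders the DOM in place
        var $wrap = $('div.artists');
        var sorted = $wrap.children('section').get().sort(function (a, b) {
            return a.id < b.id ? -1 : (a.id > b.id ? 1 : 0);
        });
        $.each(sorted, function (i, el) {
            $(el).insertBefore($wrap.children('div.next'));
        });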

    Read the article

  • Multiple-variable "for loop" is not working in my code...

    - by OM The Eternity
    Below is the code I am working with; the last for loop is not working. I think I am doing something wrong with multiple variables in a for loop, though I am aware that it can be done.

        $updaterbk = "SELECT j1. * FROM jos_audittrail j1 LEFT OUTER JOIN jos_audittrail j2 ON ( j1.trackid != j2.trackid AND j1.field != j2.field AND j1.changedone < j2.changedone ) WHERE j1.operation = 'UPDATE' AND j2.id IS NULL ";
        $selectupdrbk = mysql_query($updaterbk);
        while($row1 = mysql_fetch_array($selectupdrbk)) {
            $updrbk[] = $row1;
        }
        foreach($updrbk as $upfield) {
            if(!in_array($upfield['field'],$exist)) {
                $exist[] = $upfield['field'];
                $existval[] = $upfield['oldvalue'];
                //echo $updqueryrbk = "UPDATE `".$upfield['table_name']."` SET ";
            }
        }
        print_r($existval);
        for($i=0;$j=0;$i<count($exist);$j<count($existval);$i++;$j++) {
            $updqueryrbk.= $exist[$i]['field']."=".$existval[$j]['oldvalue'].",";
            $updqueryrbk.="=";
            $updqueryrbk.= $existval[$j]['oldvalue'];
            $updqueryrbk.=",";
        }
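
    PHP's for header takes exactly three semicolon-separated sections, so the six-section header above is a parse error; extra expressions inside a section are separated by commas. Since $exist and $existval are filled in step (field names and old values respectively), one index is enough. A minimal sketch, with quoting kept deliberately naive:

        // minimal sketch: one index drives both arrays; escape/quote the
        // old values properly before using this against a real database
        $n = min(count($exist), count($existval));
        for ($i = 0; $i < $n; $i++) {
            $updqueryrbk .= $exist[$i] . " = '" . $existval[$i] . "', ";
        }
        $updqueryrbk = rtrim($updqueryrbk, ', ');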

    Read the article

  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have a application generating logs in every 5 sec. The logs are in below format. 11:13:49.250,interface,0,RX,0 11:13:49.250,interface,0,TX,0 11:13:49.250,interface,1,close,0 11:13:49.250,interface,4,error,593 11:13:49.250,interface,4,idle,2994215 and so on for other interfaces... I am working to convert these into below CSV format: Time,interface.RX,interface.TX,interface.close.... 11:13:49,0,0,0,.... Simple as of now but the problem is, I have to get the data in CSV format online, i.e as soon the log file updated the CSV should also be updated. What I have tried to read the output and make the header is: #!/usr/bin/perl -w use strict; use File::Tail; my $head=["Time"]; my $pos={}; my $last_pos=0; my $current_event=[]; my $events=[]; my $file = shift; $file = File::Tail->new($file); while(defined($_=$file->read)) { next if $_ =~ some filters; my ($time,$interface,$count,$eve,$value) = split /[,\n]/, $_; my $key = $interface.".".$eve; if (not defined $pos->{$eve_key}) { $last_pos+=1; $pos->{$eve_key}=$last_pos; push @$head,$eve; } print join(",", @$head) . "\n"; } Is there any way to do this using Perl?

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    HI I have a table which holds files and their types such as CREATE TABLE files ( id SERIAL PRIMARY KEY, name VARCHAR(255), filetype VARCHAR(255), ... ); and another table for holding file properties such as CREATE TABLE properties ( id SERIAL PRIMARY KEY, file_id INTEGER CONSTRAINT fk_files REFERENCES files(id), size INTEGER, ... // other property fields ); The file_id field has an index. The file table has around 800k lines, and the properties table around 200k (not all files necessarily have/need a properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but still it seems too much. Here's the query SELECT f.filetype, avg(size), stddev(size) FROM files as f, properties as pr WHERE f.id = pr.file_id GROUP BY f.filetype; and the explain HashAggregate (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1) -> Hash Join (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1) Hash Cond: (f.id = pr.file_id) -> Seq Scan on files f (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1) -> Hash (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1) -> Seq Scan on properties pr (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1) Total runtime: 74014.520 ms Any ideas why it is so slow/how to make it faster?
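
    The plan itself looks reasonable (one pass over each table plus a hash join), but 62 seconds to sequentially scan roughly 800k rows of width 9 is far beyond normal, which points at table bloat or an undersized cache rather than at the query; that is an assumption, not a diagnosis. Two cheap checks before rewriting anything:

        -- hedged sketch: is the files table far larger on disk than its row
        -- count suggests, and are the planner statistics current?
        SELECT pg_size_pretty(pg_relation_size('files'));
        VACUUM ANALYZE files;   -- or VACUUM FULL files; if it is badly bloated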

    Read the article

  • How to end a thread in Perl

    - by user1672190
    I am new to Perl and I have a question about Perl threads. I am trying to create a new thread to check whether the running function has timed out. The logic is: 1. create a new thread; 2. run the main function and see if it has timed out; if so, kill it. Sample code:

        $exit_tread = false;              # a flag to make sure the timeout thread will run
        my $thr_timeout = threads->new( \&timeout );
        # execute main function here
        $exit_thread = true;              # set the flag to true to force the thread to end
        $thr_timeout->join();             # wait for the timeout thread to end

    Code of the timeout function:

        sub timeout {
            $timeout = false;
            my $start_time = time();
            while (!$exit_thread) {
                sleep(1);
                last if (main function is executed);
                if (time() - $start_time >= configured time) {
                    logmsg "process is killed as request timed out";
                    _kill_remote_process();
                    $timeout = true;
                    last;
                }
            }
        }

    The code is now running as I expected, but I am not clear whether $exit_thread = true works only because there is a "last" at the end of the while loop. Can anybody give me an answer? Thanks.
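
    Two notes, with the caveat that the snippet is clearly simplified: Perl has no bareword true/false, and, more to the point of the question, with ithreads every variable is copied into the new thread unless it is declared :shared, so a flag flipped in the main thread after the thread was created is never seen inside it; the loop above only ever stops through its own last. A minimal sketch with a shared flag (the 30-second timeout is illustrative):

        # hedged sketch: the flag must be shared, or the timeout thread only
        # ever sees its own private copy
        use threads;
        use threads::shared;

        my $exit_thread :shared = 0;       # 0/1 instead of bareword true/false

        my $thr = threads->create(sub {
            my $start = time();
            until ($exit_thread) {
                sleep 1;
                if (time() - $start >= 30) {   # illustrative timeout
                    warn "timed out\n";
                    last;
                }
            }
        });

        # ... run the main work here ...
        $exit_thread = 1;
        $thr->join;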

    Read the article

  • Replace beginning words (SQL Server 2005, set-based)

    - by Newbie
    I have the below tables. tblInput Id WordPosition Words -- ----------- ----- 1 1 Hi 1 2 How 1 3 are 1 4 you 2 1 Ok 2 2 This 2 3 is 2 4 me tblReplacement Id ReplacementWords --- ---------------- 1 Hi 2 are 3 Ok 4 This The tblInput holds the list of words while the tblReplacement hold the words that we need to search in the tblInput and if a match is found then we need to replace those. But the problem is that, we need to replace those words if any match is found at the beginning. i.e. in the tblInput, in case of ID 1, the words that will be replaced is only 'Hi' and not 'are' since before 'are', 'How' is there and it is not in the tblReplacement list. in case of Id 2, the words that will be replaced are 'Ok' & 'This'. Since these both words are present in the tblReplacement table and after the first word i.e. 'Ok' is replaced, the second word which is 'This' here comes first in the list of ID category 2 . Since it is available in the tblReplacement, and is the first word now, so this will also be replaced. So the desired output will be Id NewWordsAfterReplacement --- ------------------------ 1 How 1 are 1 you 2 is 2 me My approach so far: ;With Cte1 As( Select t1.Id ,t1.Words ,t2.ReplacementWords From tblInput t1 Cross Join tblReplacement t2) ,Cte2 As( Select Id, NewWordsAfterReplacement = REPLACE(Words,ReplacementWords,'') From Cte1) Select * from Cte2 where NewWordsAfterReplacement <> '' But I am not getting the desired output. It is replacing all the matching words. Urgent help needed*.( SET BASED )* I am using SQL SERVER 2005. Thanks
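
    A hedged set-based sketch for SQL Server 2005: for each Id, find the first WordPosition whose word is not in tblReplacement and keep everything from that position on, which strips exactly the leading run of replaceable words (an Id whose words are all replaceable drops out entirely):

        ;WITH FirstKeep AS (
            SELECT i.Id, MIN(i.WordPosition) AS KeepFrom
            FROM tblInput i
            WHERE NOT EXISTS (SELECT 1 FROM tblReplacement r
                              WHERE r.ReplacementWords = i.Words)
            GROUP BY i.Id
        )
        SELECT i.Id, i.Words AS NewWordsAfterReplacement
        FROM tblInput i
        JOIN FirstKeep f ON f.Id = i.Id
        WHERE i.WordPosition >= f.KeepFrom
        ORDER BY i.Id, i.WordPosition;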

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName = "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join then manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )
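
    Before caching the whole database, it may be worth ruling out the classic N+1 pattern: every access to the EntitySet property fires a separate query against SQL Server CE, which adds up quickly inside a loop. DataLoadOptions (System.Data.Linq) can bring the children along eagerly. A hedged sketch; the context class, connection string and the exact property name (Order vs. Orders) are assumptions based on the example:

        // hedged sketch: eager-load the child rows so iterating customers
        // does not issue one sub-query per customer
        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);   // property name assumed
        using (var db = new MyDataContext(connectionString))
        {
            db.LoadOptions = options;                // must be set before any query runs
            var customers = db.Customers.ToList();   // children are fetched eagerly
        }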

    Read the article

  • Stored procedure or function expects parameter which is not supplied

    - by user2920046
    I am trying to insert data into a SQL Server database by calling a stored procedure, but I am getting the error Procedure or function 'SHOWuser' expects parameter '@userID', which was not supplied. My stored procedure is called "SHOWuser". I have checked it thoroughly and no parameters is missing. My code is: public void SHOWuser(string userName, string password, string emailAddress, List preferences) { SqlConnection dbcon = new SqlConnection(conn); try { SqlCommand cmd = new SqlCommand(); cmd.Connection = dbcon; cmd.CommandType = System.Data.CommandType.StoredProcedure; cmd.CommandText = "SHOWuser"; cmd.Parameters.AddWithValue("@userName", userName); cmd.Parameters.AddWithValue("@password", password); cmd.Parameters.AddWithValue("@emailAddress", emailAddress); dbcon.Open(); int i = Convert.ToInt32(cmd.ExecuteScalar()); cmd.Parameters.Clear(); cmd.CommandText = "tbl_pref"; foreach (int preference in preferences) { cmd.Parameters.Clear(); cmd.Parameters.AddWithValue("@userID", Convert.ToInt32(i)); cmd.Parameters.AddWithValue("@preferenceID", Convert.ToInt32(preference)); cmd.ExecuteNonQuery(); } } catch (Exception) { throw; } finally { dbcon.Close(); } and the stored procedure is: ALTER PROCEDURE [dbo].[SHOWuser] -- Add the parameters for the stored procedure here ( @userName varchar(50), @password nvarchar(50), @emailAddress nvarchar(50) ) AS BEGIN INSERT INTO tbl_user(userName,password,emailAddress) values(@userName,@password,@emailAddress) select tbl_user.userID,tbl_user.userName,tbl_user.password,tbl_user.emailAddress, stuff((select ',' + preferenceName from tbl_pref_master inner join tbl_preferences on tbl_pref_master.preferenceID = tbl_preferences.preferenceID where tbl_preferences.userID=tbl_user.userID FOR XML PATH ('')),1,1,' ' ) AS Preferences from tbl_user SELECT SCOPE_IDENTITY(); END Pls help, Thankx in advance...
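
    Two things stand out, offered as assumptions since the full schema is not shown: the error names a parameter (@userID) that the procedure shown here does not declare, which suggests the SHOWuser deployed in the database differs from this script; and after the first call the command's CommandText is switched to "tbl_pref" (a table name) while CommandType is still StoredProcedure. A hedged sketch that gives the preference rows their own plain-text INSERT command:

        // hedged sketch: insert the preference rows with a separate
        // plain-text command (table/column names follow the procedure above)
        using (var prefCmd = new SqlCommand(
            "INSERT INTO tbl_preferences (userID, preferenceID) VALUES (@userID, @preferenceID)",
            dbcon))
        {
            prefCmd.CommandType = System.Data.CommandType.Text;
            foreach (int preference in preferences)
            {
                prefCmd.Parameters.Clear();
                prefCmd.Parameters.AddWithValue("@userID", i);
                prefCmd.Parameters.AddWithValue("@preferenceID", preference);
                prefCmd.ExecuteNonQuery();
            }
        }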

    Read the article

  • PHP while loop throwing an error over a $var < 10 statement

    - by William
    Okay, so for some reason this is giving me a error as seen here: http://prime.programming-designs.com/test_forum/viewboard.php?board=0 However if I take away the '&& $cur_row < 10' it works fine. Why is the '&& $cur_row < 10' causing me a problem? $sql_result = mysql_query("SELECT post, name, trip, Thread FROM (SELECT MIN(ID) AS min_id, MAX(ID) AS max_id, MAX(Date) AS max_date FROM test_posts GROUP BY Thread ) t_min_max INNER JOIN test_posts ON test_posts.ID = t_min_max.min_id WHERE Board=".$board." ORDER BY max_date DESC", $db); $num_rows = mysql_num_rows($sql_result); $cur_row = 0; while($row = mysql_fetch_row($sql_result) && $cur_row < 10) { $sql_max_post_query = mysql_query("SELECT ID FROM test_posts WHERE Thread=".$row[3].""); $post_num = mysql_num_rows($sql_max_post_query); $post_num--; $cur_row++; echo''.$cur_row.'<br/>'; echo'<div class="postbox"><h4>'.$row[1].'['.$row[2].']</h4><hr />' .$row[0]. '<br /><hr />[<a href="http://prime.programming-designs.com/test_forum/viewthread.php?thread='.$row[3].'">Reply</a>] '.$post_num.' posts omitted.</div>'; }
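
    The culprit is operator precedence rather than the comparison itself: && binds tighter than =, so $row is assigned the boolean result of the whole right-hand side instead of the fetched row. Parenthesizing the assignment fixes it; a minimal sketch:

        // minimal sketch: parenthesize the assignment so $row gets the row,
        // not the true/false of the && expression
        while (($row = mysql_fetch_row($sql_result)) && $cur_row < 10) {
            // ... body unchanged ...
        }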

    Read the article

  • Combining two exe files

    - by Sophia
    Here's some background to my problem: I have a project in Visual C++ 2006 and a project in Visual C++ 2010 Express. Each compiles to its own exe file. I cannot convert my 2006 project to 2010 because I get a lot of "unable to load project" errors. I also cannot port my 2010 project code to 2006 (I always get errors no matter what I try, something to do with libraries). My final solution requires me to have only ONE executable. Is there anything I can do to achieve that? A quick search on Google turned up exe joiners, but I've also heard that those tools are often used to make malware. For reference, I am working with "dummy" clients and therefore want to simplify things on their end as much as possible, so having them run one exe is better than having them run two. I also do not want their antivirus to go haywire because I used some program to join two exes together. What should I do?
    Edit: The two projects do different things. For example, the VS2006 project sets up a server, and the VS2010 project grabs info about the user's OS. The "server" code, I think, has a lot of dependencies and for some reason cannot be converted to Visual C++ 2010. The "grabbing" code requires some newer libraries and compiler options and would not work if I ported it to 2006.

    Read the article

  • Refactoring: your way to reduce the code complexity of a big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain:

        class OldClass
        {
            method1(arg1, arg2) { ... 200 lines of code ... }
            method2(arg1)       { ... 200 lines of code ... }
            ...
            method20(arg1, arg2, arg3) { ... 200 lines of code ... }
        }

    The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to mind:
    1. Add several private helper methods per method and group them in a #region (straightforward refactoring).
    2. Use the Command pattern (one command class per OldClass method, in a separate file).
    3. Create one helper static class per method, with one public method and several private helper methods; the OldClass methods delegate their implementation to the appropriate static class (very similar to commands).
    Thank you in advance!

    Read the article

  • Visual Studio warning "Content is not allowed" in ASP.NET project

    - by pstar
    Hi, I am just started working as a programmer last month, so there will be plenty of newbie question come from me, stay tuned... I am now working on modify the provided template (from DevExpress) to create new web form using ASP.NET 2.0 on Visual Studio 2008. While the functionality of that web form is there, I am in the process of get rid of ninety something warning message, most of them come from the provided template. One of them puzzled me for a while is this one: "Warning 75 Content is not allowed between the opening and closing tags for element 'ClientSideEvents'." And here is the code: <dxe:ASPxListBox id="edtMultiResource" runat="server" width="100%" SelectionMode="CheckColumn" DataSource='<%# ResourceDataSource %>' Border-BorderWidth="0"> <ClientSideEvents SelectedIndexChanged="function(s, e) { var resourceNames = new Array(); var items = s.GetSelectedItems(); var count = items.length; if (count > 0) { for(var i=0; i<count; i++) _aspxArrayPush(resourceNames, items[i].text); } else _aspxArrayPush(resourceNames, ddResource.cp_Caption_ResourceNone); ddResource.SetValue(resourceNames.join(', ')); }"></ClientSideEvents> </dxe:ASPxListBox> I couldn't see anything wrong with the code myself, so please help me out here.

    Read the article

  • Further filter SQL results

    - by eric
    I've got a query that returns a proper result set, using SQL 2005. It is as follows: select case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4' then '2009 Q2' else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) end as [Quarter], bugtypes.bugtypename, count(bug.bugid) as [Total] from bug left outer join bugtypes on bug.crntbugtypeid = bugtypes.bugtypeid and bug.projectid = bugtypes.projectid where (bug.projectid = 44 and bug.currentowner in (-1000000031,-1000000045) and bug.crntplatformid in (42,37,25,14)) or (bug.projectid = 44 and bug.currentowner in (select memberid from groupmembers where projectid = 44 and groupid in (87,88)) and bug.crntplatformid in (42,37,25,14)) group by case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4' then '2009 Q2' else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) end, bugtypes.bugtypename order by 1,3 desc It produces a nicely grouped list of years and quarters, an associated descriptor, and a count of incidents in descending count order. What I'd like to do is further filter this so it shows only the 10 most submitted incidents per quarter. What I'm struggling with is how to take this result set and achieve that.
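
    On SQL Server 2005 the usual tool for "top N per group" is ROW_NUMBER() partitioned by the group. A hedged sketch that wraps the existing statement as a derived table (drop its ORDER BY; column names follow the aliases above):

        ;WITH Ranked AS (
            SELECT q.[Quarter], q.bugtypename, q.Total,
                   ROW_NUMBER() OVER (PARTITION BY q.[Quarter]
                                      ORDER BY q.Total DESC) AS rn
            FROM (
                /* the existing SELECT ... GROUP BY query, minus its ORDER BY */
            ) AS q
        )
        SELECT [Quarter], bugtypename, Total
        FROM Ranked
        WHERE rn <= 10
        ORDER BY [Quarter], Total DESC;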

    Read the article

  • Is it possible to have a MySQL table accept a null value for a primary_key column referencing a different table?

    - by Dr.Dredel
    I have a table that has a column which holds the id of a row in another table. However, when table A is being populated, table B may or may not have a row ready for table A. My question is, is it possible to have mysql prevent an invalid value from being entered but be ok with a NULL? or does a foreign key necessitate a valid related value? So... what I'm looking for (in pseudo code) is this: Table "person" id | name Table "people" id | group_name | person_id (foreign key id from table person) insert into person (1, 'joe'); insert into people (1, 'foo', 1)//kosher insert into people (1, 'foo', NULL)//also kosher insert into people(1, 'foo', 7)// should fail since there is no id 7 in the person table. The reason I need this is that I'm having a chicken and egg issue where it makes perfect sense for the rows in the people table to be created before hand (in this example, I'm creating the groups and would like them to pre-exist the people who join them). And I realize that THIS example is silly and I would just put the group id in the person table rather than vice-versa, but in my real-world problem that is not workable. Just curious if I need to allow any and all values in order to make this work, or if there's some way to allow for null.
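
    With InnoDB this works exactly as hoped: a nullable foreign key column accepts NULL, and any non-NULL value must exist in the parent table (MyISAM silently ignores foreign key constraints, which is worth checking). A sketch of the pseudo-code above, with the ids made unique so the inserts actually run:

        -- hedged sketch: nullable FK column, enforced by InnoDB
        CREATE TABLE person (
            id   INT PRIMARY KEY,
            name VARCHAR(50)
        ) ENGINE=InnoDB;

        CREATE TABLE people (
            id         INT PRIMARY KEY,
            group_name VARCHAR(50),
            person_id  INT NULL,
            FOREIGN KEY (person_id) REFERENCES person(id)
        ) ENGINE=InnoDB;

        INSERT INTO person VALUES (1, 'joe');
        INSERT INTO people VALUES (1, 'foo', 1);     -- ok
        INSERT INTO people VALUES (2, 'foo', NULL);  -- ok
        INSERT INTO people VALUES (3, 'foo', 7);     -- fails: foreign key constraint, no person 7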

    Read the article

  • Using before_create in Rails to normalize a many to many table

    - by weotch
    I am working on a pretty standard tagging implementation for a table of recipes. There is a many to many relationship between recipes and tags so the tags table will be normalized. Here are my models: class Recipe < ActiveRecord::Base has_many :tag_joins, :as => :parent has_many :tags, :through => :tag_joins end class TagJoin < ActiveRecord::Base belongs_to :parent, :polymorphic => true belongs_to :tag, :counter_cache => :usage_count end class Tag < ActiveRecord::Base has_many :tag_joins, :as => :parent has_many :recipes, :through => :tag_joins, :source => :parent , :source_type => 'Recipe' before_create :normalizeTable def normalizeTable t = Tag.find_by_name(self.name) if (t) j = TagJoin.new j.parent_type = self.tag_joins.parent_type j.parent_id = self.tag_joins.parent_id j.tag_id = t.id return false end end end The last bit, the before_create callback, is what I'm trying to get working. My goal is if there is an attempt to create a new tag with the same name as one already in the table, only a single row in the join table is produced, using the existing row in tags. Currently the code dies with: undefined method `parent_type' for #<Class:0x102f5ce38> Any suggestions?

    Read the article

  • Hibernate CreateSQL Query Problem

    - by Shaded
    Hello All I'm trying to use hibernates built in createsql function but it seems that it doesn't like the following query. List =hibernateSession.createSQLQuery("SELECT number, location FROM table WHERE other_number IN (SELECT f.number FROM table2 AS f JOIN table3 AS g on f.number = g.number WHERE g.other_number = " + var + ") ORDER BY number").addEntity(Table.class).list(); I have a feeling it's from the nested select statement, but I'm not sure. The inner select is used elsewhere in the code and it returns results fine. This is my mapping for the first table: <hibernate-mapping> <class name="org.efs.openreports.objects.Table" table="table"> <id name="id" column="other_number" type="java.lang.Integer"> <generator class="native"/> </id> <property name="number" column="number" not-null="true" unique="true"/> <property name="location" column="location" not-null="true" unique="true"/> </class> </hibernate-mapping> And the .java public class Table implements Serializable { private Integer id;//panel_facility private Integer number; private String location; public Table() { } public void setId(Integer id) { this.id = id; } public Integer getId() { return id; } public void setNumber(Integer number) { this.number = number; } public Integer number() { return number; } public String location() { return location; } public void setLocation(String location) { this.location = location; } } Any suggestions? Edit (Added mapping)

    Read the article

  • How to add an object to an HTML string?

    - by Philippe Maes
    I'm trying to load several images by a drop action and than resizing them and adding them as a thumbnail. The resize part is very important because the images can be very large and I want to make the thumbnails small of size. Here is my code: loadingGif(drop); for (var i=0;i<files.length;i++) { var file = files[i]; var reader = new FileReader(); reader.onload = function(e) { var src = e.target.result; var img = document.createElement('img'); img.src = src; var scale = 100/img.height; img.height = 100; img.width = scale*img.width; output.push('<div id="imagecontainer"><div id="image">'+img+'</div><div id="delimage"><img src="img/del.jpg"" /></div></div>'); if(output.length == files.length) { drop.removeChild(drop.lastChild); drop.innerHTML += output.join(''); output.length = 0; } } reader.readAsDataURL(file); } As you can probably tell I insert a loading gif image in my dropzone until all files are loaded (output.length == files.length). When a file is loaded I add html to an array which I will print if the load is complete. The problem is I can't seem to add the img object (I need this object to resize the image) to the html string, which seems obvious as the img is an object... So my question to you guys is: how do I do this? :)
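
    Two likely issues, stated as assumptions since the surrounding markup is not shown: concatenating a DOM element into a string produces "[object HTMLImageElement]" rather than markup, and width/height are 0 until the image has actually loaded, so the scaling happens too early. A sketch that builds the thumbnail markup as text inside the image's onload (the repeated id attributes are also turned into classes here):

        // hedged sketch: wait for the load, compute the scaled width, then
        // emit plain markup instead of concatenating the element object
        var img = new Image();
        img.onload = function () {
            var scale = 100 / img.height;
            var w = Math.round(img.width * scale);
            output.push('<div class="imagecontainer"><div class="image">' +
                        '<img src="' + src + '" width="' + w + '" height="100" />' +
                        '</div><div class="delimage"><img src="img/del.jpg" /></div></div>');
            if (output.length === files.length) {
                drop.removeChild(drop.lastChild);
                drop.innerHTML += output.join('');
                output.length = 0;
            }
        };
        img.src = src;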

    Read the article

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered to be a multi-tenant environment. By this I mean that each user of the application gets their own 'custom' environment, with absolutely no interaction between those users. So far I have built the web application as a 'single user' environment. In other words, I haven't actually done anything to support multi-users, but only worked on the functionality I want from the app. Here is my problem... What's the best way to build a multi-user environment: All users point to the same 'core' backend. In other words, I build the logic to separate users via appropriate SQL queries (eg. select * from table where user='123' and attribute='456'). Each user points to a unique tablespace, which is built separately as they join the system. In this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user. (eg. now a query would look like 'select * from table_ where attribute ='456'). In short, it's a difference between "select * from table where USER=" and "select * from table_USER".

    Read the article

  • splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It sets an array that is joined and set the cookie value. if its clicked again then its removed from the array. It somehow buggy. It only splices after clicking on other elements. and then it behaves weird. It might be that splice is not the correct method Thanks much. var navLinkToOpen; var setNavCookie = function(value){ var isSet = false; var checkCookies = checkNavCookie() setCookieHelper = checkCookies? checkCookies.split(","): []; for(i in setCookieHelper){ if(value == setCookieHelper[i]){ setCookieHelper.splice(value,1); isSet = true; } } if(!isSet){setCookieHelper.push(value)} setCookieHelper.join(",") document.cookie = "navLinkToOpen"+"="+setCookieHelper; } var checkNavCookie = function(){ var allCookies = document.cookie.split( ';' ); for (i = 0; i < allCookies.length; i++ ){ temp = allCookies[i].split("=") if(temp[0].match("navLinkToOpen")){ var getValue = temp[1] } } return getValue || false } $(document).ready(function() { $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide() $(".navHeader").click(function(){ $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300); setNavCookie($('.navHeader').index($(this))) return false }) var testCookies = checkNavCookie(); if(testCookies){ finalArrayValue = testCookies.split(",") for(i in finalArrayValue){ $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle (0); } } });
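
    The main problem is that splice() takes an index as its first argument, not the value to remove, so splice(value, 1) only appears to work when the value happens to equal its own position. A minimal sketch of the removal branch (older IE lacks Array.indexOf, in which case a plain for loop over the array does the same job):

        // minimal sketch: find the entry's index first, then splice it out;
        // cookie values come back as strings, hence the String() cast
        var idx = setCookieHelper.indexOf(String(value));
        if (idx !== -1) {
            setCookieHelper.splice(idx, 1);
            isSet = true;
        }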

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have two tables, "products" and "pieces":

        PRODUCTS: idProd | product | price
        PIECES:   id | idProdMain | idProdChild | quant

    idProdMain and idProdChild are both related to the "products" table. One product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. Example:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer        - 300€
        2 - Hard Disk       - 100€
        3 - Memory          -  50€
        4 - Main Board      - 100€
        5 - Software        -  50€
        6 - CDroms 100 un.  -  30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED? I need to update the price of the main product when the price of a child product (piece) changes. Following the example above, if I change the price of "Memory" (which is also a piece) to 60€, then the price of "Computer" must change to 320€. How can I do this with queries? I have already tried the following to obtain the price of the main product, but it does not run; the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
             ON (products.idProd = pieces.idProdChild)
             AND (products.idProd = pieces.idProdChild)
             AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table keeps track of the compound products, i.e. which products are their children. An example of a compound product is a computer, which is composed of other products (motherboard, hard disk, memory, CPU, etc.).
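
    A hedged sketch of the price lookup for one main product: join pieces to products on the child id only and filter on the main id; the query above joins on both idProdChild and idProdMain, which forces child = main and matches nothing. With the sample data this returns 300 for the computer, or 320 once Memory costs 60€:

        -- hedged sketch (Access/Jet SQL); 1 is the idProd of the main product
        SELECT Sum(products.price * pieces.quant) AS MainPrice
        FROM pieces INNER JOIN products
             ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1;

    Pushing the result back with an UPDATE is a separate step; Access tends to reject aggregates inside UPDATE, so a saved query or DSum() is the usual workaround (that last point is an assumption worth verifying on your version).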

    Read the article

  • Passing values to a method

    - by Kasun
    I beginner for programming. So can you please show me how to pass values for your compile() method. class CL { private const string clexe = @"cl.exe"; private const string exe = "Test.exe", file = "test.cpp"; private string args; public CL(String[] args) { this.args = String.Join(" ", args); this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file; } public Boolean Compile(String content, ref string errors) { //remove any old copies if (File.Exists(exe)) File.Delete(exe); if (File.Exists(file)) File.Delete(file); File.WriteAllText(file, content); Process proc = new Process(); proc.StartInfo.UseShellExecute = false; proc.StartInfo.RedirectStandardOutput = true; proc.StartInfo.RedirectStandardError = true; proc.StartInfo.FileName = clexe; proc.StartInfo.Arguments = this.args; proc.StartInfo.CreateNoWindow = true; proc.Start(); //errors += proc.StandardError.ReadToEnd(); errors += proc.StandardOutput.ReadToEnd(); proc.WaitForExit(); bool success = File.Exists(exe); return success; } }
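
    A minimal usage sketch for the class above; the compiler switches and the source text are illustrative, and cl.exe is assumed to be on the PATH of the process:

        // hypothetical calling code for the CL class above
        var cl = new CL(new string[] { "/nologo", "/EHsc" });
        string errors = "";
        string source = "#include <iostream>\n" +
                        "int main() { std::cout << \"hello\"; return 0; }\n";
        bool ok = cl.Compile(source, ref errors);
        Console.WriteLine(ok ? "Built Test.exe" : "Build failed:\n" + errors);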

    Read the article

  • Unknown reason for code executing the way it does in Python

    - by Jasper
    Hi, I am a beginner programmer using Python on a Mac. I created a function, as part of a game, which receives the player's input for the main character's name. The code is:

        import time

        def newGameStep2():
            print ' ****************************************** '
            print '\nStep2\t\t\t\tCharacter Name'
            print '\nChoose a name for your character. This cannot\n be changed during the game. Note that there\n are limitations upon the name.'
            print '\nLimitations:\n\tYou cannot use:\n\tCommander\n\tLieutenant\n\tMajor\n\t\tas these are reserved.\n All unusual capitalisations will be removed.\n There is a two-word-limit on names.'
            newStep2Choice = raw_input('>>>')
            newStep2Choice = newStep2Choice.lower()
            if 'commander' in newStep2Choice or 'lieutenant' in newStep2Choice or 'major' in newStep2Choice:
                print 'You cannot use the terms \'commander\', \'lieutenant\' or \'major\' in the name. They are reserved.\n'
                print
                time.sleep(2)
                newGameStep2()
            else:
                newStep2Choice = newStep2Choice.split(' ')
                newStep2Choice = [newStep2Choice[0].capitalize(), newStep2Choice[1].capitalize()]
                newStep2Choice = ' ' .join(newStep2Choice)
                return newStep2Choice

        myVar = newGameStep2()
        print myVar

    When I was testing, I entered 'major a', and when it asked me to input another name, I entered 'a b'. However, the function returned 'major a'. I stepped through this with a debugger, yet I still can't find where the problem occurs. Thanks for any help, Jasper
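
    The likely culprit is that the retry's result is discarded: the recursive newGameStep2() call runs, but its return value is never returned by the outer call, so the outer frame's own result is what the caller ends up seeing. A minimal sketch of the reserved-name branch, in the original's Python 2 style:

        # minimal sketch: propagate the retry's return value
        if 'commander' in newStep2Choice or 'lieutenant' in newStep2Choice \
                or 'major' in newStep2Choice:
            print 'Reserved terms cannot be used in the name.'
            time.sleep(2)
            return newGameStep2()   # return, not just call

    As a side note, the capitalisation step assumes a two-word name; a single-word name will raise an IndexError on newStep2Choice[1].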

    Read the article

  • Multithreading a for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU intensive batch process I'm running. Essentially I'm trying to condense multiple single page tiffs into single PDF documents. This works fine with a foreach loop or standard iteration but can be very slow for several 100 page documents. I tried the following based on a some examples I found to use multithreading and it has significant performance improvements however it obliterates the page order instead of 1,2,3,4 it will be 1,3,4,2,6,5 on what thread completes first. My question is how would I utilize this technique while maintaining the page order and if I can will it negate the performance benefit of the multithreading? Thank you in advance. PdfDocument doc = new PdfDocument(); string mail = textBox1.Text; string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None); int counter = split.Count(); // Source must be array or IList. var source = Enumerable.Range(0, 100000).ToArray(); // Partition the entire source array. var rangePartitioner = Partitioner.Create(0, counter); double[] results = new double[counter]; // Loop over the partitions in parallel. Parallel.ForEach(rangePartitioner, (range, loopState) => { // Loop over each range element without a delegate invocation. for (int i = range.Item1; i < range.Item2; i++) { f_prime = split[i].Replace(" " , ""); PdfPage page = doc.AddPage(); XGraphics gfx = XGraphics.FromPdfPage(page); XImage image = XImage.FromFile(f_prime); double x = 0; gfx.DrawImage(image, x, 0); } });
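
    The order gets lost because whichever thread finishes first calls doc.AddPage() first. It comes back if each parallel iteration only writes its result into slot i of an array and a plain sequential loop then assembles the pages in index order; only that sequential loop touches the PdfDocument. A hedged sketch; whether XImage.FromFile may be called concurrently in this PDFsharp version is an assumption, and if it may not, that call belongs in the sequential loop as well:

        // hedged sketch: parallel work is index-addressed, page assembly is
        // sequential, so page i always corresponds to split[i]
        var images = new XImage[counter];
        Parallel.For(0, counter, i =>
        {
            string path = split[i].Replace(" ", "");
            images[i] = XImage.FromFile(path);        // independent per-page work
        });
        for (int i = 0; i < counter; i++)             // order preserved here
        {
            PdfPage page = doc.AddPage();
            XGraphics gfx = XGraphics.FromPdfPage(page);
            gfx.DrawImage(images[i], 0, 0);
        }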

    Read the article

  • Building a calendar navigation in Rails (controller and view links)

    - by user532339
    Trying to get the next month when clicking the link_to. I've done the following in the view. <%= form_tag rota_days_path, :method => 'get' do %> <p> <%= hidden_field_tag(:next_month, @t1) %> <%= link_to 'Next Month', rota_days_path(:next_month => @next_month)%> </p> <% end %> class RotaDaysController < ApplicationController # GET /rota_days # GET /rota_days.json # load_and_authorize_resource respond_to :json, :html def index @rota_days = RotaDay.all @hospitals = Hospital.all @t1 = Date.today.at_beginning_of_month @t2 = Date.today.end_of_month @dates = (@t1..@t2) #Concat variable t1 + t2 together # @next_month = Date.today + 1.month(params[: ??? ] #Old if params[:next_month] # @next_month = Date.today >> 1 @next_month = params[:next_month] + 1.month @t1 = @next_month.at_beginning_of_month @t2 = @next_month.end_of_month @dates = (@t1..@t2) end @title = "Rota" respond_to do |format| format.html # index.html.erb format.json { render json: @rota_days } end end I have identified that the reason why this may not be working is in because of the following in my controller @next_month = params[:next_month] + 1.month the last two called methods is defined only on time/date objects. but not on fixnum/string objects. I understand I am missing something from this Update I have found that the actual issue is that the `params[:next_month] is a string and I am trying to add a date to to it. Which means I need to convert the string to a date/time object. Console output: Started GET "/rota_days" for 127.0.0.1 at 2012-12-14 22:14:36 +0000 Processing by RotaDaysController#index as HTML User Load (0.0ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 1 LIMIT 1 RotaDay Load (0.0ms) SELECT `rota_days`.* FROM `rota_days` Hospital Load (1.0ms) SELECT `hospitals`.* FROM `hospitals` Rendered rota_days/index.html.erb within layouts/application (23.0ms) Role Load (0.0ms) SELECT `roles`.* FROM `roles` INNER JOIN `roles_users` ON `roles`.`id` = `roles_users`.`role_id` WHERE `roles_users`.`user_id` = 1 AND `roles`.`name` = 'Administrator' LIMIT 1 Completed 200 OK in 42ms (Views: 39.0ms | ActiveRecord: 1.0ms)
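
    Request parameters always arrive as strings, so params[:next_month] + 1.month is string-plus-something and fails; the value has to be parsed into a Date first. A hedged sketch for the controller, assuming the hidden field and link pass a parseable date string:

        # hedged sketch: turn the incoming string into a Date before the
        # date arithmetic (Date.parse raises ArgumentError on junk input)
        if params[:next_month].present?
          @next_month = Date.parse(params[:next_month]) + 1.month
          @t1 = @next_month.at_beginning_of_month
          @t2 = @next_month.end_of_month
          @dates = (@t1..@t2)
        end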

    Read the article
