Search Results

Search found 2560 results on 103 pages for 'revision counter'.


  • Hbase schema design -- to make sorting easy?

    - by chen
    I have 1M words in my dictionary. Whenever a user issues a query on my website, I check whether the query contains any of those words and increment the counter for each one individually. For example, if a user types in "Obama is a president" and both "Obama" and "president" are in my dictionary, I should increment the counters for "Obama" and "president" by 1. From time to time, I want to see the top 100 words (the most queried words). If I use HBase to store the counters, what schema should I use? I have not come up with an efficient one yet. If I use the dictionary word as the row key and "counter" as the column key, then updating (incrementing) a counter is very efficient, but it's very hard to sort and return the top 100. Can anyone give good advice? Thanks.

    Read the article
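
    A minimal sketch (not from the question above) of the usual two-part pattern: keep per-word counters for cheap increments, and compute the top 100 with a periodic scan rather than trying to keep the counter table sorted. Written in Python with an in-memory dict standing in for the HBase counter table; all names are illustrative only.

        import heapq

        # counters: word -> count; in HBase this would be one row per word with a counter column
        counters = {}

        def record_query(query, dictionary):
            # increment the counter for every dictionary word found in the query
            for word in query.split():
                if word in dictionary:
                    counters[word] = counters.get(word, 0) + 1

        def top_words(n=100):
            # periodic job: scan all counters and keep only the n largest
            return heapq.nlargest(n, counters.items(), key=lambda kv: kv[1])

        dictionary = {"Obama", "president"}
        record_query("Obama is a president", dictionary)
        print(top_words(100))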

  • TortoiseSVN import

    - by user481913
    My developer gave me a compressed file of the whole project from his Subversion working copy. I uncompressed that file and used TortoiseSVN import to put it into my repository, which I host with ProjectLocker (an SVN hosting provider). What I want to know is: 1) I want to view the files and code that have changed since the last revision. After doing the import, the latest revision is 8. I want to see what files and code changed between revision 8 and revision 7, and ideally be able to compare all the way back to revision 1. Can I do this? 2) How do I view these changes in ProjectLocker (or is the process similar with every SVN host provider)?

    Read the article
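
    Assuming a standard Subversion command-line client is available, the stock commands below show what changed between two revisions of a repository; they work against any SVN host, including ProjectLocker. The repository URL is a placeholder.

        # list the files changed between revision 7 and revision 8
        svn diff -r 7:8 --summarize https://example.projectlocker.com/svn/myrepo/trunk

        # show the full textual diff between the two revisions
        svn diff -r 7:8 https://example.projectlocker.com/svn/myrepo/trunk

        # per-revision history, with the files touched by each commit
        svn log -v -r 1:8 https://example.projectlocker.com/svn/myrepo/trunk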

  • How can I print N array elements with delimiters per line?

    - by Mark B
    I have an array in Perl I want to print with space delimiters between each element, except every 10th element which should be newline delimited. There aren't any spaces in the elements if that matters. I've written a function to do it with for and a counter, but I wondered if there's a better/shorter/canonical Perl way, perhaps a special join syntax or similar. My function to illustrate:

        sub PrintArrayWithNewlines {
            my $counter = 0;
            my $newlineIndex = shift @_;
            foreach my $item (@_) {
                ++$counter;
                print "$item";
                if($counter == $newlineIndex) {
                    $counter = 0;
                    print "\n";
                }
                else {
                    print " ";
                }
            }
        }

    Read the article
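
    For comparison, the same chunk-and-join idea sketched in Python rather than the asker's Perl: slice the array into groups of n, join each group with spaces, and join the groups with newlines. Names are illustrative.

        def print_array_with_newlines(items, n):
            # group items n at a time: spaces within a line, a newline between lines
            lines = (" ".join(items[i:i + n]) for i in range(0, len(items), n))
            print("\n".join(lines))

        print_array_with_newlines([str(x) for x in range(1, 26)], 10)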

  • Python text file processing speed issues

    - by Anonymouslemming
    I'm having a problem processing a largish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close

    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried uncompressing the file and dealing with it using open instead of gzip.open, but that made very little difference to the overall time.

    Read the article
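
    One commonly suggested workaround, sketched below and untested against the asker's data: let an external zcat do the decompression (as the Perl version does) and have Python read the already-decompressed stream, which sidesteps gzip.open's per-line overhead on older Python versions. The path is a placeholder.

        import subprocess

        path_to_log = "/var/log/example.log.gz"  # placeholder path
        counter = 0

        # decompress with an external zcat and read its stdout, mirroring Perl's open("zcat ... |")
        proc = subprocess.Popen(["zcat", path_to_log], stdout=subprocess.PIPE)
        for line in proc.stdout:
            counter += 1
            if counter % 1000000 == 0:
                print(counter)
        proc.stdout.close()
        proc.wait()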

  • NSPredicate aggregate function [SIZE] gives 'unsupported function expression' error

    - by jinglesthula
    iOS 4: I have entities in Core Data (using SQLite, which is a requirement) of:

        Request
        Response (which has a property personId)
        Revision

    Relationships are:

        Request <-- Revision
        Request <-- Response
        Revision <-- Response

    (e.g. each request may have many responses; each request/response pair may have many revisions.) I'm trying to do a predicate to get all Responses with a given personId that have zero Revisions. Using (personId == %d) && (Request.Revision[SIZE] == 0) in my predicate string gives me a runtime exception "Unsupported function expression Request.Revision[SIZE]". The documentation seems pretty sparse on aggregate functions, only noting that they exist, but with no syntax or examples. Not sure if it's my syntax or if the SIZE function really isn't supported in iOS.

    Read the article

  • Help needed to assign the value of an array element to a variable

    - by user594861
    This is my code:

        int random=0;
        int counter=0;
        while(counter<25) {
            random=arc4random() % 40;
            BOOL flag=[array containsObject:[NSNumber numberWithInt:random]];
            if(flag) {
                counter--;
            }
            else {
                [array addObject:[NSNumber numberWithInt:random]];
                int p=[array objectAtIndex:counter]; //**line4
                counter++;
            }
        }

    I am getting a warning on the marked line (line 4): I am not able to assign the value of an object from the array to a variable. Please help me. Thanks.

    Read the article

  • C# IndexOutOfRange issue, probably simple.

    - by MWC
    Banging my head off the wall due to this. I'm getting the error at cell[rcell] = repack[counter] even though I have 190 items in the repack array.

        private string csvtogrid(string input)
        {
            input = input.Replace("\r", ",").Substring(2).TrimEnd(',').Trim().Replace("\n", ",").Replace(",,,", ",").Replace(",,", ",");
            string[] repack = input.Split(',');
            string[] cell = { };
            int rcell = 1;
            for (int counter = 1; counter < repack.Length; counter++)
            {
                if (rcell < 4)
                {
                    cell[rcell] = repack[counter];
                    rcell++;
                }
                procgrid.Rows.Add(cell[1], cell[2], cell[3]);
                rcell = 1;
            }
            richTextBox1.Text = input;
            return null;
        }

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load around 2.5 hours after the tests start; after this we stay at this load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under the category ASP.NET Applications) suddenly spikes to values of 36-72 million. This keeps happening intermittently, i.e. we see the issue once in every 3 performance runs against the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8 core web servers.

    Summarizing...

    Issue: The counter Requests/Sec (under the category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec on 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase. The graph of this counter almost overlaps with that of Requests/Sec until the spike appears; when the spike appears, ISAPI Extension Requests/sec actually shows a drop.

    Test settings: Performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load 5000 users. This load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). The issue is reproducible though it happens intermittently (i.e. it occurs in one in three or four runs).

    Test environment: Web site deployed on 4 web servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones. .NET Framework 3.5 SP1 installed on all 4 web servers. Application hosted on IIS 6.0 running in worker process isolation mode.

    Read the article

  • Problem counting item frequency on T-SQL

    - by Raúl Roa
    I'm trying to count the frequency of the numbers 1 to 100 across different fields of a table. Let's say I have the table "Results" with the following data:

        LottoId   Winner    Second    Third
        --------- --------- --------- ---------
        1         1         2         3
        2         1         2         3

    I'd like to be able to get the frequency per number. For that I'm using the following code:

        --Creating numbers temp table
        CREATE TABLE #Numbers( Number int)

        --Inserting the numbers into the temp table
        declare @counter int
        set @counter = 0
        while @counter < 100
        begin
            set @counter = @counter + 1
            INSERT INTO #Numbers(Number) VALUES(@counter)
        end

        SELECT #Numbers.Number, Count(Results.Winner) as Winner, Count(Results.Second) as Second, Count(Results.Third) as Third
        FROM #Numbers
        LEFT JOIN Results ON #Numbers.Number = Results.Winner OR #Numbers.Number = Results.Second OR #Numbers.Number = Results.Third
        GROUP BY #Numbers.Number

    The problem is that the counts repeat the same values for each number. In this particular case I'm getting the following result:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         2         2
        2         2         2         2
        3         2         2         2
        ...

    When I should get this:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         0         0
        2         0         2         0
        3         0         0         2
        ...

    What am I missing?

    Read the article
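
    The per-column tally the query above is after, sketched in Python to make the intent concrete: count each position separately instead of joining once on an OR of all three columns, which is what makes every column report the same total. In SQL this is usually expressed with conditional aggregation (one SUM(CASE WHEN ... THEN 1 ELSE 0 END) per column) rather than plain COUNT over the OR join; the data below is made up.

        from collections import Counter

        results = [
            {"winner": 1, "second": 2, "third": 3},
            {"winner": 1, "second": 2, "third": 3},
        ]

        winner, second, third = Counter(), Counter(), Counter()
        for row in results:
            winner[row["winner"]] += 1   # each column is tallied on its own
            second[row["second"]] += 1
            third[row["third"]] += 1

        for number in range(1, 101):
            print(number, winner[number], second[number], third[number])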

  • sqrt(int_value + 0.0) -- Does it have a purpose?

    - by Earlz
    While doing some homework from my very strange C++ book, which I've been told before to throw away, I ran into a very peculiar code segment. I know homework material always throws in extra "mystery" to try to confuse you, like indenting 2 lines after a single-statement for-loop, but this one confuses me because it seems to serve some real purpose. Basically it is like this:

        int counter=10;
        ...
        if(pow(floor(sqrt(counter+0.0)),2) == counter)
        ...

    I'm especially interested in this part: sqrt(counter+0.0). Is there some purpose to the +0.0? Is this the poor man's way of doing a static cast to double? Does it avoid some compiler warning on a compiler I do not use? The entire program printed the exact same thing and compiled without warnings on g++ whenever I left out the +0.0 part. Maybe I'm not using a weird enough compiler?

    Edit: Also, does gcc just break the standard and not raise an error for an ambiguous reference, since sqrt can take 3 different types of parameters?

        [earlz@EarlzBeta-~/projects/homework1] $ cat calc.cpp
        #include <cmath>
        int main(){
            int counter=0;
            sqrt(counter);
        }
        [earlz@EarlzBeta-~/projects/homework1] $ g++ calc.cpp
        /usr/lib/libstdc++.so.47.0: warning: strcpy() is almost always misused, please use strlcpy()
        /usr/lib/libstdc++.so.47.0: warning: strcat() is almost always misused, please use strlcat()
        [earlz@EarlzBeta-~/projects/homework1] $

    Read the article

  • Uninitialized array offset

    - by kimmothy16
    I am using PHP to create a form with an array of fields. Basically you can add an unlimited number of 'people' to the form, and each person has a first name, last name, and phone number. The form requires a phone number for the first person only. If you leave the phone number field blank on any of the others, the handler file is supposed to use the phone number from the first person. So, my fields are:

        person[] - a hidden field whose value is this person's primary key
        fname[] - an input field
        lname[] - an input field
        phone[] - an input field

    My form handler looks like this:

        $people = $_POST['person']
        $counter = 0;
        foreach($people as $person):
            if($phone[$counter] == '') {
                // use $phone[0]'s phone number
            } else {
                // use $phone[$counter] number
            }
            $counter = $counter + 1;
        endforeach;

    PHP doesn't like this though; it is throwing a "Notice: Uninitialized string offset" error. I debugged it by running the is_array function on people, fname, lname, and phone, and each returns true. I can also manually echo out $phone[2], etc. and get the correct value. I've also run is_int on the $counter variable and it returned true, so I'm unsure why this isn't working as intended. Any help would be great!

    Read the article

  • realtime visitors with nodejs & redis & socket.io & php

    - by orhan bengisu
    I am new to these technologies. I want to get a realtime visitor count for each product on my site, i.e. a notification like "X users seeing this product". Whenever a user connects to a product, the counter for that product should be increased, and when they disconnect the counter should be decreased, just for that product. I searched a lot of documentation but got confused. I am using the Predis library for PHP. What I have done may be totally wrong; I am not sure where to put createClient, when to subscribe and when to unsubscribe. What I have done so far, on the product detail page:

        $key = "product_views_".$product_id;
        $counter = $redis->incr($key);
        $redis->publish("productCounter", json_encode(array("product_id"=> "1000", "counter"=> $counter )));

    In app.js:

        var app = require('express')()
          , server = require('http').createServer(app)
          , socket = require('socket.io').listen(server,{ log: false })
          , url = require('url')
          , http= require('http')
          , qs = require('querystring')
          , redis = require("redis");
        var connected_socket = 0;
        server.listen(8080);
        var productCounter = redis.createClient();
        productCounter.subscribe("productCounter");
        productCounter.on("message", function(channel, message){
            console.log("%s, the message : %s", channel, message);
            io.sockets.emit(channel,message);
        }
        productCounter.on("unsubscribe", function(){
            //I think to decrease counter here, Should I? and How?
        }
        io.sockets.on('connection', function (socket) {
            connected_socket++;
            socket_id = socket.id;
            console.log("["+socket_id+"] connected");
            socket.on('disconnect', function (socket) {
                connected_socket--;
                console.log("Client disconnected");
                productCounter.unsubscribe("productCounter");
            });
        })

    Thanks a lot for your answers!

    Read the article
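
    A minimal sketch of the counter half of this, using the redis-py client rather than the asker's Node/Predis stack, to show the usual pattern: INCR a per-product key when a viewer arrives, DECR it when they leave, and PUBLISH the new value so a subscriber (e.g. the Socket.IO server) can push it to connected browsers. Key and channel names are made up.

        import json
        import redis

        r = redis.Redis()

        def viewer_connected(product_id):
            # one more viewer on this product; publish the new total
            count = r.incr(f"product_views:{product_id}")
            r.publish("productCounter", json.dumps({"product_id": product_id, "counter": count}))

        def viewer_disconnected(product_id):
            # a viewer left; publish the decreased total
            count = r.decr(f"product_views:{product_id}")
            r.publish("productCounter", json.dumps({"product_id": product_id, "counter": count}))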

  • I am trying to find how many vowels and consonants are in my string in C

    - by John Walter
        #include <stdio.h>
        #include <string.h>

        int main()
        {
            int i;
            int counter=0, counter2=0;
            char *s;
            char name[30];
            char vowel[6] = "AEIOU";
            char consonants[21] = "BCDFGHJKLMNPQRSTVWXYZ";

            printf ("input the string: ");
            scanf ("%s", name);
            printf ("The string is %s\n", name);

            for (i=0; name[i]!='\0'; i++)
            {
                if (s = strchr(vowel, name[i])) {
                    counter++;
                }
                else if (s =strchr(consonants, name[i])) {
                    counter2++;
                }
                printf ("First counter is %d\n", counter);
                printf ("The second counter is %d\n", counter2);
                return 0;
            }
        }

    The question is: what is wrong with my code? Why is counter not working? I have tried a lot of ways and nothing works; maybe someone can explain it to me.

    Read the article
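
    A language-neutral sketch of the counting logic in Python (the question's code is C): normalise case before classifying characters, and only print the totals after the loop finishes. Two likely issues in the C version above are that it compares lowercase input against uppercase letter sets and that it returns from inside the loop after the first character.

        def count_vowels_consonants(name):
            vowels = set("aeiou")
            vowel_count = consonant_count = 0
            for ch in name.lower():          # normalise case once
                if ch in vowels:
                    vowel_count += 1
                elif ch.isalpha():           # any other letter counts as a consonant
                    consonant_count += 1
            return vowel_count, consonant_count

        v, c = count_vowels_consonants("Walter")
        print("First counter is", v)
        print("The second counter is", c)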

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

        ASP.NET web application (source stored in svn)
        SQL Server database (database schema (tables/sprocs) stored in svn)
        db version is synced with the web application assembly version (stored in table 'CurrentVersion')
        CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app

    My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate collecting the migration scripts (updated sprocs etc.) to add to the zip package. I guess by comparing the svn revision of the db that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list changed .sql files; those files would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have a suggestion on how this could be achieved through an msbuild task, either existing or custom? Finally I'll have to autogenerate a script to add to the package that updates the database version table so it stays in sync with the application.

    Read the article

  • Replace string with incremented value

    - by Andrei
    I'm trying to write a CSS parser that automatically dispatches URLs in background images to different subdomains in order to parallelize downloads. Basically, I want to replace things like url(/assets/some-background-image.png) with url(http://assets[increment].domain.com/assets/some-background-image.png). I'm using this inside a class that I eventually want to evolve into doing various CSS parsing tasks. Here are the relevant parts of the class:

        private function parallelizeDownloads(){
            static $counter = 1;
            $newURL = "url(http://assets".$counter.".domain.com";
            // The counter needs to be reset when it reaches 4 in order to limit to 4 subdomains.
            if ($counter == 4) {
                $counter = 1;
            }
            $counter ++;
            return $newURL;
        }

    The following is mostly nonsense, but I know the code I'm looking for looks somewhat like this. Note: $this->css contains the CSS string.

        public function replaceURLs() {
            preg_match("/url/i",$this->css,$match);
            foreach($match as $URL) {
                $newURL = self::parallelizeDownloads();
                $this->css = str_replace($match, $newURL,$this->css);
            }
        }

    Read the article
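
    The same rewrite expressed as a Python sketch (the asker's code is PHP): a regex substitution with a callback, where the callback cycles through assets1..assets4 so each matched url(...) gets the next subdomain. The domain and CSS are placeholders; in PHP, preg_replace_callback plays the role that re.sub with a function plays here.

        import itertools
        import re

        subdomains = itertools.cycle([1, 2, 3, 4])  # assets1 .. assets4, round-robin

        def parallelize(match):
            # rewrite url(/assets/...) -> url(http://assetsN.example.com/assets/...)
            n = next(subdomains)
            return f"url(http://assets{n}.example.com{match.group(1)}"

        css = "a { background: url(/assets/one.png); } b { background: url(/assets/two.png); }"
        css = re.sub(r"url\((/assets/[^)]*)", parallelize, css)
        print(css)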

  • [php] Firefox refreshes page 1, even after redirection to page 2

    - by Znarkus
    This is a quite weird and annoying problem, which can be reproduced with the script below. Say we have two pages: script.php and script.php?second. Page 1 creates some database entries and redirects to page 2. On page 2, the user is presented with an editor for said entries. If page 1 for some reason crashes on the first try and prints some error message, a strange thing will happen: if we refresh page 1 (and this time it redirects fine), every consecutive refresh (of page 2) will actually refresh page 1 and again redirect to page 2. In the above example this would create new database entries on every refresh, which is exactly the problem I want to circumvent by redirecting to page 2.

        <?php
        header('Content-type: text/plain');
        session_start();

        if (!isset($_GET['second'])) {
            $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;
            /*$_SESSION['counter'] = 0;
            exit('asd');*/
            header("Location: {$_SERVER['PHP_SELF']}?second", true, 303);
            exit;
        }

        echo "Counter: {$_SESSION['counter']}";

    To try the above complete script, first run it with the commented code intact, then again after enabling the commented code. I've tried 301, 302 and 303 redirections. Does anyone know why this is happening?

    Read the article

  • Trying to write a priority queue in Java but getting "Exception in thread "main" java.lang.ClassCastException"

    - by Dan
    For my data structures class, I am trying to write a program that simulates a car wash, and I want to give fancy cars a higher priority than regular ones using a priority queue. The problem I am having has something to do with Java not being able to cast "Object[]" to "ArrayQueue[]" (ArrayQueue is a simple FIFO implementation). What am I doing wrong and how can I fix it?

        public class PriorityQueue<E>
        {
            private ArrayQueue<E>[] queues;
            private int highest=0;
            private int manyItems=0;

            public PriorityQueue(int h)
            {
                highest=h;
                queues = (ArrayQueue[]) new Object[highest+1];   // <----problem is here
            }

            public void add(E item, int priority)
            {
                queues[priority].add(item);
                manyItems++;
            }

            public boolean isEmpty( )
            {
                return (manyItems == 0);
            }

            public E remove()
            {
                E answer=null;
                int counter=0;
                do {
                    if(!queues[highest-counter].isEmpty()) {
                        answer = queues[highest-counter].remove();
                        counter=highest+1;
                    }
                    else
                        counter++;
                } while(highest-counter>=0);
                return answer;
            }
        }

    Read the article
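
    Apart from the Java-specific generic-array issue (an Object[] can never be cast to an ArrayQueue[]), the underlying structure is just an array of FIFO queues indexed by priority. A Python sketch of that structure, with deques standing in for the ArrayQueue slots; all names are illustrative.

        from collections import deque

        class PriorityQueue:
            def __init__(self, highest):
                self.highest = highest
                # one FIFO queue per priority level 0..highest (each slot must be initialised)
                self.queues = [deque() for _ in range(highest + 1)]
                self.size = 0

            def add(self, item, priority):
                self.queues[priority].append(item)
                self.size += 1

            def is_empty(self):
                return self.size == 0

            def remove(self):
                # scan from the highest priority down, pop from the first non-empty queue
                for priority in range(self.highest, -1, -1):
                    if self.queues[priority]:
                        self.size -= 1
                        return self.queues[priority].popleft()
                return None

        pq = PriorityQueue(2)
        pq.add("regular car", 0)
        pq.add("fancy car", 2)
        print(pq.remove())  # "fancy car" comes out first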

  • Infile incomplete type error

    - by kd7vdb
    I am building a program that takes an input file in this format:

        title author
        title author
        etc

    and outputs to the screen:

        title (author)
        title (author)
        etc

    The problem I am currently getting is an error: "ifstream infile has incomplete type and cannot be defined".

        #include <iostream>
        #include <string>
        #include <ifstream>

        using namespace std;

        string bookTitle [14];
        string bookAuthor [14];

        int loadData (string pathname);
        void showall (int counter);

        int main ()
        {
            int counter;
            string pathname;
            cout<<"Input the name of the file to be accessed: ";
            cin>>pathname;
            loadData (pathname);
            showall (counter);
        }

        int loadData (string pathname) // Loads data from infile into arrays
        {
            ifstream infile;
            int counter = 0;
            infile.open(pathname); //Opens file from user input in main
            if( infile.fail() ) {
                cout << "File failed to open";
                return 0;
            }
            while (!infile.eof()) {
                infile >> bookTitle [14]; //takes input and puts into parallel arrays
                infile >> bookAuthor [14];
                counter++;
            }
            infile.close;
        }

        void showall (int counter) // shows input in title(author) format
        {
            cout<<bookTitle<<"("<<bookAuthor<<")";
        }

    Thanks ahead of time, kd7vdb

    Read the article

  • Java loop to collect the second and third elements every three in an array

    - by mhollander38
    I have a file with data in the form timestamp, coordinate, coordinate, separated by spaces, like this:

        14:25:01.215 370.0 333.0

    I need to loop through and add only the coordinates to an array. The data from the file is read in and put into a String[] called info, from split(" "). I have two problems: I think the end of the file has an extra " " which I need to handle appropriately, and I also want confirmation/suggestions about my loop, because at the moment I am getting sporadic out-of-bounds exceptions. My loop is as follows:

        String[] info;
        info = dataHolder.split(" ");
        ArrayList<String> coOrds1 = new ArrayList<String>();
        for (int counter = 0; counter < info.length; counter = counter+3) {
            coOrds1.add(info[counter+1]);
            coOrds1.add(info[counter+2]);
        }

    Help and suggestions appreciated. The data comes from a text file, but the class receives it in a UDP packet from another class, so I am unsure whether that potentially adds a " " at the end or not.

    Read the article
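
    A sketch of the same parsing in Python (the asker's code is Java), showing the two usual guards: split in a way that drops empty tokens from a trailing space, and bound the loop so it never reads past the last complete timestamp/x/y triple. The sample data is made up.

        data = "14:25:01.215 370.0 333.0 14:25:02.101 371.5 334.2 "  # note the trailing space

        tokens = data.split()                    # split() with no argument drops empty tokens
        coords = []
        for i in range(0, len(tokens) - 2, 3):   # stop while a full triple still remains
            coords.append(tokens[i + 1])
            coords.append(tokens[i + 2])

        print(coords)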

  • Timer in Java swing

    - by Yesha
    I'm trying to replace Thread.sleep with a java swing timer as I hear that is much better for graphics. Before, I had something set up like this, but it was interfering with the graphics. while(counter < array.size){ Thread.sleep(array.get(counter).startTime); //do first task Thread.sleep(array.get(counter).secondTime); //do second task Thread.sleep(array.get(counter).thirdTime); //do third task counter++ } Now, I'm trying to replace each Thread.sleep with one of these and then I have the actual events that happen after this, but it does not seem to be waiting at all. int test = array.get(counter).time; ActionListener taskPerformer = new ActionListener(){ public void actionPerformed(ActionEvent evt){ } }; Timer t = new Timer(test, taskPerformer); t.setRepeats(false); t.start(); Basically, how do I ensure that the program will wait without giving it any code to execute inside of the timer? Thank you!

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it one particular step doesn't work. We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now. Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name. The relevant portion of post-commit.bat is: echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log set SITENAME=staging set SVNPATH=branches/staging/wwwroot/ "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^ --svnUser="svnusername" ^ --svnPass="svnpassword" ^ --ftp-user=ftpuser ^ --ftp-password=ftppassword ^ --ftp-remote-dir=/ ^ --access-url=svn://10.0.100.6/company ^ --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^ --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log: -------------------------- args1: c:\svn-repos\company args0: staging.company.com abspath: c:\svn-repos\company project_dir: branches/staging/wwwroot/ local_repos_path: c:\svn-repos\company getting youngest revision... done, up-to-date -------------------------- However, when I commit something to the repo and it runs automatically, the output is: -------------------------- -------------------------- svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it. #!/usr/bin/env python """Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH Upload to FTP-HOST changes committed to the Subversion repository at REPOS-PATH. Uses svn diff --summarize to only propagate the changed files Options: -?, --help Show this help message. -u, --ftp-user=USER The username for the FTP server. Default: 'anonymous' -p, --ftp-password=P The password for the FTP server. Default: '@' -P, --ftp-port=X Port number for the FTP server. Default: 21 -r, --ftp-remote-dir=DIR The remote directory that is expected to resemble the repository project directory -a, --access-url=URL This is the URL that should be used when trying to SVN export files so that they can be uploaded to the FTP server -s, --status-file=PATH Required. This script needs to store the last successful revision that was transferred to the server. PATH is the location of this file. -d, --project-directory=DIR If the project you are interested in sending to the FTP server is not under the root of the repository (/), set this parameter. Example: -d 'project1/trunk/' This should NOT start with a '/'. 2008.5.2 CKS Fixed possible Windows-related bug with tempfile, where the script didn't have permission to write to the tempfile. Replaced this with a open()-created file created in the CWD. 2008.5.13 CKS Added error logging. Added exception for file-not-found errors when deleting files. 
2008.5.14 CKS Change file open to 'rb' mode, to prevent Python's universal newline support from stripping CR characters, causing later comparisons between FTP and SVN to report changes. """ try: import sys, os import logging logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='svn2ftp.debug.log', filemode='a' ) console = logging.StreamHandler() console.setLevel(logging.ERROR) logging.getLogger('').addHandler(console) import getopt, tempfile, smtplib, traceback, subprocess from io import StringIO import pysvn import ftplib import inspect except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) print(stack_trace) #end capture sys.exit(1) #defaults host = "" user = "anonymous" password = "@" port = 21 repo_path = "" local_repos_path = "" status_file = "" project_directory = "" remote_base_directory = "" toAddrs = "[email protected]" youngest_revision = "" def email(toAddrs, message, subject, fromAddr='[email protected]'): headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject) message = headers + message logging.info('sending email to %s...' % toAddrs) server = smtplib.SMTP('smtp.company.com') server.set_debuglevel(1) server.sendmail(fromAddr, toAddrs, message) server.quit() logging.info('email sent') def captureErrorMessage(e): sout = StringIO() traceback.print_exc(file=sout) errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80) return errorMessage def usage_and_exit(errmsg): """Print a usage message, plus an ERRMSG (if provided), then exit. If ERRMSG is provided, the usage message is printed to stderr and the script exits with a non-zero error code. Otherwise, the usage message goes to stdout, and the script exits with a zero errorcode.""" if errmsg is None: stream = sys.stdout else: stream = sys.stderr print(__doc__, file=stream) if errmsg: print("\nError: %s" % (errmsg), file=stream) sys.exit(2) sys.exit(0) def read_args(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision try: opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:", ["help", "ftp-user=", "ftp-password=", "ftp-port=", "ftp-remote-dir=", "access-url=", "status-file=", "project-directory=", "svnUser=", "svnPass=" ]) except getopt.GetoptError as msg: usage_and_exit(msg) for opt, arg in opts: if opt in ("-?", "--help"): usage_and_exit() elif opt in ("-u", "--ftp-user"): user = arg elif opt in ("-p", "--ftp-password"): password = arg elif opt in ("-SU", "--svnUser"): svnUser = arg elif opt in ("-SP", "--svnPass"): svnPass = arg elif opt in ("-P", "--ftp-port"): try: port = int(arg) except ValueError as msg: usage_and_exit("Invalid value '%s' for --ftp-port." 
% (arg)) if port < 1 or port > 65535: usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.") elif opt in ("-r", "--ftp-remote-dir"): remote_base_directory = arg elif opt in ("-a", "--access-url"): repo_path = arg elif opt in ("-s", "--status-file"): status_file = os.path.abspath(arg) elif opt in ("-d", "--project-directory"): project_directory = arg if len(args) != 3: print(str(args)) usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")") host = args[0] print("args1: " + args[1]) print("args0: " + args[0]) print("abspath: " + os.path.abspath(args[1])) local_repos_path = os.path.abspath(args[1]) print('project_dir:',project_directory) youngest_revision = int(args[2]) if status_file == "" : usage_and_exit("No status file specified") def main(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision read_args() #repository,fs_ptr #get youngest revision print("local_repos_path: " + local_repos_path) print('getting youngest revision...') #youngest_revision = fs.youngest_rev(fs_ptr) assert youngest_revision, "Unable to lookup youngest revision." last_sent_revision = get_last_revision() if youngest_revision == last_sent_revision: # no need to continue. we should be up to date. print('done, up-to-date') return if last_sent_revision or youngest_revision < 10: # Only compare revisions if the DAT file contains a valid # revision number. Otherwise we risk waiting forever while # we parse and uploading every revision in the repo in the case # where a repository is retroactively configured to sync with ftp. pysvn_client = pysvn.Client() pysvn_client.callback_get_login = get_login rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision) rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision) summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False) print('summary len:',len(summary)) if len(summary) > 0 : print('connecting to %s...' % host) ftp = FTPClient(host, user, password) print('connected to %s' % host) ftp.base_path = remote_base_directory print('set remote base directory to %s' % remote_base_directory) #iterate through all the differences between revisions for change in summary : #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate. print('change path:',change.path) ftp_relative_path = apply_basedir(change.path) print('ftp rel path:',ftp_relative_path) #only try to sync path if the path is in our project_directory if ftp_relative_path != "" : is_file = (change.node_kind == pysvn.node_kind.file) if str(change.summarize_kind) == "delete" : print("deleting: " + ftp_relative_path) try: ftp.delete_path("/" + ftp_relative_path, is_file) except ftplib.error_perm as e: if 'cannot find the' in str(e) or 'not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" : local_file = "" if is_file : local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path) print("uploading file: " + ftp_relative_path) ftp.upload_path("/" + ftp_relative_path, is_file, local_file) if is_file : os.remove(local_file) elif str(change.summarize_kind) == "normal" : print("skipping 'normal' element: " + ftp_relative_path) else : raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path) ftp.close() #write back the last revision that was synced print("writing last revision: " + str(youngest_revision)) set_last_revision(youngest_revision) # todo: undo def get_login(a,b,c,d): #arguments don't matter, we're always going to return the same thing try: return True, "svnUsername", "svnPassword", True except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) #end capture sys.exit(1) #functions for persisting the last successfully synced revision def get_last_revision(): if os.path.isfile(status_file) : f=open(status_file, 'r') line = f.readline() f.close() try: i = int(line) except ValueError: i = 0 else: i = 0 f = open(status_file, 'w') f.write(str(i)) f.close() return i def set_last_revision(rev) : f = open(status_file, 'w') f.write(str(rev)) f.close() #augmented ftp client class that can work off a base directory class FTPClient(ftplib.FTP) : def __init__(self, host, username, password) : self.base_path = "" self.current_path = "" ftplib.FTP.__init__(self, host, username, password) def cwd(self, path) : debug_path = path if self.current_path == "" : self.current_path = self.pwd() print("pwd: " + self.current_path) if not os.path.isabs(path) : debug_path = self.base_path + "<" + path path = os.path.join(self.current_path, path) elif self.base_path != "" : debug_path = self.base_path + ">" + path.lstrip("/") path = os.path.join(self.base_path, path.lstrip("/")) path = os.path.normpath(path) #by this point the path should be absolute. if path != self.current_path : print("change from " + self.current_path + " to " + debug_path) ftplib.FTP.cwd(self, path) self.current_path = path else : print("staying put : " + self.current_path) def cd_or_create(self, path) : assert os.path.isabs(path), "absolute path expected (" + path + ")" try: self.cwd(path) except ftplib.error_perm as e: for folder in path.split('/'): if folder == "" : self.cwd("/") continue try: self.cwd(folder) except: print("mkd: (" + path + "):" + folder) self.mkd(folder) self.cwd(folder) def upload_path(self, path, is_file, local_path) : if is_file: (path, filename) = os.path.split(path) self.cd_or_create(path) # Use read-binary to avoid universal newline support from stripping CR characters. f = open(local_path, 'rb') self.storbinary("STOR " + filename, f) f.close() else: self.cd_or_create(path) def delete_path(self, path, is_file) : (path, filename) = os.path.split(path) print("trying to delete: " + path + ", " + filename) self.cwd(path) try: if is_file : self.delete(filename) else: self.delete_path_recursive(filename) except ftplib.error_perm as e: if 'The system cannot find the' in str(e) or '550 File not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise def delete_path_recursive(self, path): if path == "/" : raise "WARNING: trying to delete '/'!" for node in self.nlst(path) : if node == path : #it's a file. delete and return self.delete(path) return if node != "." and node != ".." : self.delete_path_recursive(os.path.join(path, node)) try: self.rmd(path) except ftplib.error_perm as msg : sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg)) # apply the project_directory setting def apply_basedir(path) : #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated if not path.startswith(project_directory) : return "" return path.replace(project_directory, "", 1) def svn_export_temp(pysvn_client, base_path, rev, path) : # Causes access denied error. Couldn't deduce Windows-perm issue. # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it. # Regardless, just generating a simple filename seems to work. #(fd, dest_path) = tempfile.mkstemp() dest_path = tmpName = '%s.tmp' % __file__ exportPath = os.path.join(base_path, path).replace('\\','/') print('exporting %s to %s' % (exportPath, dest_path)) pysvn_client.export( exportPath, dest_path, force=False, revision=rev, native_eol=None, ignore_externals=False, recurse=True, peg_revision=rev ) return dest_path if __name__ == "__main__": logging.info('svnftp.start') try: main() logging.info('svnftp.done') except Exception as e: # capture the location of the error for debug purposes frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace[:-1]) print(stack_trace) # end capture error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e) subject = "ALERT: SVN2FTP Error" message = """An Error occurred while trying to FTP an SVN commit. repo_path = %(repo_path)s\n local_repos_path = %(local_repos_path)s\n project_directory = %(project_directory)s\n remote_base_directory = %(remote_base_directory)s\n error_text = %(error_text)s """ % globals() email(toAddrs, message, subject) logging.error(e) Notes/Disclaimers: I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out. The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right? The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.) So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is: 2012-09-06 15:18:12,496 INFO svnftp.start 2012-09-06 15:18:12,496 INFO svnftp.done And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?

    Read the article

  • Matlab N-Queen Problem

    - by Kay
    main.m counter = 1; n = 8; board = zeros(1,n); back(0, board); disp(counter); sol.m function value = sol(board) for ( i = 1:(length(board))) for ( j = (i+1): (length(board)-1)) if (board(i) == board(j)) value = 0; return; end if ((board(i) - board(j)) == (i-j)) value = 0; return; end if ((board(i) - board(j)) == (j-i)) value = 0; return; end end end value = 1; return; back.m function back(depth, board) disp(board); if ( (depth == length(board)) && (sol2(board) == 1)) counter = counter + 1; end if ( depth < length(board)) for ( i = 0:length(board)) board(1,depth+1) = i; depth = depth + 1; solv2(depth, board); end end I'm attempting to find the maximum number of ways n-queen can be placed on an n-by-n board such that those queens aren't attacking eachother. I cannot figure out the problem with the above matlab code, i doubt it's a problem with my logic since i've tested out this logic in java and it seems to work perfectly well there. The code compiles but the issue is that the results it produces are erroneous. Java Code which works: public static int counter=0; public static boolean isSolution(final int[] board){ for (int i = 0; i < board.length; i++) { for (int j = i + 1; j < board.length; j++) { if (board[i] == board[j]) return false; if (board[i]-board[j] == i-j) return false; if (board[i]-board[j] == j-i) return false; } } return true; } public static void solve(int depth, int[] board){ if (depth == board.length && isSolution(board)) { counter++; } if (depth < board.length) { // try all positions of the next row for (int i = 0; i < board.length; i++) { board[depth] = i; solve(depth + 1, board); } } } public static void main(String[] args){ int n = 8; solve(0, new int[n]); System.out.println(counter); }

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~, (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start the with the p4 head revision as the inital commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initalizing it, the initial commit of remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/* revision X of other files in //depot/prod/*, where X is the head revision and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below which is indexed by hash_unique id(unsigned long) and hash_non_unique transaction id(long). Insertion and retrieval of elements is fast but when I delete elements, it is much slower. I was expecting it to be constant time as key is hashed. e.g To erase elements from container for 10,000 elements, it takes around 2.53927016 seconds for 15,000 elements, it takes around 7.137100068 seconds for 20,000 elements, it takes around 21.391720757 seconds Is it something I am missing or is it expected behavior? class Session { public: Session() { //increment unique id static unsigned long counter = 0; boost::mutex::scoped_lock guard(mx); counter++; m_nId = counter; } unsigned long GetId() { return m_nId; } long GetTransactionHandle(){ return m_nTransactionHandle; } .... private: unsigned long m_nId; long m_nTransactionHandle; boost::mutext mx; .... }; typedef multi_index_container< Session*, indexed_by< hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >, hashed_non_unique< mem_fun<Session,unsigned long,&Session::GetTransactionHandle> > > //end indexed_by > SessionContainer; typedef SessionContainer::nth_index<0>::type SessionById; int main() { ... SessionContainer container; SessionById *pSessionIdView = &get<0>(container); unsigned counter = atoi(argv[1]); vector<Session*> vSes(counter); //insert for(unsigned i = 0; i < counter; i++) { Session *pSes = new Session(); container.insert(pSes); vSes.push_back(pSes); } timespec ts; lock_settime(CLOCK_PROCESS_CPUTIME_ID, &ts); //erase for(unsigned i = 0; i < counter; i++) { pSessionIdView->erase(vSes[i]->getId()); delete vSes[i]; } lock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n"; return (EXIST_SUCCESS); }

    Read the article
