Search Results

Search found 1525 results on 61 pages for 'pair'.


  • How to configure mercurial access controls using apache and hgweb?

    - by Gj1
    I have set up a Mercurial repo to be served using apache+wsgi+hgweb on OS X. It is currently completely open to anyone who stumbles upon my server on the correct port number. How can I set it up so that only people with a username+password pair that I approve can pull and/or push from the repo? I know how to achieve this very easily using SSH, but in this specific case the requirement is that the solution doesn't require defining full-fledged user accounts on the machine for each person to whom I'd like to give access to the repo.
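
    One common way to meet this requirement is HTTP authentication handled entirely by Apache, so no system accounts are involved. A minimal sketch, assuming hgweb is mounted at /hg and using illustrative paths and user names:

        # create the password file (use -c only the first time; add further users without it)
        htpasswd -c /etc/apache2/hg.htpasswd alice

        # in the vhost that serves hgweb.wsgi
        <Location /hg>
            AuthType Basic
            AuthName "Mercurial repositories"
            AuthUserFile /etc/apache2/hg.htpasswd
            Require valid-user
        </Location>

        # in the repository's .hg/hgrc (or the hgweb config)
        [web]
        allow_push = alice
        push_ssl = true

    Basic auth sends credentials in the clear, so it is normally combined with HTTPS; push_ssl (on by default) refuses pushes over plain HTTP.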

    Read the article

  • Cisco - Zone Policy Actions (pass, inspect, drop, log) - What is the difference?

    - by Jonathan Rioux
    Take these commands, for instance: policy-map type inspect IN-OUT_PlcyMAP class type inspect IN-OUT_ClassMAP inspect <------ policy-map type inspect IN-OUT_PlcyMap class type inspect IN-OUT_ClassMAP pass <------ zone security INSIDE zone security OUTSIDE zone-pair security IN->OUT source INSIDE destination OUTSIDE service-policy type inspect IN-OUT_PlcyMAP What is the difference between "inspect", "pass", "drop", "log", and "reset"? I could not find any information on this on Google.

    Read the article

  • Key based authentication (SFTP) failed

    - by rahularyansharma
    I created a pair of RSA keys using the PuTTY key generator. The public key is set up on the server side. The private key is on the Windows client machine and is being used with Pageant and FileZilla, and it works fine. The problem is that when I try to connect to the same SFTP server through the PSFTP command-line tool, it fails. If possible, please provide the steps to set up an SSH key on the Windows client to access SFTP using psftp, either directly or through a batch file.
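
    For what it's worth, a sketch of the usual command-line shape (the host, user and paths here are placeholders): psftp accepts a PuTTY-format private key with -i and a script of SFTP commands with -b, and -batch suppresses interactive prompts so the whole thing can be driven from a batch file:

        psftp user@sftp.example.com -i C:\keys\mykey.ppk -b commands.txt -batch

    where commands.txt might contain:

        cd /incoming
        put report.csv
        quit

    If the key is already loaded in Pageant, -i can be omitted because psftp picks keys up from the agent; a passphrase-protected key with no agent running will block unattended use, since something interactive has to supply the passphrase.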

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODOs so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by title of the task? EDIT: If anyone can please help me to modify this so that undated TODOs are sorted last, that would be greatly appreciated -- at the present time, undated TODOs are not being sorted: ;; multiple sort (defun org-sort-multi (&rest sort-types) "Multiple sorts on a certain level of an outline tree, or plain list items. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Example: To sort first by TODO status, then by priority, then by date, then alphabetically (case-sensitive) use the following call: (org-sort-multi '(?d ?p ?t (t . ?a)))" (interactive) (dolist (x (nreverse sort-types)) (when (char-valid-p x) (setq x (cons nil x))) (condition-case nil (org-sort-entries (car x) (cdr x)) (error nil)))) ;; sort current level (defun lawlist-sort (&rest sort-types) "Sort the current org level. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO status, then by priority" (interactive) (when (equal mode-name "Org") (let ((sort-types (or sort-types (if (or (org-entry-get nil "TODO") (org-entry-get nil "PRIORITY")) '(?d ?t ?p) ;; date, time, priority '((nil . ?a)))))) (save-excursion (outline-up-heading 1) (let ((start (point)) end) (while (and (not (bobp)) (not (eobp)) (<= (point) start)) (condition-case nil (outline-forward-same-level 1) (error (outline-up-heading 1)))) (unless (> (point) start) (goto-char (point-max))) (setq end (point)) (goto-char start) (apply 'org-sort-multi sort-types) (goto-char end) (when (eobp) (forward-line -1)) (when (looking-at "^\\s-*$") ;; (delete-line) ) (goto-char start) ;; (dotimes (x ) (org-cycle)) )))))

    Read the article

  • Elusive race condition in Java

    - by nasufara
    I am creating a graphing calculator. In an attempt to squeeze some more performance out of it, I added some multithreading to the line calculator. Essentially what my current implementation does is construct a thread-safe Queue of X values, then start however many threads it needs, each one calculating a point on the line using the queue to get its values, and then ordering the points using a HashMap when the calculations are done. This implementation works great, and that's not where my race condition is (merely some background info). In examining the performance results from this, I found that the HashMap is a performance bottleneck, since I do that synchronously on one thread. So I figured that ordering each point as it's calculated would work best. I tried a PriorityQueue, but that was slower than the HashMap. I ended up creating an algorithm that essentially works like this: I construct a list of X values to calculate, like in my current algorithm. I then copy that list of values into another class, unimaginatively and temporarily named BlockingList, which is responsible for ordering the points as they are calculated. BlockingList contains a put() method, which takes in two BigDecimals as parameters, the first the X value, the second the calculated Y value. put() will only accept a value if its X value is the next one to be accepted in the list of X values, and will block until another thread gives it the next expected value. For example, since that can be confusing, say I have two threads, Thread-1 and Thread-2. Thread-2 gets the X value 10.0 from the values queue, and Thread-1 gets 9.0. However, Thread-1 completes its calculations first, and calls put() before Thread-2 does. Because BlockingList is expecting to get 10.0 first, and not 9.0, it will block on Thread-1 until Thread-2 finishes and calls put(). Once Thread-2 gives BlockingList 10.0, it notify()s all waiting threads, and expects 9.0 next. This continues until BlockingList gets all of its expected values. (I apologise if that was hard to follow; if you need more clarification, just ask.) As the question title suggests, there is a race condition in here. If I run it without any System.out.printlns, it will sometimes lock because of conflicting wait() and notifyAll()s, but if I put a println in, it will run great.
A small implementation of this is included below, and exhibits the same behavior: import java.math.BigDecimal; import java.util.concurrent.ConcurrentLinkedQueue; public class Example { public static void main(String[] args) throws InterruptedException { // Various scaling values, determined based on the graph size // in the real implementation BigDecimal xMax = new BigDecimal(10); BigDecimal xStep = new BigDecimal(0.05); // Construct the values list, from -10 to 10 final ConcurrentLinkedQueue<BigDecimal> values = new ConcurrentLinkedQueue<BigDecimal>(); for (BigDecimal i = new BigDecimal(-10); i.compareTo(xMax) <= 0; i = i.add(xStep)) { values.add(i); } // Contains the calculated values final BlockingList list = new BlockingList(values); for (int i = 0; i < 4; i++) { new Thread() { public void run() { BigDecimal x; // Keep looping until there are no more values while ((x = values.poll()) != null) { PointPair pair = new PointPair(); pair.realX = x; try { list.put(pair); } catch (Exception ex) { ex.printStackTrace(); } } } }.start(); } } private static class PointPair { public BigDecimal realX; } private static class BlockingList { private final ConcurrentLinkedQueue<BigDecimal> _values; private final ConcurrentLinkedQueue<PointPair> _list = new ConcurrentLinkedQueue<PointPair>(); public BlockingList(ConcurrentLinkedQueue<BigDecimal> expectedValues) throws InterruptedException { // Copy the values into a new queue BigDecimal[] arr = expectedValues.toArray(new BigDecimal[0]); _values = new ConcurrentLinkedQueue<BigDecimal>(); for (BigDecimal dec : arr) { _values.add(dec); } } public void put(PointPair item) throws InterruptedException { while (item.realX.compareTo(_values.peek()) != 0) { synchronized (this) { // Block until someone enters the next desired value wait(); } } _list.add(item); _values.poll(); synchronized (this) { notifyAll(); } } } } My question is can anybody help me find the threading error? Thanks!
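
    One way the described lock-up can occur: in the posted put(), _values.peek() is tested outside the synchronized block, so the thread holding the next expected value can add it, poll(), and notifyAll() in the gap between that test and the wait(); the notification is then missed and the waiter sleeps forever. Below is a minimal sketch (a hypothetical simplified class, not a drop-in replacement for the asker's code) where the test, the update, and the notification all run under one monitor, with the wait() in a re-checked loop:

        import java.math.BigDecimal;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // Simplified variant of BlockingList: the condition test, the queue update and the
        // notifyAll() all happen while holding the monitor, so no wakeup can be lost
        // between the peek() check and the wait().
        class OrderedSink {
            private final ConcurrentLinkedQueue<BigDecimal> expected;
            private final ConcurrentLinkedQueue<BigDecimal> out =
                    new ConcurrentLinkedQueue<BigDecimal>();

            OrderedSink(ConcurrentLinkedQueue<BigDecimal> expectedValues) {
                expected = new ConcurrentLinkedQueue<BigDecimal>(expectedValues);
            }

            public synchronized void put(BigDecimal x) throws InterruptedException {
                // Re-test on every wakeup; this also guards against spurious wakeups.
                while (expected.peek() == null || expected.peek().compareTo(x) != 0) {
                    wait();
                }
                out.add(x);
                expected.poll();
                notifyAll();
            }
        }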

    Read the article

  • Custom Memory Allocator for STL map

    - by Prasoon Tiwari
    This question is about construction of instances of custom allocator during insertion into a std::map. Here is a custom allocator for std::map<int,int> along with a small program that uses it: #include <stddef.h> #include <stdio.h> #include <map> #include <typeinfo> class MyPool { public: void * GetNext() { return malloc(24); } void Free(void *ptr) { free(ptr); } }; template<typename T> class MyPoolAlloc { public: static MyPool *pMyPool; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; typedef T value_type; template<typename X> struct rebind { typedef MyPoolAlloc<X> other; }; MyPoolAlloc() throw() { printf("-------Alloc--CONSTRUCTOR--------%08x %32s\n", this, typeid(T).name()); } MyPoolAlloc(const MyPoolAlloc&) throw() { printf(" Copy Constructor ---------------%08x %32s\n", this, typeid(T).name()); } template<typename X> MyPoolAlloc(const MyPoolAlloc<X>&) throw() { printf(" Construct T Alloc from X Alloc--%08x %32s %32s\n", this, typeid(T).name(), typeid(X).name()); } ~MyPoolAlloc() throw() { printf(" Destructor ---------------------%08x %32s\n", this, typeid(T).name()); }; pointer address(reference __x) const { return &__x; } const_pointer address(const_reference __x) const { return &__x; } pointer allocate(size_type __n, const void * hint = 0) { if (__n != 1) perror("MyPoolAlloc::allocate: __n is not 1.\n"); if (NULL == pMyPool) { pMyPool = new MyPool(); printf("======>Creating a new pool object.\n"); } return reinterpret_cast<T*>(pMyPool->GetNext()); } //__p is not permitted to be a null pointer void deallocate(pointer __p, size_type __n) { pMyPool->Free(reinterpret_cast<void *>(__p)); } size_type max_size() const throw() { return size_t(-1) / sizeof(T); } void construct(pointer __p, const T& __val) { printf("+++++++ %08x %s.\n", __p, typeid(T).name()); ::new(__p) T(__val); } void destroy(pointer __p) { printf("-+-+-+- %08x.\n", __p); __p->~T(); } }; template<typename T> inline bool operator==(const MyPoolAlloc<T>&, const MyPoolAlloc<T>&) { return true; } template<typename T> inline bool operator!=(const MyPoolAlloc<T>&, const MyPoolAlloc<T>&) { return false; } template<typename T> MyPool* MyPoolAlloc<T>::pMyPool = NULL; int main(int argc, char *argv[]) { std::map<int, int, std::less<int>, MyPoolAlloc<std::pair<const int,int> > > m; //random insertions in the map m.insert(std::pair<int,int>(1,2)); m[5] = 7; m[8] = 11; printf("======>End of map insertions.\n"); return 0; } Here is the output of this program: -------Alloc--CONSTRUCTOR--------bffcdaa6 St4pairIKiiE Construct T Alloc from X Alloc--bffcda77 St13_Rb_tree_nodeISt4pairIKiiEE St4pairIKiiE Copy Constructor ---------------bffcdad8 St13_Rb_tree_nodeISt4pairIKiiEE Destructor ---------------------bffcda77 St13_Rb_tree_nodeISt4pairIKiiEE Destructor ---------------------bffcdaa6 St4pairIKiiE ======Creating a new pool object. Construct T Alloc from X Alloc--bffcd9df St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d028 St4pairIKiiE. Destructor ---------------------bffcd9df St4pairIKiiE Construct T Alloc from X Alloc--bffcd95f St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d048 St4pairIKiiE. Destructor ---------------------bffcd95f St4pairIKiiE Construct T Alloc from X Alloc--bffcd95f St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d068 St4pairIKiiE. Destructor ---------------------bffcd95f St4pairIKiiE ======End of map insertions. 
    Construct T Alloc from X Alloc--bffcda23 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d068. Destructor ---------------------bffcda23 St4pairIKiiE Construct T Alloc from X Alloc--bffcda43 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d048. Destructor ---------------------bffcda43 St4pairIKiiE Construct T Alloc from X Alloc--bffcda43 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d028. Destructor ---------------------bffcda43 St4pairIKiiE Destructor ---------------------bffcdad8 St13_Rb_tree_nodeISt4pairIKiiEE The last two columns of the output show that an allocator for std::pair<const int, int> is constructed every time there is an insertion into the map. Why is this necessary? Is there a way to suppress this? Thanks! Edit: This code was tested on an x86 machine with g++ version 4.1.2. If you wish to run it on a 64-bit machine, you'll have to change at least the line return malloc(24). Changing it to return malloc(48) should work.

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles may be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats, etc.) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
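
    One direction the find_one-heavy profile suggests (a sketch only; the cache, the grid cell size and the helper names below are my own illustrative assumptions, not part of the original code): since neighbouring records usually resolve to the same time zone, the lookup can be memoized on a coarse coordinate grid so that most records never touch the timezones collection at all.

        from datetime import timedelta
        from pytz import timezone, utc

        tz_lookup_radius = 0.5  # placeholder; the original code defines this elsewhere
        _tz_cache = {}          # (grid_lng, grid_lat) -> tz name, or None when nothing was found

        def lookup_tz(timezones, lng, lat, cell=0.5):
            # Cache per coarse grid cell; records falling into the same cell reuse the result.
            key = (round(lng / cell), round(lat / cell))
            if key not in _tz_cache:
                found = timezones.find_one({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}})
                _tz_cache[key] = found['tz'] if found else None
            return _tz_cache[key]

        def localize(ts, lng, lat, tz_name):
            # Same conversion as in read_csv_zip: a named zone when known, a longitude offset otherwise.
            if tz_name:
                return ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
            return (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)

    The obvious caveat is that a coarse cache can return the wrong zone for records near a time-zone boundary, so the cell size has to be weighed against the accuracy actually needed; if that is acceptable, the remaining misses can be batched or the cache shared across worker processes when the task is parallelized.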

    Read the article

  • Where can these be posted besides the Python Cookbook?

    - by Noctis Skytower
    Whitespace Assembler #! /usr/bin/env python """Assembler.py Compiles a program from "Assembly" folder into "Program" folder. Can be executed directly by double-click or on the command line. Give name of *.WSA file without extension (example: stack_calc).""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 3 $' ################################################################################ import string from Interpreter import INS, MNEMONIC ################################################################################ def parse(code): program = [] process_virtual(program, code) process_control(program) return tuple(program) def process_virtual(program, code): for line, text in enumerate(code.split('\n')): if not text or text[0] == '#': continue if text.startswith('part '): parse_part(program, line, text[5:]) elif text.startswith(' '): parse_code(program, line, text[5:]) else: syntax_error(line) def syntax_error(line): raise SyntaxError('Line ' + str(line + 1)) ################################################################################ def process_control(program): parts = get_parts(program) names = dict(pair for pair in zip(parts, generate_index())) correct_control(program, names) def get_parts(program): parts = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins if ins == INS.PART: if arg in parts: raise NameError('Part definition was found twice: ' + arg) parts.append(arg) return parts def generate_index(): index = 1 while True: yield index index *= -1 if index > 0: index += 1 def correct_control(program, names): for index, ins in enumerate(program): if isinstance(ins, tuple): ins, arg = ins if ins in HAS_LABEL: if arg not in names: raise NameError('Part definition was never found: ' + arg) program[index] = (ins, names[arg]) ################################################################################ def parse_part(program, line, text): if not valid_label(text): syntax_error(line) program.append((INS.PART, text)) def valid_label(text): if not between_quotes(text): return False label = text[1:-1] if not valid_name(label): return False return True def between_quotes(text): if len(text) < 3: return False if text.count('"') != 2: return False if text[0] != '"' or text[-1] != '"': return False return True def valid_name(label): valid_characters = string.ascii_letters + string.digits + '_' valid_set = frozenset(valid_characters) label_set = frozenset(label) if len(label_set - valid_set) != 0: return False return True ################################################################################ from Interpreter import HAS_LABEL, Program NO_ARGS = Program.NO_ARGS HAS_ARG = Program.HAS_ARG TWO_WAY = tuple(set(NO_ARGS) & set(HAS_ARG)) ################################################################################ def parse_code(program, line, text): for ins, word in enumerate(MNEMONIC): if text.startswith(word): check_code(program, line, text[len(word):], ins) break else: syntax_error(line) def check_code(program, line, text, ins): if ins in TWO_WAY: if text: number = parse_number(line, text) program.append((ins, number)) else: program.append(ins) elif ins in HAS_LABEL: text = parse_label(line, text) program.append((ins, text)) elif ins in HAS_ARG: number = parse_number(line, text) program.append((ins, number)) elif ins in NO_ARGS: if text: syntax_error(line) program.append(ins) else: syntax_error(line) def parse_label(line, 
text): if not text or text[0] != ' ': syntax_error(line) text = text[1:] if not valid_label(text): syntax_error(line) return text ################################################################################ def parse_number(line, text): if not valid_number(text): syntax_error(line) return int(text) def valid_number(text): if len(text) < 2: return False if text[0] != ' ': return False text = text[1:] if '+' in text and '-' in text: return False if '+' in text: if text.count('+') != 1: return False if text[0] != '+': return False text = text[1:] if not text: return False if '-' in text: if text.count('-') != 1: return False if text[0] != '-': return False text = text[1:] if not text: return False valid_set = frozenset(string.digits) value_set = frozenset(text) if len(value_set - valid_set) != 0: return False return True ################################################################################ ################################################################################ from Interpreter import partition_number VMC_2_TRI = { (INS.PUSH, True): (0, 0), (INS.COPY, False): (0, 2, 0), (INS.COPY, True): (0, 1, 0), (INS.SWAP, False): (0, 2, 1), (INS.AWAY, False): (0, 2, 2), (INS.AWAY, True): (0, 1, 2), (INS.ADD, False): (1, 0, 0, 0), (INS.SUB, False): (1, 0, 0, 1), (INS.MUL, False): (1, 0, 0, 2), (INS.DIV, False): (1, 0, 1, 0), (INS.MOD, False): (1, 0, 1, 1), (INS.SET, False): (1, 1, 0), (INS.GET, False): (1, 1, 1), (INS.PART, True): (2, 0, 0), (INS.CALL, True): (2, 0, 1), (INS.GOTO, True): (2, 0, 2), (INS.ZERO, True): (2, 1, 0), (INS.LESS, True): (2, 1, 1), (INS.BACK, False): (2, 1, 2), (INS.EXIT, False): (2, 2, 2), (INS.OCHR, False): (1, 2, 0, 0), (INS.OINT, False): (1, 2, 0, 1), (INS.ICHR, False): (1, 2, 1, 0), (INS.IINT, False): (1, 2, 1, 1) } ################################################################################ def to_trinary(program): trinary_code = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins trinary_code.extend(VMC_2_TRI[(ins, True)]) trinary_code.extend(from_number(arg)) else: trinary_code.extend(VMC_2_TRI[(ins, False)]) return tuple(trinary_code) def from_number(arg): code = [int(arg < 0)] if arg: for bit in reversed(list(partition_number(abs(arg), 2))): code.append(bit) return code + [2] return code + [0, 2] to_ws = lambda trinary: ''.join(' \t\n'[index] for index in trinary) def compile_wsa(source): program = parse(source) trinary = to_trinary(program) ws_code = to_ws(trinary) return ws_code ################################################################################ ################################################################################ import os import sys import time import traceback def main(): name, source, command_line, error = get_source() if not error: start = time.clock() try: ws_code = compile_wsa(source) except: print('ERROR: File could not be compiled.\n') traceback.print_exc() error = True else: path = os.path.join('Programs', name + '.ws') try: open(path, 'w').write(ws_code) except IOError as err: print(err) error = True else: div, mod = divmod((time.clock() - start) * 1000, 1) args = int(div), '{:.3}'.format(mod)[1:] print('DONE: Comipled in {}{} ms'.format(*args)) handle_close(error, command_line) def get_source(): if len(sys.argv) > 1: command_line = True name = sys.argv[1] else: command_line = False try: name = input('Source File: ') except: return None, None, False, True print() path = os.path.join('Assembly', name + '.wsa') try: return name, open(path).read(), command_line, False except IOError as err: 
print(err) return None, None, command_line, True def handle_close(error, command_line): if error: usage = 'Usage: {} <assembly>'.format(os.path.basename(sys.argv[0])) print('\n{}\n{}'.format('-' * len(usage), usage)) if not command_line: time.sleep(10) ################################################################################ if __name__ == '__main__': main() Whitespace Helpers #! /usr/bin/env python """Helpers.py Includes a function to encode Python strings into my WSA format. Has a "PRINT_LINE" function that can be copied to a WSA program. Contains a "PRINT" function and documentation as an explanation.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 1 $' ################################################################################ def encode_string(string, addr): print(' push', addr) print(' push', len(string)) print(' set') addr += 1 for offset, character in enumerate(string): print(' push', addr + offset) print(' push', ord(character)) print(' set') ################################################################################ # Prints a string with newline. # push addr # call "PRINT_LINE" """ part "PRINT_LINE" call "PRINT" push 10 ochr back """ ################################################################################ # def print(array): # if len(array) <= 0: # return # offset = 1 # while len(array) - offset >= 0: # ptr = array.ptr + offset # putch(array[ptr]) # offset += 1 """ part "PRINT" # Line 1-2 copy get less "__PRINT_RET_1" copy get zero "__PRINT_RET_1" # Line 3 push 1 # Line 4 part "__PRINT_LOOP" copy copy 2 get swap sub less "__PRINT_RET_2" # Line 5 copy 1 copy 1 add # Line 6 get ochr # Line 7 push 1 add goto "__PRINT_LOOP" part "__PRINT_RET_2" away part "__PRINT_RET_1" away back """ Whitespace Interpreter #! /usr/bin/env python """Interpreter.py Runs programs in "Programs" and creates *.WSO files when needed. Can be executed directly by double-click or on the command line. 
If run on command line, add "ASM" flag to dump program assembly.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 4 $' ################################################################################ def test_file(path): disassemble(parse(trinary(load(path))), True) ################################################################################ load = lambda ws: ''.join(c for r in open(ws) for c in r if c in ' \t\n') trinary = lambda ws: tuple(' \t\n'.index(c) for c in ws) ################################################################################ def enum(names): names = names.replace(',', ' ').split() space = dict((reversed(pair) for pair in enumerate(names)), __slots__=()) return type('enum', (object,), space)() INS = enum('''\ PUSH, COPY, SWAP, AWAY, \ ADD, SUB, MUL, DIV, MOD, \ SET, GET, \ PART, CALL, GOTO, ZERO, LESS, BACK, EXIT, \ OCHR, OINT, ICHR, IINT''') ################################################################################ def parse(code): ins = iter(code).__next__ program = [] while True: try: imp = ins() except StopIteration: return tuple(program) if imp == 0: # [Space] parse_stack(ins, program) elif imp == 1: # [Tab] imp = ins() if imp == 0: # [Tab][Space] parse_math(ins, program) elif imp == 1: # [Tab][Tab] parse_heap(ins, program) else: # [Tab][Line] parse_io(ins, program) else: # [Line] parse_flow(ins, program) def parse_number(ins): sign = ins() if sign == 2: raise StopIteration() buffer = '' code = ins() if code == 2: raise StopIteration() while code != 2: buffer += str(code) code = ins() if sign == 1: return int(buffer, 2) * -1 return int(buffer, 2) ################################################################################ def parse_stack(ins, program): code = ins() if code == 0: # [Space] number = parse_number(ins) program.append((INS.PUSH, number)) elif code == 1: # [Tab] code = ins() number = parse_number(ins) if code == 0: # [Tab][Space] program.append((INS.COPY, number)) elif code == 1: # [Tab][Tab] raise StopIteration() else: # [Tab][Line] program.append((INS.AWAY, number)) else: # [Line] code = ins() if code == 0: # [Line][Space] program.append(INS.COPY) elif code == 1: # [Line][Tab] program.append(INS.SWAP) else: # [Line][Line] program.append(INS.AWAY) def parse_math(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.ADD) elif code == 1: # [Space][Tab] program.append(INS.SUB) else: # [Space][Line] program.append(INS.MUL) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.DIV) elif code == 1: # [Tab][Tab] program.append(INS.MOD) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_heap(ins, program): code = ins() if code == 0: # [Space] program.append(INS.SET) elif code == 1: # [Tab] program.append(INS.GET) else: # [Line] raise StopIteration() def parse_io(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.OCHR) elif code == 1: # [Space][Tab] program.append(INS.OINT) else: # [Space][Line] raise StopIteration() elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.ICHR) elif code == 1: # [Tab][Tab] program.append(INS.IINT) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_flow(ins, program): code = ins() if code == 0: # [Space] code = 
ins() label = parse_number(ins) if code == 0: # [Space][Space] program.append((INS.PART, label)) elif code == 1: # [Space][Tab] program.append((INS.CALL, label)) else: # [Space][Line] program.append((INS.GOTO, label)) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] label = parse_number(ins) program.append((INS.ZERO, label)) elif code == 1: # [Tab][Tab] label = parse_number(ins) program.append((INS.LESS, label)) else: # [Tab][Line] program.append(INS.BACK) else: # [Line] code = ins() if code == 2: # [Line][Line] program.append(INS.EXIT) else: # [Line][Space] or [Line][Tab] raise StopIteration() ################################################################################ MNEMONIC = '\ push copy swap away add sub mul div mod set get part \ call goto zero less back exit ochr oint ichr iint'.split() HAS_ARG = [getattr(INS, name) for name in 'PUSH COPY AWAY PART CALL GOTO ZERO LESS'.split()] HAS_LABEL = [getattr(INS, name) for name in 'PART CALL GOTO ZERO LESS'.split()] def disassemble(program, names=False): if names: names = create_names(program) for ins in program: if isinstance(ins, tuple): ins, arg = ins assert ins in HAS_ARG has_arg = True else: assert INS.PUSH <= ins <= INS.IINT has_arg = False if ins == INS.PART: if names: print(MNEMONIC[ins], '"' + names[arg] + '"') else: print(MNEMONIC[ins], arg) elif has_arg and ins in HAS_ARG: if ins in HAS_LABEL and names: assert arg in names print(' ' + MNEMONIC[ins], '"' + names[arg] + '"') else: print(' ' + MNEMONIC[ins], arg) else: print(' ' + MNEMONIC[ins]) ################################################################################ def create_names(program): names = {} number = 1 for ins in program: if isinstance(ins, tuple) and ins[0] == INS.PART: label = ins[1] assert label not in names names[label] = number_to_name(number) number += 1 return names def number_to_name(number): name = '' for offset in reversed(list(partition_number(number, 27))): if offset: name += chr(ord('A') + offset - 1) else: name += '_' return name def partition_number(number, base): div, mod = divmod(number, base) yield mod while div: div, mod = divmod(div, base) yield mod ################################################################################ CODE = (' \t\n', ' \n ', ' \t \t\n', ' \n\t', ' \n\n', ' \t\n \t\n', '\t ', '\t \t', '\t \n', '\t \t ', '\t \t\t', '\t\t ', '\t\t\t', '\n \t\n', '\n \t \t\n', '\n \n \t\n', '\n\t \t\n', '\n\t\t \t\n', '\n\t\n', '\n\n\n', '\t\n ', '\t\n \t', '\t\n\t ', '\t\n\t\t') EXAMPLE = ''.join(CODE) ################################################################################ NOTES = '''\ STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint''' ################################################################################ ################################################################################ class Stack: def __init__(self): self.__data = [] # Stack Operators def push(self, number): self.__data.append(number) def copy(self, number=None): if number is None: self.__data.append(self.__data[-1]) else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size self.__data.append(self.__data[index]) def swap(self): self.__data[-2], self.__data[-1] = self.__data[-1], self.__data[-2] def away(self, number=None): if number is None: self.__data.pop() else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size del 
self.__data[index:-1] # Math Operators def add(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix + suffix) def sub(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix - suffix) def mul(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix * suffix) def div(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix // suffix) def mod(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix % suffix) # Program Operator def pop(self): return self.__data.pop() ################################################################################ class Heap: def __init__(self): self.__data = {} def set_(self, addr, item): if item: self.__data[addr] = item elif addr in self.__data: del self.__data[addr] def get_(self, addr): return self.__data.get(addr, 0) ################################################################################ import os import zlib import msvcrt import pickle import string class CleanExit(Exception): pass NOP = lambda arg: None DEBUG_WHITESPACE = False ################################################################################ class Program: NO_ARGS = INS.COPY, INS.SWAP, INS.AWAY, INS.ADD, \ INS.SUB, INS.MUL, INS.DIV, INS.MOD, \ INS.SET, INS.GET, INS.BACK, INS.EXIT, \ INS.OCHR, INS.OINT, INS.ICHR, INS.IINT HAS_ARG = INS.PUSH, INS.COPY, INS.AWAY, INS.PART, \ INS.CALL, INS.GOTO, INS.ZERO, INS.LESS def __init__(self, code): self.__data = code self.__validate() self.__build_jump() self.__check_jump() self.__setup_exec() def __setup_exec(self): self.__iptr = 0 self.__stck = stack = Stack() self.__heap = Heap() self.__cast = [] self.__meth = (stack.push, stack.copy, stack.swap, stack.away, stack.add, stack.sub, stack.mul, stack.div, stack.mod, self.__set, self.__get, NOP, self.__call, self.__goto, self.__zero, self.__less, self.__back, self.__exit, self.__ochr, self.__oint, self.__ichr, self.__iint) def step(self): ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def run(self): while True: ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def __oint(self): for digit in str(self.__stck.pop()): msvcrt.putwch(digit) def __ichr(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() while True: char = msvcrt.getwch() if char in '\x00\xE0': msvcrt.getwch() elif char in string.printable: char = char.replace('\r', '\n') msvcrt.putwch(char) break item = ord(char) # Storing Number self.__heap.set_(addr, item) def __iint(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() buff = '' char = msvcrt.getwch() while char != '\r' or not buff: if char in '\x00\xE0': msvcrt.getwch() elif char in '+-' and not buff: msvcrt.putwch(char) buff += char elif '0' <= char <= '9': msvcrt.putwch(char) buff += char elif char == '\b': if buff: buff = buff[:-1] msvcrt.putwch(char) msvcrt.putwch(' ') msvcrt.putwch(char) char = msvcrt.getwch() msvcrt.putwch(char) msvcrt.putwch('\n') item = int(buff) # Storing Number self.__heap.set_(addr, item) def __goto(self, label): self.__iptr = self.__jump[label] def __zero(self, label): if self.__stck.pop() == 0: self.__iptr = self.__jump[label] def __less(self, label): if self.__stck.pop() < 0: self.__iptr = self.__jump[label] def __exit(self): self.__setup_exec() raise CleanExit() def 
__set(self): item = self.__stck.pop() addr = self.__stck.po

    Read the article

  • Re: Help with Boost Grammar

    - by Decmac04
    I have redesigned and extended the grammar I asked about earlier as shown below: // BIFAnalyser.cpp : Defines the entry point for the console application. // // /*============================================================================= Copyright (c) Temitope Jos Onunkun 2010 http://www.dcs.kcl.ac.uk/pg/onun/ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) =============================================================================*/ //////////////////////////////////////////////////////////////////////////// // // // B Machine parser using the Boost "Grammar" and "Semantic Actions". // // // //////////////////////////////////////////////////////////////////////////// include include include include include include //////////////////////////////////////////////////////////////////////////// using namespace std; using namespace boost::spirit; //////////////////////////////////////////////////////////////////////////// // // Semantic Actions // //////////////////////////////////////////////////////////////////////////// // // namespace { //semantic action function on individual lexeme void do_noint(char const* start, char const* end) { string str(start, end); if (str != "NAT1") cout << "PUSH(" << str << ')' << endl; } //semantic action function on addition of lexemes void do_add(char const*, char const*) { cout << "ADD" << endl; // for(vector::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) // cout << *vi << " "; } //semantic action function on subtraction of lexemes void do_subt(char const*, char const*) { cout << "SUBTRACT" << endl; } //semantic action function on multiplication of lexemes void do_mult(char const*, char const*) { cout << "\nMULTIPLY" << endl; } //semantic action function on division of lexemes void do_div(char const*, char const*) { cout << "\nDIVIDE" << endl; } // // vector flowTable; //semantic action function on simple substitution void do_sSubst(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*:=()",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; // int counter = 0; // tracks token position for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } counter = dx.size(); // vector d_hat; //stores set of dependency pairs string dep; //pairs variables as string object // dependency.first = *tok.begin(); vector FV; for(int unsigned i=1; i < dx.size(); i++) { // if(!atoi(dx.at(i).c_str()) && (dx.at(i) !=" ")) { dependency.second = dx.at(i); dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); vector<string> row; row.push_back(dependency.first); //push x_hat into first column of each row for(unsigned int j=0; j<2; j++) { row.push_back(dependency.second);//push an element (column) into the row } flowTable.push_back(row); //Add the row to the main vector } } //displays internal representation of information flow table cout << "\n****************\nDependency Table\n****************\n"; cout << "X_Hat\tDx\tG_Hat\n"; cout << "-----------------------------\n"; for(unsigned int i=0; i < flowTable.size(); i++) { for(unsigned int j=0; j<2; j++) { cout << 
flowTable[i][j] << "\t "; } if (*tok.begin() != "WHILE" ) //if there are no global flows, cout << "\t{}"; //display empty set cout << "\n"; } cout << "***************\n\n"; for(int unsigned j=0; j < FV.size(); j++) { if(FV.at(j) != dependency.second) dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); } cout << "PUSH(" << str << ')' << endl; cout << "\n*******\nDependency pairs\n*******\n"; for(int unsigned i=0; i < d_hat.size(); i++) cout << d_hat.at(i) << "\n...\n"; cout << "\nSIMPLE SUBSTITUTION\n\n"; } //semantic action function on multiple substitution void do_mSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; //cout << "\nMULTIPLE SUBSTITUTION\n\n"; } //semantic action function on unbounded choice substitution void do_mChoice(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nUNBOUNDED CHOICE SUBSTITUTION\n\n"; } void do_logicExpr(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*=:()<",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); //pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } for(unsigned int i=0; i cout << "PUSH(" << str << ')' << endl; cout << "\nPREDICATE\n\n"; } void do_predicate(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMULTIPLE PREDICATE\n\n"; } void do_ifSelectPre(char const* start, char const* end) { string str(start, end); //if cout << "PUSH(" << str << ')' << endl; cout << "\nPROTECTED SUBSTITUTION\n\n"; } //semantic action function on machine substitution void do_machSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMACHINE SUBSTITUTION\n\n"; } } //////////////////////////////////////////////////////////////////////////// // // Machine Substitution Grammar // //////////////////////////////////////////////////////////////////////////// // Simple substitution grammar parser with integer values removed struct Substitution : public grammar { template struct definition { definition(Substitution const& ) { machine_subst = ( (simple_subst) | (multi_subst) | (if_select_pre_subst) | (unbounded_choice) )[&do_machSubst] ; unbounded_choice = str_p("ANY") ide_list str_p("WHERE") predicate str_p("THEN") machine_subst str_p("END") ; if_select_pre_subst = ( ( str_p("IF") predicate str_p("THEN") machine_subst *( str_p("ELSIF") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END") ) | ( str_p("SELECT") predicate str_p("THEN") machine_subst *( str_p("WHEN") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END")) | ( str_p("PRE") predicate str_p("THEN") machine_subst str_p("END") ) )[&do_ifSelectPre] ; multi_subst = ( (machine_subst) *( ( str_p("||") (machine_subst) ) | ( str_p("[]") (machine_subst) ) ) ) [&do_mSubst] ; simple_subst = (identifier str_p(":=") arith_expr) [&do_sSubst] ; expression = predicate | arith_expr ; predicate = ( (logic_expr) *( ( ch_p('&') (logic_expr) ) | ( str_p("OR") (logic_expr) ) ) )[&do_predicate] ; logic_expr = ( identifier (str_p("<") arith_expr) | (str_p("<") arith_expr) | 
(str_p("/:") arith_expr) | (str_p("<:") arith_expr) | (str_p("/<:") arith_expr) | (str_p("<<:") arith_expr) | (str_p("/<<:") arith_expr) | (str_p("<=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) ) [&do_logicExpr] ; arith_expr = term *( ('+' term)[&do_add] | ('-' term)[&do_subt] ) ; term = factor ( ('' factor)[&do_mult] | ('/' factor)[&do_div] ) ; factor = lexeme_d[( identifier | +digit_p)[&do_noint]] | '(' expression ')' | ('+' factor) ; ide_list = identifier *( ch_p(',') identifier ) ; identifier = alpha_p +( alnum_p | ch_p('_') ) ; } rule machine_subst, unbounded_choice, if_select_pre_subst, multi_subst, simple_subst, expression, predicate, logic_expr, arith_expr, term, factor, ide_list, identifier; rule<ScannerT> const& start() const { return predicate; //return multi_subst; //return machine_subst; } }; }; //////////////////////////////////////////////////////////////////////////// // // Main program // //////////////////////////////////////////////////////////////////////////// int main() { cout << "*********************************\n\n"; cout << "\t\t...Machine Parser...\n\n"; cout << "*********************************\n\n"; // cout << "Type an expression...or [q or Q] to quit\n\n"; string str; int machineCount = 0; char strFilename[256]; //file name store as a string object do { cout << "Please enter a filename...or [q or Q] to quit:\n\n "; //prompt for file name to be input //char strFilename[256]; //file name store as a string object cin strFilename; if(*strFilename == 'q' || *strFilename == 'Q') //termination condition return 0; ifstream inFile(strFilename); // opens file object for reading //output file for truncated machine (operations only) if (inFile.fail()) cerr << "\nUnable to open file for reading.\n" << endl; inFile.unsetf(std::ios::skipws); Substitution elementary_subst; // Simple substitution parser object string next; while (inFile str) { getline(inFile, next); str += next; if (str.empty() || str[0] == 'q' || str[0] == 'Q') break; parse_info< info = parse(str.c_str(), elementary_subst !end_p, space_p); if (info.full) { cout << "\n-------------------------\n"; cout << "Parsing succeeded\n"; cout << "\n-------------------------\n"; } else { cout << "\n-------------------------\n"; cout << "Parsing failed\n"; cout << "stopped at: " << info.stop << "\"\n"; cout << "\n-------------------------\n"; } } } while ( (*strFilename != 'q' || *strFilename !='Q')); return 0; } However, I am experiencing the following unexpected behaviours on testing: The text files I used are: f1.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 . f2.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb . f3.txt, ... containing ...: yy < (xx+7+ww) . f4.txt, ... containing ...: yy < (xx+7+ww) & yy : NAT . When I use multi_subst as start rule both files (f1 and f2) are parsed correctly; When I use machine_subst as start rule file f1 parse correctly, while file f2 fails, producing the error: “Parsing failed stopped at: || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb” When I use predicate as start symbol, file f3 parse correctly, but file f4 yields the error: “ “Parsing failed stopped at: & yy : NAT” Can anyone help with the grammar, please? It appears there are problems with the grammar that I have so far been unable to spot.

    Read the article

  • Security Pattern to store SSH Keys

    - by Mehdi Sadeghi
    I am writing a simple Flask application to submit scientific tasks to remote HPC resources. My application talks to the remote machines in the background via SSH (because it is widely available on various HPC resources). To be able to maintain this connection in the background, I need either to use the user's SSH keys on the running machine (when users have passwordless SSH access to the remote machine) or to store the user's credentials for the remote machines. I am not sure which path to take: should I store the remote machine's username/password, or should I store the user's SSH key pair in the database? I want to know the correct and safe way to connect to remote servers in the background in the context of a web application.

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or even not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
    Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports; the OVS bridges are different from the Linux bridges (controlled by the brctl command), which are also used in this setup. To get started, let's view the OVS structure using the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf Bridge br-int Port br-int Interface br-int type: internal Port "int-br-eth2" Interface "int-br-eth2" Bridge "br-eth2" Port "br-eth2" Interface "br-eth2" type: internal Port "eth2" Interface "eth2" Port "phy-br-eth2" Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post-deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs; we can see that the physical port eth2 is connected to a bridge called "br-eth2". We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form a virtual wire between the two bridges; veth pairs are discussed later in this post. When a virtual machine is created, a port is created on the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d Bridge br-int Port "int-br-eth2" Interface "int-br-eth2" Port br-int Interface br-int type: internal Port "qvocb64ea96-9f" tag: 1 Interface "qvocb64ea96-9f" Bridge "br-eth2" Port "phy-br-eth2" Interface "phy-br-eth2" Port "br-eth2" Interface "br-eth2" type: internal Port "eth2" Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and is tagged with VLAN 1. Every VM that is launched will add a port on the "br-int" bridge for every network interface the VM has. Another useful command on OVS is dump-flows, for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we can see, the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the VLAN as the packets flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program Open vSwitch you can read more about it at http://openvswitch.org, looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces are a very cool Linux feature that can be used for many purposes and are heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration that is not seen from outside of the namespace.
Network Namespaces (netns)

Network namespaces are a very cool Linux feature that can be used for many purposes and are heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and are not seen from outside the namespace. A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply help to organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns.

Let's see how namespaces work through a couple of examples. To control network namespaces we use the ip netns command.

Defining a new namespace:

# ip netns add my-ns
# ip netns list
my-ns

As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:

# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

We can run every command in the namespace context. This is especially useful for debugging with the tcpdump command; we can ping, ssh or define iptables rules, all within the namespace.

Connecting the namespace to the outside world: there are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces and then we can add those interfaces to a namespace.

So first let's add a bridge to OVS:

# ovs-vsctl add-br my-bridge

Now let's add a port on the OVS and make it internal:

# ovs-vsctl add-port my-bridge my-port
# ovs-vsctl set Interface my-port type=internal

And let's connect it into the namespace:

# ip link set my-port netns my-ns

Looking inside the namespace:

# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

my-port   Link encap:Ethernet  HWaddr 22:04:45:E2:85:21
          BROADCAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
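To make this namespace example actually carry traffic, the internal port can be given an IP address and brought up from inside the namespace. This is only a sketch; the subnet below is made up, and whether the ping gets a reply depends on what else is attached to my-bridge:

# ip netns exec my-ns ifconfig my-port 192.168.100.10 netmask 255.255.255.0 up
# ip netns exec my-ns ip addr show my-port
# ip netns exec my-ns ping 192.168.100.1

The address exists only inside the namespace; running ifconfig or ip addr outside of it will not show my-port at all, which is exactly the isolation property OpenStack relies on.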
Now we can add more ports to the OVS bridge and connect them to other namespaces or to other devices like physical interfaces. Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail.

Linux Bridge and veth pairs

A Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is security group enforcement: security groups are implemented using iptables, and iptables rules can only be applied to Linux bridges and not to OVS bridges.

Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool for debugging a network problem. A veth pair is simply a virtual wire, so veths always come in pairs. Typically one side of the veth pair connects to a bridge and the other side connects to another bridge or is simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. The example uses a regular Linux server and not an OpenStack node.

Creating a veth pair; note that we define names for both ends:

# ip link add veth0 type veth peer name veth1
# ifconfig -a
.
.
veth0     Link encap:Ethernet  HWaddr 5E:2C:E6:03:D0:17
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

veth1     Link encap:Ethernet  HWaddr E6:B6:E2:6D:42:B8
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
.
.

To make the example more meaningful, we will create the following setup:

veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server

br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3
eth3 – a physical interface with no IP on it, connected to a private network
eth2 – a physical interface on the remote Linux box, connected to the private network and configured with the IP 50.50.50.51

Once we create the setup we will ping 50.50.50.51 (the remote IP) through veth0 to test that the connection is up:

# brctl addbr br-eth3
# brctl addif br-eth3 eth3
# brctl addif br-eth3 veth1
# brctl show
bridge name     bridge id               STP enabled     interfaces
br-eth3         8000.00505682e7f6       no              eth3
                                                        veth1
# ifconfig veth0 50.50.50.50
# ping -I veth0 50.50.50.51
PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms

When the naming is not as obvious as in the previous example and we don't know which are the paired veth interfaces, we can use the ethtool command to figure this out. The ethtool command returns an index which we can look up with the ip link command, for example:

# ethtool -S veth1
NIC statistics:
     peer_ifindex: 12
# ip link
.
.
12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

Summary

That's all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • Creating Wildcard Certificates with makecert.exe

    - by Shawn Cicoria
    It would be nice to be able to make wildcard certificates for use in development with makecert – turns out, it's really easy. Just ensure that your CN= is the wildcard string to use. The following sequence generates a CA cert, then the public/private key pair for a wildcard certificate:

    REM make the CA
    makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer
    pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx
    REM now make the server wildcard cert
    makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer
    pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx
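    Note that the two makecert lines above are identical and both reference CA.cer/CA.pvk as the issuer. If you want the issuing CA created explicitly as a self-signed root before issuing the wildcard cert, a minimal variant of the sequence looks roughly like this (a sketch only – the CA flags and file names here are assumptions, not part of the original post):

    REM make a self-signed root CA (sketch; names are illustrative)
    makecert -pe -n "CN=ContosoTest CA" -r -a sha1 -len 2048 -cy authority -sky signature -sv CA.pvk CA.cer
    REM issue the wildcard server cert from that CA
    makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer
    REM bundle the key and cert into a pfx for import
    pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx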

    Read the article

  • Wear Glasses To Get Better Job

    - by Gopinath
    Here is a simple tip to impress your next interviewer and land a better job – wear glasses. A new study finds that spectacle-wearers look more professional. Never mind a crisp shirt or a firm handshake. If you want to impress a potential employer, put on a pair of spectacles. Job hunters are more likely to be hired if they wear glasses to their interview. cc image credit: flickr/foshydog. This article titled, Wear Glasses To Get Better Job, was originally published at Tech Dreams.

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, so I'm currently working with SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a HashMap to store this dictionary. When I want to do fuzzy string matching, I first check to see if the string is in the HashMap and then I iterate through all of the other potential keys, calculating the string similarity and storing the k,v pair/s with the highest similarity. Depending on which dictionary I am using this can take a long time (12330 - 1800035 entries). Is there any way to speed this up or make it faster? I am currently writing a memoization function/table as a way of speeding this up, but can anyone else think of a better way to improve the speed? Maybe a different structure or something else I'm missing. Many thanks in advance, Nathan

    Read the article

  • Where do you search/look for game developers for an indie game startup?

    - by G.Campos
    Hey there, I just recently saw that stackoverflow had a game dev sister site, so here I am, wondering if you experienced fellows know where one can search/look for game developers for an indie game startup. In other words: I have a game idea which I've written down with as much detail as possible (so anyone else can understand how it works) and now I'm looking for a heavy php programmer with whom to pair up in order to go from idea to reality. I'm a front-end/interface designer and an intermediate programmer. I recognize my project requires heavy programming skills which I do not have as of today =) So, what websites, communities or places do you recommend I go look into? Where do good programmers interested in indie games go look for projects if they don't have their own? Thanks in advance G.Campos

    Read the article

  • Is there a viable alternative to the agile development methodology? [closed]

    - by Eric Wilson
    The two predominant software-development methodologies are waterfall and agile. When discussing these two, there is often much focus on the particular practices that distinguish them (pair programming, TDD, etc. vs. functional spec, big up-front design, etc.). But the real differences are far deeper, in that these practices come from a philosophy. Waterfall says: Change is costly, so it should be minimized. Agile says: Change is inevitable, so make change cheap. My question is, regardless of what you think of TDD or functional specs, is the waterfall development methodology really viable? Does anyone really think that minimizing change in software is a viable option for those that desire to deliver valuable software? Or is the question really about what sort of practices work best in our situations to manage the inevitable change?

    Read the article

  • Do you own your tools?

    - by Mike Brown
    A colleague of mine wrote a post a while ago asking Do you own your tools. It raises an important question. Do you? I answered way down in the comments. As an independent, I do own my tools. Even when I wasn't independent, I had my own (fully licensed) tools that I used for personal development. I don't think buying your own tools is something to puff your chest up about (just because you can buy a $100 pair of basketball sneakers doesn't mean you'll be as good as Michael Jordan), but it IS an investment in yourself that shouldn't be taken lightly. What do you think, good people?

    Read the article

  • Black & White Video with Youtube

    - by RobertPitt
    Hey'a guys, I have an issue with the Youtube player within Ubuntu: it seems that when playing videos they're in black & white, no matter whether they're normal or HD. I kind of figured out how to prevent this from happening by deleting the cookie PREF that holds a KV pair like so:
    PREF f1=50000000&fv=10.2.154 .youtube.com / Sat, 12 Mar 2011 11:38:29 GMT
    The issue is the Flash version key; when I delete this the color comes back, but obviously when I navigate to another page the cookie is set again. Does anyone know why this issue occurs and a possible fix? Using the latest Google Chrome on the latest version of Ubuntu (installed 3 days ago). Thanks

    Read the article

  • How can a large company foster excellence in its engineers?

    - by Joshiatto
    I am tasked with improving the skills (quality & speed) of engineers in my company. Here are some ideas: pair programming; TDD; automated check-in policies; talks given by experts; awards for coding excellence; encouraging competition among engineers to contribute to GitHub; publishing standards and practices docs on the intranet site; "gamification" of engineering – somehow make becoming badasses into a game they will enjoy playing; training; showcasing GitHub check-ins on screens around the office; adding an "engineer of the month" to the intranet home page. How can I drive traffic to the intranet home page? What crazy futuristic idea would drive engineers to go to the page every day to see who of their peers are making more money than them (inferred via recognition) and then go off and improve their skills and productivity to see their standings improve on the home page??? Or any ideas just to foster collaboration and love for their jobs so they start taking more pride in their work?? Don't take my ideas as symptomatic of our org. I take full responsibility for not knowing the right way to do this.

    Read the article

  • Toolbox Mod Makes the Wii Ultra Portable

    - by Jason Fitzpatrick
    Given the social nature of most Wii games, modifying a toolbox to serve as a Wii briefcase to make toting it to a friend's house easy is only fitting. Courtesy of tinkerer SpicaJames, we find this simple but effective toolbox modification. James originally started his search by investigating getting a Pelican case for his Wii and accessories. When he found the $125 price tag prohibitive (as many of us would for such a side project) he sought out alternatives. A cheap $12 toolbox, a little impact foam, and some handy work with a pair of tin snips to cut out shapes for the Wiimotes, and he had a super cheap and easy to pack and unpack Wii briefcase. Hit up the link below to check out the pictures of his build. Wii Briefcase (translated by Google Translate) [via Hack A Day]

    Read the article

  • What's the most important trait of a programmer desired by peer programmers [closed]

    - by greengit
    Questions here talk about the most important programming skills someone is supposed to possess, and a lot of great answers spring up. The answers and the questioners seem mostly interested in qualities related to getting a better job, nailing an interview, things desired by boss or management, or improving your programming abilities. One thing that often gets blown over is genuine consideration for what traits peers want. They don't much value things like: you're one of those 'gets things done' people for the company, you never miss a deadline, your code is the least painful to review and debug, or you're a team player with fantastic leadership abilities. They probably care more whether you're helpful, are a refreshing person to talk to, are fun to pair program with, everybody wants you to review their code, or something of that nature. I, for one, would care that I come off to my peers as a pleasant programmer to work with, as much as I care about my impression on my boss or management. Is this really significant, and if yes, what are the most desirable traits peers want from each other?

    Read the article

  • Ensure house map maze with lifts can be solved?

    - by Philipp Lenssen
    In my game we see the floors of a house from the side, and the hero can take lifts -- a lift either goes up (to the next lift upwards) or down (to the next lift downwards), depending on the arrow as shown, and there's always a pair of exactly two lifts connected. That's the only way the hero can move vertically, though he can freely move horizontally. The house map is a randomized 11x5 grid with different items, and impassable walls at the far left, the far right, and sometimes in one of the two middle positions: My question: How can I ensure the map is always randomized yet always solvable, so that the hero, starting at the left side of the bottom floor, can always leave it via any upwards-pointing lift at the top floor? For what it's worth I'm using the Lua language for development. Thanks so much!

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-30

    - by Bob Rhubart
    The One Skill All Leaders Should Work On | Scott Edinger blogs.hbr.org Assertiveness, according to HBR blogger Scott Edinger, has the "power to magnify so many other leadership strengths." When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe blogs.hbr.org "Influence becomes ineffective when individuals become so focused on the desired outcome that they fail to fully consider the situation," say Chris Musselwhite and Tammie Plouffe. BPM in Retail Industry | Sanjeev Sharma blogs.oracle.com Sanjeev Sharma shares links to a pair of blog posts that address common BPM use-cases in the Retail industry. Oracle VM: What if you have just 1 HDD system | Yury Velikanov www.pythian.com "To start playing with Oracle VM v3 you need to configure some storage to be used for new VM hosts," says Yury Velikanov. He shows you how in this post. Thought for the Day "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger Dijkstra

    Read the article

  • Why is ssh-add adding duplicate identity keys?

    - by skyblue
    I have created a private/public SSH key pair using "ssh-keygen". I use this to authenticate to remote servers. However, I find that when I log in, "ssh-add -l" already lists my identity key even though I have not added it! If I use the command "ssh-add" it prompts me for my passphrase and loads my identity key. If I then list my keys with "ssh-add -l", it now shows two identity keys! These are obviously the same key, as both have the same fingerprint, but their descriptions are different (the one loaded automatically is "user@host (RSA)" and the one added using ssh-add is "/home/user/.ssh/id_rsa (RSA)"). So why does my identity key appear to be loaded without any action on my part, and why does ssh-add add the same key again? Obviously something must be doing this automatically, but what is it, and how can I stop it?

    Read the article

  • Why does Ubuntu Download recommend 32-bit install?

    - by Warren Pena
    The Ubuntu desktop download screen has a pair of radio buttons you use to select whether you wish to download the 32-bit or 64-bit version. The 64-bit version is labeled "Not recommended for daily desktop usage." If you have a 64-bit processor, why would you not want to use the 64-bit version of Ubuntu? Update for 10.10: They've removed the "Not recommended" label from the 64-bit version and added a "Recommended" label to the 32-bit version. Update for 11.04: Same as 10.10. Update for 12.04: Still says "Recommended" next to 32-bit version of desktop

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >