Search Results

Search found 355 results on 15 pages for 'tuple'.

Page 12 of 15

  • Python: get windows OS version and architecture

    - by Thorfin
    First of all, I don't think this question is a duplicate of http://stackoverflow.com/questions/2208828/detect-64bit-os-windows-in-python because imho it has not been thoroughly answered. The only answer that comes close is: use sys.getwindowsversion() or check for the existence of PROGRAMFILES(X86) (if 'PROGRAMFILES(X86)' in os.environ). But: can we completely rely on the Windows environment variable PROGRAMFILES(X86)? I fear that anyone can create it, even if it's not set by the system. And how can we use sys.getwindowsversion() to get the architecture? Regarding sys.getwindowsversion(): the link http://docs.python.org/library/sys.html#sys.getwindowsversion leads to http://msdn.microsoft.com/en-us/library/ms724451%28VS.85%29.aspx, but I don't see anything related to the architecture (32-bit/64-bit). Moreover, the platform element of the returned tuple seems to be independent of the architecture. One last note: I'm using Python 2.5. Thanks!
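
    One possible way to combine the two hints, sketched with caveats: platform.machine() and the PROCESSOR_ARCHITEW6432 variable are assumptions about the environment (the latter is only set for a 32-bit process running on 64-bit Windows), not something confirmed in the question above.

        import os
        import sys
        import platform

        def windows_info():
            # (major, minor, build) straight from the Win32 version API.
            version = sys.getwindowsversion()[:3]
            # platform.machine() reports 'AMD64' for a native 64-bit interpreter
            # on 64-bit Windows and 'x86' for a 32-bit interpreter.
            arch = platform.machine()
            # A 32-bit Python on 64-bit Windows exposes PROCESSOR_ARCHITEW6432.
            if arch.lower() in ('x86', 'i386', 'i686'):
                arch = os.environ.get('PROCESSOR_ARCHITEW6432', arch)
            return version, arch

        print(windows_info())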

    Read the article

  • getting smallest of coordinates that differ by N or more in Python

    - by user248237
    suppose I have a list of coordinates: data = [[(10, 20), (100, 120), (0, 5), (50, 60)], [(13, 20), (300, 400), (100, 120), (51, 62)]] and I want to take all tuples that either appear in each list in data, or differ from some tuple in a list other than their own by 3 or less. How can I do this efficiently in Python? For the above example, the results should be: [[(100, 120), # since it occurs in both lists (10, 20), (13, 20), # since they differ by only 3 (50, 60), (51, 62)]] (0, 5) and (300, 400) would not be included, since they don't appear in both lists and are not different from elements in lists other than their own by 3 or less. How can this be computed? Thanks.
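
    A brute-force sketch of one reading of the question (keep a tuple if it is within 3 in both coordinates of some tuple in another list, which also covers exact matches); the interpretation is an assumption, and the quadratic scan is only meant to illustrate the logic:

        def close(p, q, n=3):
            # Two points are "close" if both coordinates differ by at most n.
            return abs(p[0] - q[0]) <= n and abs(p[1] - q[1]) <= n

        def filter_points(data, n=3):
            result = []
            for i, lst in enumerate(data):
                others = [p for j, other in enumerate(data) if j != i for p in other]
                for p in lst:
                    if any(close(p, q, n) for q in others) and p not in result:
                        result.append(p)
            return result

        data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
                [(13, 20), (300, 400), (100, 120), (51, 62)]]
        print(filter_points(data))
        # -> [(10, 20), (100, 120), (50, 60), (13, 20), (51, 62)]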

    Read the article

  • apply function using expand.grid in R

    - by kolonel
    I have two vectors x and y. I create a grid using the following function: v = expand.grid(x, y) I have a function defined as follows N <- function(a, b , dat){ m = ncol(Filter(function(z) a*max(z)*min(z) < b , dat[1:ncol(dat)])) return(m) } and then I need to maximize N over a grid of x,y: Maximize <- function(x , y ,dat){ v = as.matrix(expand.grid(x,y)) # Here is where I want to map the values of v and get the maximum element and # get the tuple in v that maximized N temp1 <- max(apply(v , 1 , N(v[[1]] , v[[2]] , dat))) } Thanks

    Read the article

  • A 3-D grid of regularly spaced points

    - by Jack
    I want to create a list containing the 3-D coords of a grid of regularly spaced points, each as a 3-element tuple. I'm looking for advice on the most efficient way to do this. In C++, for instance, I would simply use three nested loops, one for each coordinate. In Matlab, I would probably use the meshgrid function (which would do it in one command). I've read about meshgrid and mgrid in Python, and I've also read that using numpy's broadcasting rules is more efficient. It seems to me that using the zip function in combination with the numpy broadcast rules might be the most efficient way, but zip doesn't seem to be overloaded in numpy.
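
    Two common ways to build such a list, sketched here with an assumed 5x5x5 grid over the unit cube:

        import itertools
        import numpy as np

        # Coordinate values along each axis (assumed 5 points per axis on [0, 1]).
        xs = np.linspace(0.0, 1.0, 5)
        ys = np.linspace(0.0, 1.0, 5)
        zs = np.linspace(0.0, 1.0, 5)

        # Pure Python: itertools.product is the three nested loops in one call.
        points = list(itertools.product(xs, ys, zs))      # 125 3-element tuples

        # NumPy: mgrid builds the full grid at once; reshape to an (N, 3) array.
        grid = np.mgrid[0:1:5j, 0:1:5j, 0:1:5j].reshape(3, -1).T
        points_np = [tuple(p) for p in grid]              # the same 125 points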

    Read the article

  • Switching from php to python

    - by ts
    Hello, I am trying to make a list of things which can be difficult or surprising to someone who is switching from PHP to Python. So far I have a rather short list: forget require/include, learn import (this was the most difficult for me - understanding the package - module - class - object hierarchy and its mapping to the filesystem); you can't just upload a file to the server to have a webpage (mod_python, wsgi etc.); learn the Python way of using variable class names (new $class() vs import + getattr); the / operator in Python 2.x and all the float-related horrors. Those were difficult for me; it takes a few days before the mind adapts to the new paradigm. Afterwards I found a few other areas which could be challenging for someone with (too) many years of PHP: everything is an object; you have to live with exceptions; array vs list, set, dictionary, tuple...; learn (effective) list comprehensions; learn generators. Any other ideas / personal experiences?
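
    For the "variable class name" point, a sketch of the import-plus-getattr idiom; importlib is available from Python 2.7/3.x, and the module and class names below are placeholders:

        import importlib

        def create(module_name, class_name, *args, **kwargs):
            # Rough Python equivalent of PHP's  new $class(...)
            module = importlib.import_module(module_name)   # import by name
            cls = getattr(module, class_name)               # look the class up by name
            return cls(*args, **kwargs)

        # Example (hypothetical names): obj = create("collections", "OrderedDict")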

    Read the article

  • How does * work in Python

    - by Deqing
    I just switched from C++ to Python, and found that sometimes it is a little hard to understand the ideas behind Python. I guess a variable is a reference to the real object. For example, a=(1,2,5) means a refers to (1,2,5), so if b=a, then b and a are two references pointing to the same (1,2,5). It is a little like pointers in C/C++. If I have: def foo(a,b,c): print a,b,c a=(1,3,5) foo(*a) What does * mean here? It looks like it expands tuple a to a[0], a[1] and a[2]. But why does print(*a) not work while print(a[0],a[1],a[2]) works fine?
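
    A small sketch of both behaviours; the __future__ import is what makes the last line legal on Python 2:

        from __future__ import print_function   # make print a function on Python 2

        def foo(a, b, c):
            print(a, b, c)

        t = (1, 3, 5)
        foo(*t)      # the * unpacks the tuple into three positional arguments
        print(*t)    # fine here; without the __future__ import, Python 2 parses
                     # print as a statement and rejects the *t syntax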

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0 # min -15, max 2 #client_encoding = sql_ascii # actually, defaults to database # encoding # These settings are initialized by initdb -- they might be changed lc_messages = 'C' # locale for system error message # strings lc_monetary = 'C' # locale for monetary formatting lc_numeric = 'C' # locale for number formatting lc_time = 'C' # locale for time formatting # - Other Defaults - #explain_pretty_print = on #dynamic_library_path = '$libdir' #local_preload_libraries = '' #--------------------------------------------------------------------------- # LOCK MANAGEMENT #--------------------------------------------------------------------------- #deadlock_timeout = 1s #max_locks_per_transaction = 64 # min 10 # (change requires restart) # Note: each lock table slot uses ~270 bytes of shared memory, and there are # max_locks_per_transaction * (max_connections + max_prepared_transactions) # lock table slots. #--------------------------------------------------------------------------- # VERSION/PLATFORM COMPATIBILITY #--------------------------------------------------------------------------- # - Previous Postgres Versions - #add_missing_from = off #array_nulls = on #backslash_quote = safe_encoding # on, off, or safe_encoding #default_with_oids = off #escape_string_warning = on #standard_conforming_strings = off #regex_flavor = advanced # advanced, extended, or basic #sql_inheritance = on # - Other Platforms & Clients - #transform_null_equals = off #--------------------------------------------------------------------------- # CUSTOMIZED OPTIONS #--------------------------------------------------------------------------- #custom_variable_classes = '' # list of custom variable class names SELECT * FROM pg_stat_activity; datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------+------------- 16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892 16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893 16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894 16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895 16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896 16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897 16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898 16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899 16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900 16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901 16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902 16384 | hqdb | 
3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1 (21 rows)

    Read the article

  • Python - pickling fails for numpy.void objects

    - by I82Much
    >>> idmapfile = open("idmap", mode="w") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap") >>> unpickled = pickle.load(idmapfile) >>> unpickled == idMap False idMap[1] {1537: (552, 1, 1537, 17.793827056884766, 3), 1540: (4220, 1, 1540, 19.31205940246582, 3), 1544: (592, 1, 1544, 18.129131317138672, 3), 1675: (529, 1, 1675, 18.347782135009766, 3), 1550: (4048, 1, 1550, 19.31205940246582, 3), 1424: (1528, 1, 1424, 19.744396209716797, 3), 1681: (1265, 1, 1681, 19.596025466918945, 3), 1560: (3457, 1, 1560, 20.530569076538086, 3), 1690: (477, 1, 1690, 17.395542144775391, 3), 1691: (554, 1, 1691, 13.446117401123047, 3), 1436: (3010, 1, 1436, 19.596025466918945, 3), 1434: (3183, 1, 1434, 19.744396209716797, 3), 1441: (3570, 1, 1441, 20.589576721191406, 3), 1435: (476, 1, 1435, 19.640911102294922, 3), 1444: (527, 1, 1444, 17.98480224609375, 3), 1478: (1897, 1, 1478, 19.596025466918945, 3), 1575: (614, 1, 1575, 19.371648788452148, 3), 1586: (2189, 1, 1586, 19.31205940246582, 3), 1716: (3470, 1, 1716, 19.158674240112305, 3), 1590: (2278, 1, 1590, 19.596025466918945, 3), 1463: (991, 1, 1463, 19.31205940246582, 3), 1594: (1890, 1, 1594, 19.596025466918945, 3), 1467: (1087, 1, 1467, 19.31205940246582, 3), 1596: (3759, 1, 1596, 19.744396209716797, 3), 1602: (3011, 1, 1602, 20.530569076538086, 3), 1547: (490, 1, 1547, 17.994071960449219, 3), 1605: (658, 1, 1605, 19.31205940246582, 3), 1606: (1794, 1, 1606, 16.964881896972656, 3), 1719: (1826, 1, 1719, 19.596025466918945, 3), 1617: (583, 1, 1617, 11.894925117492676, 3), 1492: (3441, 1, 1492, 20.500667572021484, 3), 1622: (3215, 1, 1622, 19.31205940246582, 3), 1628: (2761, 1, 1628, 19.744396209716797, 3), 1502: (1563, 1, 1502, 19.596025466918945, 3), 1632: (1108, 1, 1632, 15.457141876220703, 3), 1468: (3779, 1, 1468, 19.596025466918945, 3), 1642: (3970, 1, 1642, 19.744396209716797, 3), 1518: (612, 1, 1518, 18.570245742797852, 3), 1647: (854, 1, 1647, 16.964881896972656, 3), 1650: (2099, 1, 1650, 20.439058303833008, 3), 1651: (540, 1, 1651, 18.552841186523438, 3), 1653: (613, 1, 1653, 19.237197875976563, 3), 1532: (537, 1, 1532, 18.885730743408203, 3)} >>> unpickled[1] {1537: (64880, 1638, 56700, -1.0808743559293829e+18, 152), 1540: (64904, 1638, 0, 0.0, 0), 1544: (54472, 1490, 0, 0.0, 0), 1675: (6464, 1509, 0, 0.0, 0), 1550: (43592, 1510, 0, 0.0, 0), 1424: (43616, 1510, 0, 0.0, 0), 1681: (0, 0, 0, 0.0, 0), 1560: (400, 152, 400, 2.1299736657737219e-43, 0), 1690: (408, 152, 408, 2.7201111331839077e+26, 34), 1435: (424, 152, 61512, 1.0122952080313192e-39, 0), 1436: (400, 152, 400, 20.250289916992188, 3), 1434: (424, 152, 62080, 1.0122952080313192e-39, 0), 1441: (400, 152, 400, 12.250144958496094, 3), 1691: (424, 152, 42608, 15.813941955566406, 3), 1444: (400, 152, 400, 19.625289916992187, 3), 1606: (424, 152, 42432, 5.2947192852601414e-22, 41), 1575: (400, 152, 400, 6.2537390010262572e-36, 0), 1586: (424, 152, 42488, 1.0122601755697111e-39, 0), 1716: (400, 152, 400, 6.2537390010262572e-36, 0), 1590: (424, 152, 64144, 1.0126357235581501e-39, 0), 1463: (400, 152, 400, 6.2537390010262572e-36, 0), 1594: (424, 152, 32672, 17.002994537353516, 3), 1467: (400, 152, 400, 19.750289916992187, 3), 1596: (424, 152, 7176, 1.0124003054161436e-39, 0), 1602: (400, 152, 400, 18.500289916992188, 3), 1547: (424, 152, 7000, 1.0124003054161436e-39, 0), 1605: (400, 152, 400, 20.500289916992188, 3), 1478: (424, 152, 42256, -6.0222748507426518e+30, 222), 1719: (400, 152, 400, 6.2537390010262572e-36, 0), 1617: (424, 152, 16472, 
1.0124283313854301e-39, 0), 1492: (400, 152, 400, 6.2537390010262572e-36, 0), 1622: (424, 152, 35304, 1.0123190301052127e-39, 0), 1628: (400, 152, 400, 6.2537390010262572e-36, 0), 1502: (424, 152, 63152, 19.627988815307617, 3), 1632: (400, 152, 400, 19.375289916992188, 3), 1468: (424, 152, 38088, 1.0124213248931084e-39, 0), 1642: (400, 152, 400, 6.2537390010262572e-36, 0), 1518: (424, 152, 63896, 1.0127436235399031e-39, 0), 1647: (400, 152, 400, 6.2537390010262572e-36, 0), 1650: (424, 152, 53424, 16.752857208251953, 3), 1651: (400, 152, 400, 19.250289916992188, 3), 1653: (424, 152, 50624, 1.0126497365427934e-39, 0), 1532: (400, 152, 400, 6.2537390010262572e-36, 0)} The keys come out fine, the values are screwed up. I tried same thing loading file in binary mode; didn't fix the problem. Any idea what I'm doing wrong? Edit: Here's the code with binary. Note that the values are different in the unpickled object. >>> idmapfile = open("idmap", mode="wb") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap", mode="rb") >>> unpickled = pickle.load(idmapfile) >>> unpickled==idMap False >>> unpickled[1] {1537: (12176, 2281, 56700, -1.0808743559293829e+18, 152), 1540: (0, 0, 15934, 2.7457842047810522e+26, 108), 1544: (400, 152, 400, 4.9518498821046956e+27, 53), 1675: (408, 152, 408, 2.7201111331839077e+26, 34), 1550: (456, 152, 456, -1.1349175514578289e+18, 152), 1424: (432, 152, 432, 4.5939047815653343e-40, 11), 1681: (408, 152, 408, 2.1299736657737219e-43, 0), 1560: (376, 152, 376, 2.1299736657737219e-43, 0), 1690: (376, 152, 376, 2.1299736657737219e-43, 0), 1435: (376, 152, 376, 2.1299736657737219e-43, 0), 1436: (376, 152, 376, 2.1299736657737219e-43, 0), 1434: (376, 152, 376, 2.1299736657737219e-43, 0), 1441: (376, 152, 376, 2.1299736657737219e-43, 0), 1691: (376, 152, 376, 2.1299736657737219e-43, 0), 1444: (376, 152, 376, 2.1299736657737219e-43, 0), 1606: (25784, 2281, 376, -3.2883343074537754e+26, 34), 1575: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1586: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1716: (24240, 2281, 376, -3.0093091599657311e-35, 26), 1590: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1463: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1594: (24240, 2281, 376, -4123208450048.0, 196), 1467: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1596: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1602: (25784, 2281, 376, -5.9963281433905448e+26, 76), 1547: (25784, 2281, 376, -218106240.0, 139), 1605: (25784, 2281, 376, -3.7138649803377281e+27, 56), 1478: (376, 152, 376, 2.1299736657737219e-43, 0), 1719: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1617: (25784, 2281, 376, -1.4411779941597184e+17, 237), 1492: (25784, 2281, 376, 2.8596493694487798e-30, 80), 1622: (25784, 2281, 376, 184686084096.0, 93), 1628: (1336, 152, 1336, 3.1691839245470052e+29, 179), 1502: (1272, 152, 1272, -5.2042207205116645e-17, 99), 1632: (1208, 152, 1208, 2.1299736657737219e-43, 0), 1468: (1144, 152, 1144, 2.1299736657737219e-43, 0), 1642: (1080, 152, 1080, 2.1299736657737219e-43, 0), 1518: (1016, 152, 1016, 4.0240902787680023e+35, 145), 1647: (952, 152, 952, -985172619034624.0, 237), 1650: (888, 152, 888, 12094787289088.0, 66), 1651: (824, 152, 824, 2.1299736657737219e-43, 0), 1653: (760, 152, 760, 0.00018310768064111471, 238), 1532: (696, 152, 696, 8.8978061885676389e+26, 125)} OK I've isolated the problem, but don't know why it's so. First, apparently what I'm pickling are not tuples (though they look like it), but instead numpy.void types. 
Here is a series to illustrate the problem. first = run0.detections[0] >>> first (1, 19, 1578, 82.637763977050781, 1) >>> type(first) <type 'numpy.void'> >>> firstTuple = tuple(first) >>> theFile = open("pickleTest", "w") >>> pickle.dump(first, theFile) >>> theTupleFile = open("pickleTupleTest", "w") >>> pickle.dump(firstTuple, theTupleFile) >>> theFile.close() >>> theTupleFile.close() >>> first (1, 19, 1578, 82.637763977050781, 1) >>> firstTuple (1, 19, 1578, 82.637764, 1) >>> theFile = open("pickleTest", "r") >>> theTupleFile = open("pickleTupleTest", "r") >>> unpickledTuple = pickle.load(theTupleFile) >>> unpickledVoid = pickle.load(theFile) >>> type(unpickledVoid) <type 'numpy.void'> >>> type(unpickledTuple) <type 'tuple'> >>> unpickledTuple (1, 19, 1578, 82.637764, 1) >>> unpickledTuple == firstTuple True >>> unpickledVoid == first False >>> unpickledVoid (7936, 1705, 56700, -1.0808743559293829e+18, 152) >>> first (1, 19, 1578, 82.637763977050781, 1)
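
    A workaround sketch based on the observation above: converting each numpy.void record to a plain tuple before pickling keeps only ordinary Python objects in the file. The dict-of-dicts shape of idMap is assumed from the excerpt:

        import pickle

        def to_plain(idmap):
            # Convert each numpy.void record into an ordinary tuple so that
            # pickle round-trips plain Python objects only.
            return dict((outer_key, dict((k, tuple(v)) for k, v in inner.items()))
                        for outer_key, inner in idmap.items())

        # plain = to_plain(idMap)                     # idMap as in the question
        # with open("idmap", "wb") as f:
        #     pickle.dump(plain, f, pickle.HIGHEST_PROTOCOL)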

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design implemented on your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are ‘yes’, then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the ‘Pareto Principle’ or 80-20 rule: it is targeted at enabling a web developer to gain 80% of the productivity with 20% of the effort with respect to learning curve and production. This project template, titled “Employee Info Starter Kit”, was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex. Release Highlights User End Functional Specification The user-end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records as described below: creating a new employee record, reading existing employee records, updating an existing employee record, deleting existing employee records. Architectural Overview Simple 3-layer architecture (presentation, business logic and data access layer) ASP.NET web form based user interface Built-in code generators for logical layers, implemented in Visual Studio's default template engine (T4) Built-in Entity Framework entities as business entities (aka data containers) Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework Domain Model design pattern based Business Logic Layer, implemented in C# Object Model for Cross-Cutting Concerns (such as validation, logging, exception management) Minimum System Requirements Visual Studio 2010 (Web Developer Express Edition) or higher Sql Server 2005 (Express Edition) or higher Technology Utilized Programming Languages/Scripts Browser side: JavaScript Web server side: C# Code Generation Template: T-4 Template Frameworks .NET Framework 4.0 JavaScript Framework: jQuery 1.5.1 CSS Framework: 960 grid system .NET Framework Components .NET Entity Framework .NET Optional/Named Parameters (new in .net 4.0) .NET Tuple (new in .net 4.0) .NET Extension Method .NET Lambda Expressions .NET Anonymous Type .NET Query Expressions .NET Automatically Implemented Properties .NET LINQ .NET Partial Classes and Methods .NET Generic Type .NET Nullable Type ASP.NET Meta Description and Keyword Support (new in .net 4.0) ASP.NET Routing (new in .net 4.0) ASP.NET Grid View (CSS support for sorting - new in .net 4.0) ASP.NET Repeater ASP.NET Form View ASP.NET Login View ASP.NET Site Map Path ASP.NET Skin ASP.NET Theme ASP.NET Master Page ASP.NET Object Data Source ASP.NET Role Based Security Getting Started Guide Seeing Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, click the C# project file (Eisk.Web.csproj) in the extracted folder to open it in Visual Studio 2010, and hit Ctrl+F5!
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition the JDK bug management from Sun's legacy system, initially to an Oracle-internal JIRA instance which is afterward made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort: JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones. Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999. We're working with the bugs.sun.com team to try to maintain continuity of the ability to both read JDK bug information as well as to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change to JIRA from the legacy database. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project. JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple release information to be easily grouped and presented together. The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include: core-libs client-libs deploy install security-libs other-libs tools hotspot For the libs areas, the primary name of the subcomponent will be the package of the API in question.
    In the core-libs component, there will be subcomponents like: java.lang java.lang.class_loading java.math java.util java.util:i18n In the tools component, subcomponents will primarily correspond to command names in $JDK/bin, like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline of when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable: # module xfactory.py import x class XFactory: _registry = {} def get_x(self, arg1, arg2, use_cache = True): if use_cache: hash_id = hash((arg1, arg2)) if hash_id in _registry: return _registry[hash_id] obj = x.X(arg1, arg2) _registry[hash_id] = obj return obj # module x.py class X: # ... Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated the existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
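
    A sketch of the in-memory part with the class-level registry referenced explicitly (the bare _registry in the snippet above would raise a NameError inside get_x, and hash_id would be undefined when use_cache is False); using the argument tuple itself as the key also avoids hash collisions:

        import x

        class XFactory(object):
            _registry = {}                      # shared, class-level cache

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    key = (arg1, arg2)          # the argument tuple itself is the key
                    if key in XFactory._registry:
                        return XFactory._registry[key]
                obj = x.X(arg1, arg2)
                if use_cache:
                    XFactory._registry[key] = obj
                return obj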

    Read the article

  • Logging in worker threads spawned from a pylons application does not seem to work

    - by TimM
    I have a pylons application where, under certain circumstances, I want to spawn multiple worker threads to process items in a queue. Right now we aren't making use of a ThreadPool (would be ideal, but we'll add that in later). The main problem is that the worker threads' logging does not get written to the log files. When I run the code outside of the pylons application the logging works fine. So I think it's something to do with the pylons log handler but not sure what. Here is a basic example of the code (trimmed down): import logging log = logging.getLogger(__name__) import sys from Queue import Queue from threading import Thread, activeCount def run(input, worker, args = None, simulteneousWorkerLimit = None): queue = Queue() threads = [] if args is not None: if len(args) > 0: args = list(args) args = [worker, queue] + args args = tuple(args) else: args = (worker, queue) # start threads for i in range(4): t = Thread(target = __thread, args = args) t.daemon = True t.start() threads.append(t) # add ThreadTermSignal inputData = list(input) inputData.extend([ThreadTermSignal] * 4) # put in the queue for data in inputData: queue.put(data) # block until all contents are downloaded queue.join() log.critical("** A log line that appears fine **") del queue for thread in threads: del thread del threads class ThreadTermSignal(object): pass def __thread(worker, queue, *args): try: while True: data = queue.get() if data is ThreadTermSignal: sys.exit() try: log.critical("** I don't appear when run under pylons **") finally: queue.task_done() except SystemExit: queue.task_done() pass Take note that the log line within the run method will show up in the log files, but the log line within the worker method (which is run in a spawned thread) does not appear. Any help would be appreciated. Thanks ** EDIT: I should mention that I tried passing in the "log" variable to the worker thread as well as redefining a new "log" variable within the thread and neither worked. ** EDIT: Adding the configuration used for the pylons application (which comes out of the INI file). So the snippet below is from the INI file. [loggers] keys = root [handlers] keys = wsgierrors [formatters] keys = generic [logger_root] level = WARNING handlers = wsgierrors [handler_console] class = StreamHandler args = (sys.stderr,) level = WARNING formatter = generic [handler_wsgierrors] class = pylons.log.WSGIErrorsHandler args = () level = WARNING format = generic
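
    One way to check whether the pylons WSGIErrorsHandler is the culprit, sketched as an assumption rather than a confirmed diagnosis: attach an ordinary FileHandler to this module's logger so worker-thread output lands in a plain file that does not depend on an active request. The path below is only an example.

        import logging

        log = logging.getLogger(__name__)

        def add_file_handler(path="/tmp/worker.log"):
            # A plain FileHandler needs no request context, unlike
            # pylons.log.WSGIErrorsHandler; the path is only an example.
            handler = logging.FileHandler(path)
            handler.setLevel(logging.DEBUG)
            handler.setFormatter(logging.Formatter(
                "%(asctime)s %(levelname)s [%(threadName)s] %(name)s %(message)s"))
            log.addHandler(handler)
            log.setLevel(logging.DEBUG)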

    Read the article

  • HOW TO: Draggable legend in matplotlib

    - by Adam Fraser
    QUESTION: I'm drawing a legend on an axes object in matplotlib but the default positioning which claims to place it in a smart place doesn't seem to work. Ideally, I'd like to have the legend be draggable by the user. How can this be done? SOLUTION: Well, I found bits and pieces of the solution scattered among mailing lists. I've come up with a nice modular chunk of code that you can drop in and use... here it is: class DraggableLegend: def __init__(self, legend): self.legend = legend self.gotLegend = False legend.figure.canvas.mpl_connect('motion_notify_event', self.on_motion) legend.figure.canvas.mpl_connect('pick_event', self.on_pick) legend.figure.canvas.mpl_connect('button_release_event', self.on_release) legend.set_picker(self.my_legend_picker) def on_motion(self, evt): if self.gotLegend: dx = evt.x - self.mouse_x dy = evt.y - self.mouse_y loc_in_canvas = self.legend_x + dx, self.legend_y + dy loc_in_norm_axes = self.legend.parent.transAxes.inverted().transform_point(loc_in_canvas) self.legend._loc = tuple(loc_in_norm_axes) self.legend.figure.canvas.draw() def my_legend_picker(self, legend, evt): return self.legend.legendPatch.contains(evt) def on_pick(self, evt): if evt.artist == self.legend: bbox = self.legend.get_window_extent() self.mouse_x = evt.mouseevent.x self.mouse_y = evt.mouseevent.y self.legend_x = bbox.xmin self.legend_y = bbox.ymin self.gotLegend = 1 def on_release(self, event): if self.gotLegend: self.gotLegend = False ...and in your code... def draw(self): ax = self.figure.add_subplot(111) scatter = ax.scatter(np.random.randn(100), np.random.randn(100)) legend = DraggableLegend(ax.legend()) I emailed the Matplotlib-users group and John Hunter was kind enough to add my solution to SVN HEAD. On Thu, Jan 28, 2010 at 3:02 PM, Adam Fraser wrote: I thought I'd share a solution to the draggable legend problem since it took me forever to assimilate all the scattered knowledge on the mailing lists... Cool -- nice example. I added the code to legend.py. Now you can do leg = ax.legend() leg.draggable() to enable draggable mode. You can repeatedly call this func to toggle the draggable state. I hope this is helpful to people working with matplotlib.

    Read the article

  • Preserving Language across inline Calculated Members in SSAS

    - by Tullo
    Problem: I need to retrieve the language of a given cell from the cube. The cell is defined by code-generated MDX, which can have an arbitrary level of indirection as far as calculated members and sets go (defined in the WITH clause). SSAS appears to ignore the Language of the specified members when you declare a calculated member inline in the query. Example: The cube's default locale is 1033 (en-US) The cube contains a Calculated Measure called [Net Pounds] which is defined as [Net Amt], language=2057 (en-GB) The query requests this measure alongside an inline calculated measure which is simply an alias to the [Net Pounds] When used directly, the measure is formatted in the en-GB locale, but when aliased, the measure falls back to using the cube default of en-US. Here's what the query looks like: WITH MEMBER [Measures].[Pounds Indirect] AS [Measures].[Net Pounds] SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS (0) FROM [Cube] CELL PROPERTIES language, value, formatted_value The query returns the expected two cells, but only uses the [Net Pounds] locale when used directly. Is there an option or switch somewhere in SSAS that will allow locale information to be visible in calculated members? I realise that it is possible to declare the inline calculated member in a particular locale, but this would involve extracting the locale from the tuple first, which (since the cube's member is isolated in the application's query schema) is unknown.

    Read the article

  • python manage.py runserver fails

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1. I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver, but this is where I get the following error: Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute translation.activate('en-us') File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate _active[currentThread()] = translation(language) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch for localepath in settings.LOCALE_PATHS: File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS' Now, I can add a LOCALE_PATHS attribute and set it to an empty tuple in my settings.py file, but then it just complains about another setting, and so on. What am I missing here?
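
    Since the LOCALE_PATHS default normally comes from django.conf.global_settings rather than the project's settings.py, a hedged first check is which Django installation manage.py actually imports:

        import django
        import django.conf.global_settings as defaults

        print(django.get_version())                 # expected: 1.1.1
        print(django.__file__)                      # which install is really imported
        print(hasattr(defaults, "LOCALE_PATHS"))    # True for a healthy install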

    Read the article

  • Migration for creating and deleting model in South

    - by Almad
    I've created a model and created initial migration for it: db.create_table('tvguide_tvguide', ( ('id', models.AutoField(primary_key=True)), ('date', models.DateField(_('Date'), auto_now=True, db_index=True)), )) db.send_create_signal('tvguide', ['TVGuide']) models = { 'tvguide.tvguide': { 'channels': ('models.ManyToManyField', ["orm['tvguide.Channel']"], {'through': "'ChannelInTVGuide'"}), 'date': ('models.DateField', ["_('Date')"], {'auto_now': 'True', 'db_index': 'True'}), 'id': ('models.AutoField', [], {'primary_key': 'True'}) } } complete_apps = ['tvguide'] Now, I'd like to drop it: db.drop_table('tvguide_tvguide') However, I have also deleted corresponding model. South (at least 0.6.2) is however trying to access it: (venv)[almad@eva-03 project]$ ./manage.py migrate tvguide Running migrations for tvguide: - Migrating forwards to 0002_removemodels. > tvguide: 0001_initial Traceback (most recent call last): File "./manage.py", line 27, in <module> execute_from_command_line() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line utility.execute() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 222, in execute output = self.handle(*args, **options) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/management/commands/migrate.py", line 91, in handle skip = skip, File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 581, in migrate_app result = run_forwards(mapp, [mname], fake=fake, db_dry_run=db_dry_run, verbosity=verbosity) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 388, in run_forwards verbosity = verbosity, File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 287, in run_migrations orm = klass.orm File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 62, in __get__ self.orm = FakeORM(*self._args) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 45, in FakeORM _orm_cache[args] = _FakeORM(*args) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 106, in __init__ self.models[name] = self.make_model(app_name, model_name, data) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 307, in make_model tuple(map(ask_for_it_by_name, bases)), File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 23, in ask_for_it_by_name ask_for_it_by_name.cache[name] = _ask_for_it_by_name(name) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 17, in _ask_for_it_by_name return getattr(module, bits[-1]) AttributeError: 'module' object has no attribute 'TVGuide' Is there a way around?

    Read the article

  • slicing a 2d numpy array

    - by MedicalMath
    The following code: import numpy as p myarr=[[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6]] copy=p.array(myarr) p.mean(copy)[:,1] Is generating the following error message: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> p.mean(copy)[:,1] IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index I looked up the syntax at this link and I seem to be using the correct syntax to slice. However, when I type copy[:,1] into the Python shell, it gives me the following output, which is clearly wrong, and is probably what is throwing the error: array([1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6]) Can anyone show me how to fix my code so that I can extract the second column and then take the mean of the second column as intended in the original code above? EDIT: Thank you for your solutions. However, my posting was an oversimplification of my real problem. I used your solutions in my real code, and got a new error. Here is my real code with one of your solutions that I tried: filteredSignalArray=p.array(filteredSignalArray) logical=p.logical_and(EndTime-10.0<=matchingTimeArray,matchingTimeArray<=EndTime) finalStageTime=matchingTimeArray.compress(logical) finalStageFiltered=filteredSignalArray.compress(logical) for j in range(len(finalStageTime)): if j == 0: outputArray=[[finalStageTime[j],finalStageFiltered[j]]] else: outputArray+=[[finalStageTime[j],finalStageFiltered[j]]] print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean() And here is the error message that is now being generated by the new code: File "mypath\myscript.py", line 1545, in WriteToOutput10SecondsBeforeTimeMarker print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean() TypeError: list indices must be integers, not tuple Second EDIT: This is solved now that I added: outputArray=p.array(outputArray) above my code. I have been at this too many hours and need to take a break for a while if I am making these kinds of mistakes.
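
    A short sketch of the intended computation: convert the list of pairs to an array first, index the column, then take the mean; the same conversion applies to outputArray in the edited code:

        import numpy as np

        myarr = [[0, 1], [0, 6]] * 9      # the same 18 rows as above
        arr = np.array(myarr)             # shape (18, 2)

        print(arr[:, 1].mean())           # 3.5 -- mean of the second column
        # np.mean(arr) reduces everything to one scalar first, which is why
        # np.mean(arr)[:, 1] fails; index the column before taking the mean.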

    Read the article

  • Using Sendkeys in python to press {F12} results in other keys pressed?

    - by ThantiK
    import time from ctypes import * import win32gui import win32com.client as comclt X = 119 Y = 53 def PILColorToRGB(pil_color): """ convert a PIL-compatible integer into an (r, g, b) tuple """ hexstr = '%06x' % pil_color # reverse byte order r, g, b = hexstr[4:], hexstr[2:4], hexstr[:2] r, g, b = [int(n, 16) for n in (r, g, b)] return (r, g, b) wsh = comclt.Dispatch("WScript.Shell") w = win32gui user = windll.LoadLibrary("c:\\windows\\system32\\user32.dll") h = user.GetDC(0) gdi = windll.LoadLibrary("c:\\windows\\system32\\gdi32.dll") while True: FG = w.GetWindowText(w.GetForegroundWindow()) #FG = Foreground window title. if FG == "World of Warcraft": rgb = (PILColorToRGB(gdi.GetPixel(h,X,Y))) #X, Y time.sleep(0.333) #don't check too often. if (rgb[0] >= 130): #While Pixel (X, Y) is Red... #print "%d %d %d" % (rgb[0], rgb[1], rgb[2]) #Debug wsh.SendKeys("{F12}") #Send a key. time.sleep(0.7) #Add some extra down-time if we send the key. else: time.sleep(5) Basically all this code does is read a pixel on the screen, and send a key (F12) if the pixel is red. But when using this code I regularly get some phantom key-code being pressed. The application I'm using this on is obviously World of Warcraft, and I have checked that all keybinds are standard keybinds. However, randomly it seems I get either an up arrow or a w pressed, which moves my character forward whenever this code executes (F12 is bound to a macro, unbound from any movement). If I press F12 with a hardware event, it does not exhibit this behavior. What in the world could be going on here?
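
    One hedged workaround (an assumption, not a confirmed diagnosis of the phantom keys): skip WScript.Shell's SendKeys, which routes the keystroke through the shell's key-name parser, and emit the F12 press directly with the Win32 keybd_event call that ctypes already exposes. A sketch:

    ```python
    # Sketch: send F12 via user32.keybd_event instead of WScript.Shell SendKeys.
    import ctypes
    import time

    VK_F12 = 0x7B             # virtual-key code for F12
    KEYEVENTF_KEYUP = 0x0002  # flag marking the key-up event

    def press_f12():
        user32 = ctypes.windll.user32
        user32.keybd_event(VK_F12, 0, 0, 0)                # key down
        time.sleep(0.05)                                   # brief hold
        user32.keybd_event(VK_F12, 0, KEYEVENTF_KEYUP, 0)  # key up

    press_f12()
    ```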

    Read the article

  • Linq to find pair of points with longest length?

    - by Chris
    I have the following code: foreach (Tuple<Point, Point> pair in pointsCollection) { var points = new List<Point>() { pair.Value1, pair.Value2 }; } Within this foreach, I would like to be able to determine which pair of points has the most significant length between the coordinates of the two points in the pair. So, let's say that points are made up of the following pairs: (1) var points = new List<Point>() { new Point(0,100), new Point(100,100) }; (2) var points = new List<Point>() { new Point(150,100), new Point(200,100) }; So I have two sets of pairs, mentioned above. Both will plot a horizontal line. I am interested in knowing what the best approach would be to find the pair of points that has the greatest distance between them, whether it is vertical or horizontal. In the two examples above, the first pair of points has a difference of 100 between the X coordinates, so that would be the pair with the most significant difference. But if I have a collection of pairs of points, where some pairs will plot a vertical line and some pairs will plot a horizontal line, what would be the best approach for retrieving the pair whose difference, again vertical or horizontal, is the greatest among all of the pairs in the collection? Thanks! Chris
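
    Language aside, the core of the answer is to take the maximum over the pairs keyed by the distance between the two points; in LINQ terms that is an OrderByDescending(...).First() or an Aggregate over that key. A short sketch of the same idea, with plain coordinate tuples standing in for the Point pairs from the question:

    ```python
    # Sketch: key each pair by its length and take the maximum.
    import math

    pairs = [((0, 100), (100, 100)), ((150, 100), (200, 100))]

    def length(pair):
        (x1, y1), (x2, y2) = pair
        return math.hypot(x2 - x1, y2 - y1)   # handles horizontal, vertical and diagonal pairs

    longest = max(pairs, key=length)
    print(longest)   # ((0, 100), (100, 100))
    ```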

    Read the article

  • What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

    - by Tom
    I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's an "obvious" list of such analogies, off the top of my head:

    program/definition       | proof
    type/declaration         | proposition
    inhabited type           | theorem
    function                 | implication
    function argument        | hypothesis/antecedent
    function result          | conclusion/consequent
    function application     | modus ponens
    recursion                | induction
    identity function        | tautology
    non-terminating function | absurdity
    tuple                    | conjunction (and)
    disjoint union           | exclusive disjunction (xor)
    parametric polymorphism  | universal quantification

    So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list. For example, here are some programming notions for which I'm unaware of pithy names in logic:

    currying | "((a & b) => c) iff (a => (b => c))"
    scope    | "known theory + hypotheses"

    And here are some logical concepts which I haven't quite pinned down in programming terms:

    primitive type?        | axiom
    set of valid programs? | theory
    ?                      | disjunction (or)

    Read the article

  • MYSQL Merging 2 results into 1 table

    - by AlphaRomeo69
    I'm building a C# program and am currently stuck at fetching data from a MySQL database and binding it to a grid view. I have been researching for a few days now but to no avail. I have 4 tables in the database: table 1 - alpha, table 2 - bravo, table 3 - charlie, table 4 - delta. Attributes of alpha (id, type, user, role), attributes of bravo (id, type, date, user), attributes of charlie (id, type, cat, doneby, comment), attributes of delta (id, type, cat, doneby). * The PK of alpha and bravo is (id). * The PK of charlie and delta is (id, type). I did a query1 before by inner joining alpha, bravo and charlie, which leads to the successful result of (id, type, date, user, role, cat, doneby, comment), and I also did a query2 before by inner joining alpha, bravo and delta, which leads to the successful result of (id, type, date, user, role, cat, doneby). Right now, I'm trying to build a query3 which will merge the results from query1 and query2 together. The result of my attempts is (id, type, date, user, role, cat, doneby, comment, id, type, date, user, role, cat, doneby). As I do not want the repeated columns, I would like advice on how to get the result to look like the one below, by placing the records from query2 as new tuples (rows) in the result table: (id, type, date, user, role, cat, doneby, comment) Thanks! P.S.: the PK would not pose a problem, due to (id, type).
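
    The usual way to stack two result sets on top of each other is UNION (or UNION ALL, if duplicate rows should be kept) rather than another join; the query that lacks comment just gets padded with NULL so both sides have the same eight columns. A sketch, held in a string for illustration, in which the join conditions are assumptions based on the stated primary keys (query1/query2 from the question are not shown, so adjust to match them):

    ```python
    # Sketch of query3: UNION ALL of the two earlier joins, with NULL filling the
    # missing comment column in the delta branch. Join conditions are guesses
    # based on the primary keys described in the question.
    query3 = """
        SELECT a.id, a.type, b.date, a.user, a.role, c.cat, c.doneby, c.comment
        FROM alpha a
        JOIN bravo   b ON b.id = a.id
        JOIN charlie c ON c.id = a.id AND c.type = a.type
        UNION ALL
        SELECT a.id, a.type, b.date, a.user, a.role, d.cat, d.doneby, NULL AS comment
        FROM alpha a
        JOIN bravo   b ON b.id = a.id
        JOIN delta   d ON d.id = a.id AND d.type = a.type
    """
    ```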

    Read the article

  • How can I do this special many2many field validation using the Django ORM?

    - by e-satis
    I have the following model: class Step(models.Model): order = models.IntegerField() latitude = models.FloatField() longitude = models.FloatField() date = DateField(blank=True, null=True) class Journey(models.Model): boat = models.ForeignKey(Boat) route = models.ManyToManyField(Step) departure = models.ForeignKey(Step, related_name="departure_of", null=True) arrival = models.ForeignKey(Step, related_name="arrival_of", null=True) I would like to implement the following check: # If there are fewer than two steps, raise ValidationError. routes = tuple(self.route.order_by("date")) if len(routes) <= 1: raise ValidationError("There must be at least two steps in the route") # save the first and the last step as departure and arrival self.departure = routes[0] self.arrival = routes[-1] # departure and arrival must at least have a date if not (self.departure.date and self.arrival.date): raise ValidationError("There must be a departure and an arrival date. " "Please set the date field for the first and last Step of the Journey") # departure must occur before arrival if self.departure.date > self.arrival.date: raise ValidationError("Departure must take place the same day or any date before arrival. " "Please set the date field accordingly for the first and last Step of the Journey") I tried to do that by overloading save(). Unfortunately, Journey.route is empty in save(). What's more, Journey.id doesn't exist yet. I didn't try django.db.models.signals.post_save, but I suppose it will fail because Journey.route is empty as well (when does this get filled anyway?). I see a solution in django.db.models.signals.m2m_changed, but there are a lot of steps (thousands), and I want to avoid performing an operation for every single one of them.
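
    A minimal sketch of the m2m_changed route, assuming the models live in an importable module (the import path below is hypothetical): the signal fires once per bulk add with the whole pk_set, not once per Step, so thousands of steps do not mean thousands of handler calls, and by the time post_add runs the route relation is actually populated, unlike inside save().

    ```python
    # Sketch: validate a Journey once its route m2m has been populated.
    # The import path is an assumption; point it at wherever Journey is defined.
    from django.core.exceptions import ValidationError
    from django.db.models.signals import m2m_changed

    from myapp.models import Journey

    def validate_route(sender, instance, action, **kwargs):
        if action != "post_add":                 # run once, after the steps were added
            return
        steps = list(instance.route.order_by("date"))
        if len(steps) <= 1:
            raise ValidationError("There must be at least two steps in the route")
        instance.departure, instance.arrival = steps[0], steps[-1]
        if not (instance.departure.date and instance.arrival.date):
            raise ValidationError("The first and last Step of the Journey need a date")
        if instance.departure.date > instance.arrival.date:
            raise ValidationError("Departure must take place on or before arrival")
        instance.save()

    m2m_changed.connect(validate_route, sender=Journey.route.through)
    ```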

    Read the article

  • Javascript with Django?

    - by Rosarch
    I know this has been asked before, but I'm having a hard time setting up JS on my Django web app, even though I'm reading the documentation. I'm running the Django dev server. My file structure looks like this: mysite/ __init__.py MySiteDB manage.py settings.py urls.py myapp/ __init__.py admin.py models.py test.py views.py templates/ index.html Where should I put the Javascript and CSS? I've tried it in a bunch of places, including myapp/, templates/ and mysite/, but none seem to work. From index.html: <head> <title>Degree Planner</title> <script type="text/javascript" src="/scripts/JQuery.js"></script> <script type="text/javascript" src="/media/scripts/sprintf.js"></script> <script type="text/javascript" src="/media/scripts/clientside.js"></script> </head> From urls.py: (r'^admin/', include(admin.site.urls)), (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': 'media'}) (r'^.*', 'mysite.myapp.views.index'), I suspect that the serve() line is the cause of errors like: TypeError at /admin/auth/ 'tuple' object is not callable Just to round off the rampant flailing, I changed these settings in settings.py: MEDIA_ROOT = '/media/' MEDIA_URL = 'http://127.0.0.1:8000/media'
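
    One likely culprit, hedged since only fragments of the project are shown: the urlpatterns excerpt is missing a comma after the media entry, so Python parses the next tuple as a call on it, which is exactly the "'tuple' object is not callable" error, and document_root should be an absolute filesystem path rather than the bare string 'media'. A sketch of the corrected urls.py for that Django version, with the media directory assumed to sit next to settings.py:

    ```python
    # Sketch of urls.py with the missing comma restored and an absolute media path.
    import os
    from django.conf.urls.defaults import patterns, include
    from django.contrib import admin

    admin.autodiscover()

    MEDIA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "media")

    urlpatterns = patterns('',
        (r'^admin/', include(admin.site.urls)),
        (r'^media/(?P<path>.*)$', 'django.views.static.serve',
            {'document_root': MEDIA_DIR}),    # note the trailing comma here
        (r'^.*$', 'mysite.myapp.views.index'),
    )
    ```

    With that in place, JavaScript and CSS files placed under mysite/media/ should resolve from the template's /media/... script tags while the dev server is running.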

    Read the article

  • Python many-to-one mapping (creating equivalence classes)

    - by Adam Matan
    Hi, I have a project converting one database to another. One of the original database columns defines the row's category. This column should be mapped to a new category in the new database. For example, let's assume the original categories are: parrot, spam, cheese_shop, Cleese, Gilliam, Palin. Now that's a little verbose for me, and I want to have these rows categorized as sketch, actor - that is, define all the sketches and all the actors as two equivalence classes. >>> monty={'parrot':'sketch', 'spam':'sketch', 'cheese_shop':'sketch', 'Cleese':'actor', 'Gilliam':'actor', 'Palin':'actor'} >>> monty {'Gilliam': 'actor', 'Cleese': 'actor', 'parrot': 'sketch', 'spam': 'sketch', 'Palin': 'actor', 'cheese_shop': 'sketch'} That's quite awkward; I would prefer having something like: monty={ ('parrot','spam','cheese_shop'): 'sketch', ('Cleese', 'Gilliam', 'Palin') : 'actors'} But this, of course, sets the entire tuple as a key: >>> monty['parrot'] Traceback (most recent call last): File "<pyshell#29>", line 1, in <module> monty['parrot'] KeyError: 'parrot' Any ideas on how to create an elegant many-to-one dictionary in Python? Thanks, Adam
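
    A minimal sketch of one way to keep the readable grouping while still getting plain one-key lookups: write the mapping class-first, then expand it once into the flat member -> class dictionary.

    ```python
    # Sketch: expand a "class -> members" table into a member -> class dict.
    classes = {
        ('parrot', 'spam', 'cheese_shop'): 'sketch',
        ('Cleese', 'Gilliam', 'Palin'): 'actor',
    }

    monty = dict((member, label)
                 for members, label in classes.items()
                 for member in members)

    print(monty['parrot'])   # sketch
    print(monty['Palin'])    # actor
    ```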

    Read the article

  • How can I make a Maybe-Transformer MaybeT into an instance of MonadWriter?

    - by martingw
    I am trying to build a MaybeT-Transformer Monad, based on the example in the Real World Haskell, Chapter Monad Transformers: data MaybeT m a = MaybeT { runMT :: m (Maybe a) } instance (Monad m) => Monad (MaybeT m) where m >>= f = MaybeT $ do a <- runMT m case a of Just x -> runMT (f x) Nothing -> return Nothing return a = MaybeT $ return (Just a) instance MonadTrans MaybeT where lift m = MaybeT $ do a <- m return (Just a) This works fine, but now I want to make MaybeT an instance of MonadWriter: instance (MonadWriter w m) => MonadWriter w (MaybeT m) where tell = lift . tell listen m = MaybeT $ do unwrapped <- listen (runMT m) return (Just unwrapped) The tell is ok, but I can't get the listen function right. The best I could come up with after 1 1/2 days of constructor origami is the one you see above: unwrapped is supposed to be a tuple of (Maybe a, w), and that I want to wrap up in a Maybe-Type and put the whole thing in an empty MonadWriter. But the compiler complains with: Occurs check: cannot construct the infinite type: a = Maybe a When generalising the type(s) for `listen' In the instance declaration for `MonadWriter w (MaybeT m)' What am I missing?

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15  | Next Page >