Search Results

Search found 19305 results on 773 pages for 'above the gods'.

Page 68/773 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • A crowded Extra-Solar system

    - by TATWORTH
    The orbiting Kepler telescope has found another unusual alien solar system. The Kepler telescope monitors stars for changes in their brightness. The resulting light curves can be seen at http://www.planethunters.org. Recently an extra-solar system with 4 stars (planets orbiting two of the stars, with the other two stars orbiting as a distant binary pair) was discovered by two "arm-chair" astronomers using the above web site. Source SPACE.com: All about our solar system, outer space and exploration

    Read the article

  • What is the keyboard shortcut to minimise a window to the launcher in Unity?

    - by uzi3k
    In 10.10, which I was using before 12.04, you could use Alt+F9 to minimise a window to the task bar. In 12.04, Meta+Ctrl+cursor up/down maximises and unmaximises a window. If you have a numeric keypad you can use Ctrl+Alt+0 to minimise to the launcher. On my netbook I don't have a numeric keypad, and the normal number keys do not work with the above shortcut. How can I minimise windows with a keyboard shortcut?

    Read the article

  • Should you promise to deliver a feature that you aren't sure is implementable?

    - by user476
    In an article from HN, I came across the following advice: "Always tell your customer/user 'yes', even if you're not sure. 90% of the time, you'll find a way to do it. 10% of the time, you'll go back and apologize. Small price to pay for major personal growth." But I've always thought that one should do a feasibility analysis before making any kind of promise to a customer/user, so that they aren't misled at any point. In what circumstances, then, is the above advice applicable?

    Read the article

  • How to Work With an SEO Company

    Is your website not attracting the number of visitors that it should? Are you sure it has been properly optimized for the search engines? Do searches for keywords relevant to your website show it in the top list of search results? If you have answered in the negative to any of the above queries, then it is high time you had a discussion with a representative of an organization that specializes in search engine optimization.

    Read the article

  • Why SEO is Important For Your Website

    There is a common perception that it is gratifying to see your website among the first search engine results when you enter the company's website name (also known as the domain name). In fact, ranking for your own domain name is completely pointless and has no value.

    Read the article

  • wrong operator() overload called

    - by user313202
    Okay, I am writing a matrix class and have overloaded the function call operator twice. The core of the matrix is a 2D double array. I am using the MinGW GCC compiler called from a Windows console. The first overload is meant to return a double from the array (for viewing an element). The second overload is meant to return a reference to a location in the array (for changing the data in that location):

        double operator()(int row, int col) const;  //allows view of element
        double &operator()(int row, int col);       //allows assignment of element

    I am writing a testing routine and have discovered that the "viewing" overload never gets called. For some reason the compiler "defaults" to calling the overload that returns a reference when the following fprintf() statement is used:

        fprintf(outp, "%6.2f\t", testMatD(i,j));

    I understand that I'm insulting the gods by writing my own matrix class without using vectors and testing with C I/O functions. I will be punished thoroughly in the afterlife, no need to do it here. Ultimately I'd like to know what is going on here and how to fix it. I'd prefer to use the cleaner looking operator overloads rather than member functions. Any ideas? -Cal

    The matrix class (irrelevant code omitted):

        class Matrix {
        public:
            double getElement(int row, int col) const;  //returns the element at row,col
            //operator overloads
            double operator()(int row, int col) const;  //allows view of element
            double &operator()(int row, int col);       //allows assignment of element
        private:
            //data members
            double **array;  //pointer to data array
        };

        double Matrix::getElement(int row, int col) const {
            //transform indices into true coordinates (from sorted coordinates)
            //only row needs to be transformed (user can only sort by row)
            row = sortedArray[row];
            result = array[usrZeroRow+row][usrZeroCol+col];
            return result;
        }

        //operator overloads
        double Matrix::operator()(int row, int col) const {
            //this overload is used when viewing an element
            return getElement(row, col);
        }

        double &Matrix::operator()(int row, int col) {
            //this overload is used when placing an element
            return array[row+usrZeroRow][col+usrZeroCol];
        }

    The testing program (irrelevant code omitted):

        int main(void) {
            FILE *outp;
            outp = fopen("test_output.txt", "w+");
            Matrix testMatD(5,7);  //construct 5x7 matrix
            //some initializations omitted
            fprintf(outp, "%6.2f\t", testMatD(i,j));  //calls the wrong overload
        }

    Read the article

  • Returning JSON from a CFFunction and appending it to a layer is causing an error

    - by Mel
    I'm using the qTip jQuery plugin to generate a dynamic tooltip. I'm getting an error in my JS, and I'm unsure if its source is the JSON or the JS. The tooltip calls the following function (sorry about all this code, but it's necessary):

        <cffunction name="fGameDetails" access="remote" returnType="any" returnformat="JSON" output="false"
                    hint="This grabs game details for the games.cfm page">
            <!---Argument, which is the game ID--->
            <cfargument name="gameID" type="numeric" required="true" hint="CFC will look for GameID and retrieve its details">
            <!---Local var--->
            <cfset var qGameDetails = "">
            <!---Database query--->
            <cfquery name="qGameDetails" datasource="#REQUEST.datasource#">
                SELECT titles.titleName AS tName, titles.titleBrief AS tBrief, games.gameID, games.titleID,
                       games.releaseDate AS rDate, genres.genreName AS gName, platforms.platformAbbr AS pAbbr,
                       platforms.platformName AS pName, creviews.cReviewScore AS rScore, ratings.ratingName AS rName
                FROM games
                Inner Join platforms ON platforms.platformID = games.platformID
                Inner Join titles ON titles.titleID = games.titleID
                Inner Join genres ON genres.genreID = games.genreID
                Inner Join creviews ON games.gameID = creviews.gameID
                Inner Join ratings ON ratings.ratingID = games.ratingID
                WHERE (games.gameID = #ARGUMENTS.gameID#);
            </cfquery>
            <cfreturn qGameDetails>
        </cffunction>

    This function returns the following JSON:

        {
          "COLUMNS": ["TNAME", "TBRIEF", "GAMEID", "TITLEID", "RDATE", "GNAME", "PABBR", "PNAME", "RSCORE", "RNAME"],
          "DATA": [
            ["Dark Void",
             "Ancient gods known as 'The Watchers,' once banished from our world by superhuman Adepts, have returned with a vengeance.",
             154, 54, "January, 19 2010 00:00:00", "Action & Adventure", "PS3", "Playstation 3", 3.3, "14 Anos"]
          ]
        }

    The problem I'm having is every time I try to append the JSON to the layer #catalog, I get a syntax error that says "missing parenthetical." This is the JavaScript I'm using:

        $(document).ready(function() {
          $('#catalog a[href]').each(function() {
            $(this).qtip({
              content: {
                url: '/gamezilla/resources/components/viewgames.cfc?method=fGameDetails',
                data: { gameID: $(this).attr('href').match(/gameID=([0-9]+)$/)[1] },
                method: 'get'
              },
              api: {
                beforeContentUpdate: function(content) {
                  var json = eval('(' + content + ')');
                  content = $('<div />').append($('<h1 />', { html: json.TNAME }));
                  return content;
                }
              },
              style: {
                width: 300,
                height: 300,
                padding: 0,
                name: 'light',
                tip: { corner: 'leftMiddle', size: { x: 40, y: 40 } }
              },
              position: {
                corner: { target: 'rightMiddle', tooltip: 'leftMiddle' }
              }
            });
          });
        });

    Any ideas where I'm going wrong? I tried many things for several days and I can't find the issue. Many thanks!
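
    For reference, ColdFusion serializes a query as parallel COLUMNS and DATA arrays rather than as objects keyed by column name, so a field such as TNAME has to be looked up by position before it can be read by name. A minimal sketch of that reshaping logic, shown here in Python purely to illustrate the data shape (the field names mirror the JSON above; this is not the author's code):

        import json

        raw = '''{"COLUMNS": ["TNAME", "TBRIEF", "GAMEID"],
                  "DATA": [["Dark Void", "Ancient gods ...", 154]]}'''

        payload = json.loads(raw)

        # Pair each DATA row with the COLUMNS list to get dict-style rows,
        # so fields can be read by name (e.g. row["TNAME"]).
        rows = [dict(zip(payload["COLUMNS"], row)) for row in payload["DATA"]]

        print(rows[0]["TNAME"])   # -> "Dark Void"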

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working in my Amazon EC2 instance, located in the EU West sector. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.

        cat /etc/redhat-release:
        Red Hat Enterprise Linux Server release 6.2 (Santiago)

        yum repolist:
        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        repo id                                        repo name                                                         status
        rhui-eu-west-1-client-config-server-6          Red Hat Update Infrastructure 2.0 Client Configuration Server 6   0
        rhui-eu-west-1-rhel-server-releases            Red Hat Enterprise Linux Server 6 (RPMs)                          0
        rhui-eu-west-1-rhel-server-releases-optional   Red Hat Enterprise Linux Server 6 Optional (RPMs)                 0
        repolist: 0

        yum update:
        (I needed to remove the base URLs below because of ServerFault's restrictions for new users)
        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again
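
    As a quick yum-independent check, the same repomd.xml URLs can be probed directly to see what status the RHUI mirrors return. A rough sketch (the URLs are copied from the output above; note that a real RHUI request also presents a client SSL certificate, which this sketch deliberately omits, so a 401/403 here is only a coarse signal, not a diagnosis):

        from urllib.request import urlopen
        from urllib.error import HTTPError, URLError

        urls = [
            "https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml",
            "https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml",
        ]

        for url in urls:
            try:
                status = urlopen(url, timeout=10).status   # 200 means the metadata path itself resolves
            except HTTPError as err:
                status = err.code                          # e.g. 401 if the mirror rejects the request
            except URLError as err:
                status = err.reason                        # DNS or connection problems
            print(status, url)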

    Read the article

  • Subnetting: is 192.168.0.1/24 different than 192.168.0.1/25?

    - by SM
    So I am trying to understand subnetting. I understand that you can take the 192.168.0.0/24 domain and break it up into 2 subnets, 192.168.0.0/25 and 192.168.0.128/25. So what if I need to create 2 more subnets inside both of those above subnets? What if I need to create 2 subnets, which have 2 more subnets inside them, and each of those 2 subnets has 2 hosts? As per my calculation, this would be something like:

        top 2 subnets:                192.168.0.0/25 and 192.168.0.128/25
        subnets of 192.168.0.0/25:    192.168.0.0/26 - 192.168.0.128/26
        subnets of 192.168.0.128/25:  192.168.0.128/26 - 192.168.0.192/26
        hosts of 192.168.0.0/25:      192.168.0.1/25 - 192.168.0.2/25
        hosts of 192.168.0.128/25:    192.168.0.129/25 - 192.168.0.130/25
        hosts of 192.168.0.0/26:      192.168.0.1/26 - 192.168.0.2/26
        hosts of 192.168.0.128/26:    192.168.0.129/26 - 192.168.0.130/26
        hosts of 192.168.0.128/26:    192.168.0.129/26 - 192.168.0.130/26
        hosts of 192.168.0.192/26:    192.168.0.193/26 - 192.168.0.194/26

    The above doesn't seem right: I am moving down like a tree, but the subnet mask and IPs are being repeated; I am just adding 1 to /x and making the IPs from there. Can anyone please tell me if this is correct? The scenario I am trying to understand is similar to this, but I just want to add hosts on each level as well. http://www.ciscotrick.com/wp-content/uploads/2008/08/the_mathematical_relationship_between_network_block_sizes.jpg
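
    For comparison, the same split can be worked out mechanically. A minimal sketch using Python's standard ipaddress module, mirroring the /24 and the two levels of subdivision described above (illustrative only):

        import ipaddress

        net = ipaddress.ip_network("192.168.0.0/24")

        # First split: two /25s (192.168.0.0/25 and 192.168.0.128/25).
        halves = list(net.subnets(prefixlen_diff=1))

        for half in halves:
            # Second split: each /25 yields two /26s. Note the /26s of 192.168.0.0/25
            # are 192.168.0.0/26 and 192.168.0.64/26, not .0/26 and .128/26.
            quarters = list(half.subnets(prefixlen_diff=1))
            print(half, "->", quarters)
            for quarter in quarters:
                # First two usable host addresses in each /26.
                hosts = list(quarter.hosts())[:2]
                print("   ", quarter, "hosts:", hosts)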

    Read the article

  • Outlook unable to synchronize SharePoint library - error 0x80004005

    - by DLux
    We have one large library (~10 GB) on SharePoint that cannot be synchronized with Outlook, even if you only attempt to synch one of the smaller sub folders in the library. Other libraries (or other library sub folders) work fine with Outlook. This is with MOSS 2007 SP1 and Outlook 2007 SP2. The error is: Task 'SharePoint' reported error (0x80004005): 'An error occurred either in Outlook or SharePoint. Contact the SharePoint site administrator.'

    Reproducing the error:
    1. Open up the large SharePoint document library in Internet Explorer.
    2. From the Actions menu, select Connect to Outlook.
    3. Select Allow on the stssync: security warning that pops up.
    4. Outlook automatically tries an initial sync and the sync status immediately shows the above error.

    Update 1: I verified the same issue occurs on Windows XP SP3 with IE 6 using Outlook 2007 SP2 and the same SharePoint library (it was originally tested on Windows 7). The issue is definitely related to the library or Outlook.

    Update 2: Using stsadm I exported the site with this large document library (8.6 GB, 15,000 items) and imported it on to a development system. The result is the same on the development system - multiple clients are unable to connect Outlook to the library and get the 0x80004005 error above.

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0 # min -15, max 2 #client_encoding = sql_ascii # actually, defaults to database # encoding # These settings are initialized by initdb -- they might be changed lc_messages = 'C' # locale for system error message # strings lc_monetary = 'C' # locale for monetary formatting lc_numeric = 'C' # locale for number formatting lc_time = 'C' # locale for time formatting # - Other Defaults - #explain_pretty_print = on #dynamic_library_path = '$libdir' #local_preload_libraries = '' #--------------------------------------------------------------------------- # LOCK MANAGEMENT #--------------------------------------------------------------------------- #deadlock_timeout = 1s #max_locks_per_transaction = 64 # min 10 # (change requires restart) # Note: each lock table slot uses ~270 bytes of shared memory, and there are # max_locks_per_transaction * (max_connections + max_prepared_transactions) # lock table slots. #--------------------------------------------------------------------------- # VERSION/PLATFORM COMPATIBILITY #--------------------------------------------------------------------------- # - Previous Postgres Versions - #add_missing_from = off #array_nulls = on #backslash_quote = safe_encoding # on, off, or safe_encoding #default_with_oids = off #escape_string_warning = on #standard_conforming_strings = off #regex_flavor = advanced # advanced, extended, or basic #sql_inheritance = on # - Other Platforms & Clients - #transform_null_equals = off #--------------------------------------------------------------------------- # CUSTOMIZED OPTIONS #--------------------------------------------------------------------------- #custom_variable_classes = '' # list of custom variable class names SELECT * FROM pg_stat_activity; datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------+------------- 16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892 16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893 16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894 16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895 16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896 16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897 16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898 16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899 16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900 16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901 16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902 16384 | hqdb | 
3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1 (21 rows)

    Read the article

  • Having trouble redirecting frevvo using mod_proxy

    - by user38859
    This question is similar to this: http://serverfault.com/questions/102868/how-to-access-webservers-running-on-ports-blocked-on-companys-network

    Basically, I'm using Confluence and a plugin called frevvo. Confluence sits on port 8080 while frevvo sits on port 8082. I want to redirect both of them to port 80 via the Apache HTTP web server so that they don't get blocked by company proxies. I've been using the document on Atlassian that shows me how to run Confluence behind Apache (I can't post a second URL due to being a newbie here). I've successfully redirected Confluence from port 8080 to port 80, so I can now access Confluence using www.example.com/confluence. Now I tried doing the same thing for frevvo with the following configurations.

    Apache httpd:

        ProxyRequests Off
        ProxyPreserveHost On

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        ProxyPass /confluence http://localhost:8080/confluence
        ProxyPassReverse /confluence http://localhost:8080/confluence
        <Location /confluence>
            Order allow,deny
            Allow from all
        </Location>

        ProxyPass /frevvo http://localhost:8082/
        ProxyPassReverse /frevvo http://localhost:8082/
        <Location /forms>
            Order allow,deny
            Allow from all
        </Location>

    And in server.xml for the frevvo Tomcat instance, I added the following within the <Host> tag:

        <Context path=" " docBase="" debug="0" reloadable="false">
            <!-- Logger is deprecated in Tomcat 5.5. Logging configuration for Confluence
                 is specified in confluence/WEB-INF/classes/log4j.properties -->
            <Manager pathname="" />
        </Context>

    The plugin, frevvo, when accessed through the browser using http://localhost:8082, usually redirects to http://localhost:8082/frevvo/web. With the above configuration, accessing www.example.com.au/frevvo redirects to www.example.com/frevvo/web/static/login - which doesn't work. I hope the above details are clear, and I appreciate anyone who could give us some insight.

    Read the article

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix.

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    delivers the correct entry. The LDAP config file looks like:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work but produces the error printed above (although ldapsearch works, only postmap doesn't). Why doesn't it work when binding with the postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
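
    One way to narrow this down outside of Postfix is to repeat the lookup while bound as the postfix DN and check whether that identity can see the search base entry itself, since LDAP error 32 (noSuchObject) usually means the base entry is not visible to the bound identity. A minimal sketch using the ldap3 Python library (the DNs, password and filter are copied from the config above; the [domain] placeholder and the mail address still need real values):

        from ldap3 import Server, Connection, BASE, SUBTREE

        BIND_DN = "cn=postfix,ou=users,ou=system,[domain]"   # fill in the real domain components
        BASE_DN = "ou=users,ou=people,[domain]"

        conn = Connection(Server("localhost"), user=BIND_DN, password="postfix", auto_bind=True)

        # Can the bound identity see the search base entry at all?
        base_visible = conn.search(BASE_DN, "(objectClass=*)", search_scope=BASE)
        print("base entry visible:", base_visible, conn.result.get("description"))

        # The same lookup Postfix performs for a mail address.
        found = conn.search(BASE_DN, "(&(objectClass=inetOrgPerson)(mail=someone@example.com))",
                            search_scope=SUBTREE, attributes=["uid"])
        print("entries:", conn.entries if found else conn.result.get("description"))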

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried.

    The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. Decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.

    Read the article

  • Intel Wireless 4965AGN not achieving N throughput when connected to an Airport Express N network

    - by BenA
    I have an Intel Wireless WiFi Link 4965AGN adaptor in my laptop (HP Pavillion dv2000 series) which is connecting to a 5GHz-only 802.11n network provided by an Apple Airport Express. The network is using WPA2 encryption. My desktop is also connected to the Airport, via a Linksys WUSB600N USB adaptor. Both are running with the latest drivers, and the Airport is running the latest firmware. The Airport is also configured to use wide channels. The problem I have is that I never get throughput above 4MB/s when transferring files between the two machines. Even a pessimistic calculation shows a 270Mbps network as being capable of transfer rates well above 10MB/s. I'm pretty sure I've isolated the issue to the Intel adaptor, as wiring the desktop to the AP and using the Linksys adaptor on the laptop immediately yielded speeds limited by the 100MB/s ethernet connection. I know that 802.11n is still a draft standard, and so mixing kit from different manufacturers can easily lead to poor results, but I was just wondering if anybody else out there has had success with this Intel adaptor on an N network? Or even better, connecting it to an Airport Express? Can anybody give me any advice on how to troubleshoot this issue? I should also mention that the Airport Express doesn't allow you to manually specify channels when running in N mode, and that I've been able to rule out interference from other wireless LANs by scanning. There aren't any other 5GHz networks in my area. All ideas welcome!

    Update: A while later, I've just updated to the most recent drivers for both the Intel chip in the laptop and the USB adaptor. Unfortunately this hasn't improved things :(. If anybody has any advice it would be gratefully received.
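
    For reference, the rough expectation quoted above works out as follows; a back-of-the-envelope sketch in which the 50% efficiency figure is an assumption (real-world 802.11n MAC overhead varies):

        link_rate_mbps = 270        # negotiated 802.11n PHY rate
        efficiency = 0.5            # assumed real-world protocol efficiency
        throughput_mbytes = link_rate_mbps * efficiency / 8
        print(f"{throughput_mbytes:.1f} MB/s expected")   # ~16.9 MB/s, well above the observed 4 MB/s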

    Read the article

  • Splitting a raidctl mirror safely

    - by milkfilk
    I have a Sun T5220 server with the onboard LSI card and two disks that were in a RAID 1 mirror. The data is not important right now, but we had a failed disk and are trying to understand how to do this for real if we had to recover from a failure. The initial situation looked like this:

        # raidctl -l c1t0d0
        Volume  Size    Stripe  Status    Cache  RAID
                Sub     Size                     Level
                Disk
        ----------------------------------------------------------------
        c1t0d0  136.6G  N/A     DEGRADED  OFF    RAID1
                0.1.0   136.6G            GOOD
                N/A     136.6G            FAILED

    Green light on the 0.0.0 disk. A find / lights up the 0.1.0 disk. So I know I have a bad drive and which one it is. The server still boots, obviously. First, we tried putting a new disk in. This disk came from an unknown source. format would not see it, cfgadm -al would not see it, so raidctl -l would not see it. I figure it's bad. We tried another disk from another spare server:

        # raidctl -c c1t1d0 c1t0d0     (where t1 is my good disk - 0.1.0)
        Disk has occupied space.

    Also the different syntax options don't change anything:

        # raidctl -C "0.1.0 0.0.0" -r 1 1
        Disk has occupied space.
        # raidctl -C "0.1.0 0.0.0" 1
        Disk has occupied space.

    Ok. Maybe this is because the disk from the spare server had a RAID 1 on it already. Aha, I can see another volume in raidctl:

        # raidctl -l
        Controller: 1
                Volume:c1t1d0      (this is my server's root mirror)
                Volume:c1t132d0    (this is the foreign root mirror)
                Disk: 0.0.0
                Disk: 0.1.0
                ...

    No problem. I don't care about the data, I'll just delete the foreign mirror.

        # raidctl -d c1t132d0
        (warning about data deletion but it works)

    At this point, /usr/bin/ binaries freak out. By that I mean, ls -l /usr/bin/which shows 1.4k but cat /usr/bin/which gives me a newline. Great, I just blew away the binaries (ie: binaries in mem still work)? I bounce the box. It all comes back fine. WTF. Anyway, back to recreating my mirror.

        # raidctl -l
        Controller: 1
                Volume:c1t1d0      (this is my server's root mirror)
                Disk: 0.0.0
                Disk: 0.1.0
                ...

    Man says that you can delete a mirror and it will split it. Ok, I'll delete the root mirror.

        # raidctl -d c1t0d0
        Array in use.     (this might not be the exact error)

    I googled this and found of course you can't do this (even with -f) while booted off the mirror. Ok. I boot cdrom -s and deleted the volume. Now I have one disk that has a type of "LSI-Logical-Volume" on c1t1d0 (where my data is) and a brand new "Hitachi 146GB" on c1t0d0 (what I'm trying to mirror to):

        (booted off the CD)
        # raidctl -c c1t1d0 c1t0d0     (man says it's source destination for mirroring)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" -r 1 1     (alt syntax per man)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" 1     (assumes raid1, no help)
        Illegal Array Layout.

    Same size disks, same manufacturer, but I did delete the volume instead of throwing in a blank disk and waiting for it to resync. Maybe this was a critical error. I tried selecting the type in format for my good disk to be a plain 146GB disk, but it resets the partition table, which I'm pretty sure would wipe the data (bad if this was production). Am I boned? Anyone have experience with breaking and resyncing a mirror? There's nothing on Google about "Illegal Array Layout", so here's my contrib to the search gods.

    Read the article

  • Java Plugin a huge security risk? How to preserve the Java plugin from privilege escalation?

    - by Johannes Weiß
    Installing a regular Java plugin is IMHO a real security risk for non-IT people. Normally Java applets run in a sandbox and the applet cannot do anything harmful to your computer. If an applet, however, needs to do something like read-only access to your filesystem, e.g. uploading an image, you have to give it more privileges. Usually that's ok, but I think not everyone knows that you give the applet the same privileges to your computer as your user has! And that's everything Java asks you. That looks as 'harmful' as a self-signed SSL certificate on a random page where no sensitive data is exchanged. The user will click on Run! You can try that at home using JyConsole, that's Jython (Python on Java)! Simply type in Python code, e.g.

        import os
        os.system('cat /etc/passwd')

    or worse (DON'T TYPE IN THAT CODE ON YOUR COMPUTER!!!)

        import os
        os.system('rm -rf ~')

    ...

    Does anyone know how you can disable the possibility of privilege escalation? And by the way, does anyone know why SUN displays only a dialog as harmless as the one shown above (the self-signed-SSL-certificate dialog from Firefox 3 and above is much clearer here!)? Live sample from my computer:

    Read the article

  • OpenPGP does not work in my Thunderbird installation

    - by zerozero
    Hello community, as mentioned above - I encountered serious troubles. Here are the versions of the related software: SuSE 11.2, Thunderbird v3.1.6 (released October 27, 2010), Firefox 3.6 v3.6.12. I created an installation with its own partition both for the user and for the TB mails. For a new installation on other hardware I took these partitions. When I wanted to read the emails, I got an error message like this:

        The GPG-agent for your GnuPG-version 2.0.12 couldn't get started

    Further, I got an error message for the access to services of Enigmail. The file

        jar:file:///usr/lib/mozilla/extensions/{3550f703-e582-4d05-9a08-453d09bdfdc6}/{847b3a00-7ab1-11d4-8f02-006008948af5}/chrome/enigmail.jar!/locale/de-DE/enigmail/help/initError.html

    couldn't be found. I found out that this path comes neither from TB nor from Firefox, but from Enigmail. I installed several (un-)pack programs; the only effect was that the OpenPGP entry appeared in the TB menu. The errors described above repeat at every attempt to read an email. I deleted and re-installed Enigmail, but the errors don't disappear. What can I do to get rid of these error messages? Thanks in advance

    Read the article

  • Broken Creative DTT3500 Digital

    - by djechelon
    Hello, many years ago I bought a Creative DTT3500 Digital surround speaker system. It's been a while since my home theatre started behaving strangely. During the past 6 months, I have noticed that the amplifier needed some time before correctly playing digital audio. Until a couple of weeks ago, it needed up to 5 minutes of very noisy "warm up" before playing an "almost" clear audio. Today, I found that even using the analog input (completely disabling digital inputs) the noise persists. Powering the speakers off and on doesn't work. Now, it's reasonable that after almost 10 years, the amplifier got broken. My problem is that I can't simply replace the speakers with another kit from another brand, since 2 speakers in my room are placed in the back wall, with coaxial cables going under the floor but not completely inside a single tube. I mean, I have coaxial extension cables starting from the DTT3500 amplifier, going into the wall, then under the floor in a plastic cable tube, then exiting the tube (still under the floor), connecting to the male coaxial connector of the original cable supplied by Creative, and continuing up to the point close to my bed where the speakers are. I could replace the speakers only if the new speakers have the same coaxial to bi-polar cables that Creative ships with the DTT3500. Another option might be repairing the amplifier, but I don't know if it's feasible, what I or a technician should look for in the amplifier, and how much it would cost (is it worth repairing, or should I buy a new kit considering the problem above?). What do you suggest me to do, based on the above observations? Thank you.

    Read the article

  • Random Sampling in Excel

    - by bonsvr
    I have an Excel sheet as follows:

        NO   NAME   AMOUNT
        1    A      50
        1    B      50
        2    A      100
        2    C      100
        3    D      70
        3    B      70
        4    A      30
        4    F      30
        5    C      150
        5    G      150
        .    .      .
        .    .      .

    There are, let's say, 10,000 rows. I want to get a random sample of the rows. There are 2 conditions:

    1. Sampling must be based on the "NO" column.
    2. The size of the sample is determined by the user: it can be 5%, 10% or 20%.

    For example, one decides to randomly choose 20% of the total rows in the above example. The result is like:

        NO   NAME   AMOUNT
        2    A      100
        2    C      100
        90   Z      500
        90   E      500
        .    .      .
        .    .      .

    There should be 2,000 rows. I don't know whether my question is too specific. I am new to Excel VBA, and I faced a situation like this. The above process is about getting a random sample from an account ledger for auditing purposes.
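
    Outside of VBA, the grouping-then-sampling logic looks roughly like this; a minimal Python sketch in which the rows list is a stand-in for the worksheet data and the 20% figure mirrors the example above (illustrative only, not a VBA answer):

        import random

        # Stand-in for the worksheet rows: (NO, NAME, AMOUNT)
        rows = [(1, "A", 50), (1, "B", 50), (2, "A", 100), (2, "C", 100),
                (3, "D", 70), (3, "B", 70), (4, "A", 30), (4, "F", 30),
                (5, "C", 150), (5, "G", 150)]

        sample_fraction = 0.20                        # chosen by the user: 0.05, 0.10 or 0.20

        numbers = sorted({no for no, _, _ in rows})   # distinct values of the NO column
        picked = set(random.sample(numbers, round(len(numbers) * sample_fraction)))

        # Keep every row whose NO was picked, so each sampled group stays intact.
        sampled_rows = [row for row in rows if row[0] in picked]
        for row in sampled_rows:
            print(row)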

    Read the article

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back-end is MS SQL Server 2005. Every week or two weeks the site begins running slower - and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        select text, wait_time, blocking_session_id AS "Block", percent_complete, *
        from sys.dm_exec_requests
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        order by start_time asc

    Which is fairly useful... it gives a snapshot of everything that's running right at that moment against your SQL server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor is refusing to load (I'm sure some of you have been there), this query still returns and you can see what query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue - they are ALL running slower across the board. If I restart the MS SQL service then everything is fine, it speeds right up - for a week or two until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    --Added
    Please note that when this database slowdown happens it doesn't matter if we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time); the queries all take a longer time to complete than normal. The server isn't really under stress - the CPU isn't high, the disk usage doesn't seem to be out of control... it feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As far as pasting results of the query above, I really can't do that. The query lists the login of the user performing the task, the entire query, etc. etc., and I'd really rather not hand out the names of my databases, tables, columns and the logins online :)... I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
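
    One way to catch the onset of such a slowdown is to log that same DMV snapshot on a schedule and compare how long the usual queries take from week to week. A rough sketch using the pyodbc library (the server name, driver string and Trusted_Connection setting are placeholders to adjust for your environment; this is a monitoring aid, not a fix):

        import time
        import pyodbc

        CONN_STR = "DRIVER={SQL Server};SERVER=MYSQLHOST;DATABASE=master;Trusted_Connection=yes"

        SNAPSHOT_SQL = """
        select r.session_id, r.wait_time, r.blocking_session_id, r.total_elapsed_time, t.text
        from sys.dm_exec_requests r
        cross apply sys.dm_exec_sql_text(r.sql_handle) t
        """

        while True:
            with pyodbc.connect(CONN_STR) as conn:
                rows = conn.cursor().execute(SNAPSHOT_SQL).fetchall()
            # Append a timestamped snapshot so the onset of the slowdown can be pinned down later.
            with open("dm_exec_requests.log", "a") as log:
                for row in rows:
                    log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t{list(row)}\n")
            time.sleep(300)   # every five minutes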

    Read the article

  • Do I need liquid cooling?

    - by Mrrvomun
    I'm building a computer mainly for gaming and developing games. It's going to be a three screen system with two GeForce GTX 460's and a quad-core i7. The newegg wattage calculator says I need around 900W. The case I intend to get is this one. Full specs if you need 'em are at the end of this post. I have no intentions to overclock the system at the moment, but this may change in the future. I've done a lot of research on the subject, and the answers I've found indicate that it takes a heck of a lot of power to require liquid cooling, and most non-overclocked systems don't need it. But I haven't seen a question about a system with two GPU's, so I ask you the following two questions: Assuming that the system is used for gaming for very extended periods of time (say 4-6 hours at a time, nonstop) with all three screens running at full 1080p, would the fans installed in the system suffice? Or would I need more fans and/or liquid cooling? If the system is used under the same circumstances as above, and is overclocked to a reasonable level, would the fans installed in the system suffice? Or would I need more fans and/or liquid cooling? Specs: Intel Core i7 16GB DDR3 Two nVIDIA GeForce GTX 460 3TB HDD Two DVD-RW writers Thermaltake 1050W power supply Case is linked above

    Read the article

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, my company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following:

        In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are:
        - Increasing the available memory available to Fisheye (see the document above).
        - Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
        - Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process)

    (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks!

    Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >