Search Results

Search found 3407 results on 137 pages for 'automatic logon'.

Page 102 of 137

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it will (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap and write-sharing is very expensive. We can also pack together fields that tend to be written by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses: fields that are accessed closely together in time should be placed close together in space to promote cache locality, which is to say that temporal locality should condition spatial locality. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic, so we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses, we can end up with excessive coherency misses when running in a parallel environment. There's no single layout that is optimal for both single-threaded and multithreaded environments, and the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult, as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
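    As a concrete illustration, here is a minimal sketch of how the annotation is applied (my example, not from the post; the annotation later shipped in JDK 8 as sun.misc.Contended and now lives in jdk.internal.vm.annotation, so application code needs the internal package exported and -XX:-RestrictContended for it to take effect):

        import jdk.internal.vm.annotation.Contended;

        class HotAndCold {
            // Hot, frequently written fields: giving them the same group tag
            // lays them out together, padded away from everything else, so
            // writer threads don't invalidate the line holding the cold fields.
            @Contended("hot")
            volatile long hits;
            @Contended("hot")
            volatile long lastUpdateNanos;

            // Cold, mostly read-only fields: read-sharing these is cheap.
            int configFlags;
            String name;
        }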

    Read the article

  • Security Alert for CVE-2012-4681 Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice again! Oracle has just released Security Alert CVE-2012-4681 to address 3 distinct but related vulnerabilities and one security-in-depth issue affecting Java running in desktop browsers. These vulnerabilities are: CVE-2012-4681, CVE-2012-1682, CVE-2012-3136, and CVE-2012-0547. These vulnerabilities are not applicable to standalone Java desktop applications or Java running on servers, i.e. they do not affect any Oracle server-based software. Vulnerabilities CVE-2012-4681, CVE-2012-1682, and CVE-2012-3136 have each received a CVSS Base Score of 10.0. This score assumes that the affected users have administrative privileges, as is typical in Windows XP. Vulnerability CVE-2012-0547 has received a CVSS Base Score of 0.0 because it is not directly exploitable in typical user deployments, but Oracle has issued a security-in-depth fix for this issue as it can be used in conjunction with other vulnerabilities to significantly increase the overall impact of a successful exploit. If successfully exploited, these vulnerabilities can provide a malicious attacker with the ability to plant discretionary binaries on the compromised system, e.g. they can be exploited to install malware, including Trojans, onto the targeted system. Note that this malware may in some instances be detected by current antivirus signatures upon its installation. Due to the high severity of these vulnerabilities, Oracle recommends that customers apply this Security Alert as soon as possible. Furthermore, note that the technical details of these vulnerabilities are widely available on the Internet, and Oracle has received external reports that these vulnerabilities are being actively exploited in the wild. Developers should download the latest release at http://www.oracle.com/technetwork/java/javase/downloads/index.html. Java users should download the latest release of the JRE at http://java.com, and of course Windows users can take advantage of Java Automatic Update to get the latest release. For more information: The Advisory for Security Alert CVE-2012-4681 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2012-4681-1835715.html. Users can verify that they're running the most recent version of Java by visiting http://java.com/en/download/installed.jsp. Instructions on removing older (and less secure) versions of Java can be found at http://java.com/en/download/faq/remove_olderversions.xml.

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet: 1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth. 2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection; the data should still resolve properly later, when the data syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized. One idea I have been toying with is logically splitting the integer key into two parts: the high 4 or 5 bits could be used as some sort of device id, while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable, since I don't need to deal with millions of devices; I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
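    One way to picture the split-key idea is the sketch below (a hypothetical illustration in Java rather than the app's Objective-C; it assumes a 5-bit device id in the range 0-31 that each device claims exactly once, for example through the iCloud key-value store):

        // Hypothetical split key: top 5 bits carry a per-device id, the
        // remaining 58 bits a locally incremented counter, so two devices
        // can never mint the same 64-bit key. The sign bit stays clear.
        final class KeyFactory {
            private static final int DEVICE_BITS = 5;
            private static final int SHIFT = 63 - DEVICE_BITS;          // 58
            private static final long COUNTER_MASK = (1L << SHIFT) - 1;

            private final long deviceId;   // 0..31, unique per device
            private long counter;          // last counter value issued locally

            KeyFactory(long deviceId, long lastCounter) {
                this.deviceId = deviceId;
                this.counter = lastCounter;
            }

            long nextKey() {
                counter = (counter + 1) & COUNTER_MASK;
                return (deviceId << SHIFT) | counter;
            }

            static long deviceOf(long key) { return key >>> SHIFT; }
        }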

    Read the article

  • How to avoid oscillation by async event based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state as described by the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected back from the hub or from the other data sources, and that can put the system into infinite oscillation (asynchronously) or an infinite loop (synchronously). For example, with A and B as data sources and H as a hub: A -> H -> A is a reflection from the hub, and A -> H -> B -> H -> A is a reflection from another data source. Synchronously it is relatively easy to solve this issue: you can compare the current state with the event, and if they are equal, you don't change the state and don't raise the same event again. Asynchronously I have not found a solution yet. The state comparison does not work with async event handling, because there is only eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example: A(*->x) -> H -> B(y->x) can go in parallel with B(*->y) -> H -> A(x->y), so first A changes to state x while B changes to state y, then B changes to state x while A changes to state y, and so on for eternity. Do you think there is an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.? Update: I don't think I can make this work without a lot of effort. This problem is the same one we have when syncing multiple databases in a distributed system, so I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
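    For what it's worth, one standard constraint from distributed sync (my sketch, not something proposed in the question) is to stamp every event with its origin and a per-origin version, and have each source drop events that are not strictly newer than what it already applied from that origin. Reflections and stale echoes then die out; concurrent conflicting writes additionally need a deterministic tie-break, such as last-writer-wins on (version, originId):

        import java.util.HashMap;
        import java.util.Map;

        class Event {
            final String originId;   // which data source produced the state
            final long version;      // per-origin, strictly increasing
            final Object state;
            Event(String originId, long version, Object state) {
                this.originId = originId; this.version = version; this.state = state;
            }
        }

        class DataSource {
            private final String id;
            private long myVersion = 0;
            private final Map<String, Long> lastSeen = new HashMap<>();

            DataSource(String id) { this.id = id; }

            Event publish(Object state) {
                return new Event(id, ++myVersion, state);   // stamp origin + version
            }

            boolean apply(Event e) {
                if (e.originId.equals(id)) return false;    // own event, reflected back
                long last = lastSeen.getOrDefault(e.originId, 0L);
                if (e.version <= last) return false;        // stale echo, drop it
                lastSeen.put(e.originId, e.version);
                // ... update local state from e.state; do NOT republish here ...
                return true;
            }
        }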

    Read the article

  • Variable number of GUI Buttons

    - by Wakaka
    I have a generic HTML5 Canvas GUI Button class and a Scene class. The Scene class has a method called createButton(), which will create a new Button with an onclick parameter and store it in a list of buttons. I call createButton() for all UI buttons when initializing the Scene. Because buttons can appear and disappear very often during rendering, the Scene first deactivates all buttons (temporarily removing their onclick, onmouseover, etc. properties) before each render frame. During rendering, the renderer then activates the required buttons for that frame. The problem is that part of the UI requires a variable number of buttons, and their onclick, onmouseover, etc. properties change frequently. An example is a buffs system. The UI will list all buffs as square sprites for the currently selected unit, and mousing over each square will bring up a tooltip with some information on the buff. But the number of buffs is variable, so I won't know how many buttons to create at the start. What's the best way to solve this problem? (P.S. My game is in Javascript, and I know I can use HTML buttons, but I would like to make my game purely Canvas-based.)

    1. Create buttons on the fly during rendering. Thus I will only have buttons when I require them. After the render frame these buttons would be useless and removed.
    2. Create a fixed set of buttons, assuming the number of buffs per unit won't exceed some limit. During each render frame, activate the buttons accordingly and set their onmouseover property.
    3. Assign a button to each Buff instance. This sounds wrong, as the buff button is part of the GUI, which can only have one unit selected; assigning a button to every single Buff in the game seems to be overkill. Also, I would need to change the button's position every render frame, since its order in the unit's list of buffs matters.

    Any other solutions? I'm actually quite for idea (1), but am worried about the memory/time cost of creating a new Button() object every render frame (see the sketch below). But this is in Javascript, where object creation is oh-so-common ({} mainly) due to automatic garbage collection. What is your take on this? Thanks!
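    A common middle ground between (1) and (2) is an object pool (sketched here in Java for consistency with the other examples in this listing; the pattern ports directly to JavaScript): buttons are acquired during rendering and all released before the next frame, so steady-state frames allocate nothing:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        class ButtonPool {
            private final Deque<Button> free = new ArrayDeque<>();
            private final List<Button> live = new ArrayList<>();

            // Reuse a spare button if one exists, otherwise grow the pool.
            Button acquire(int x, int y, Runnable onClick) {
                Button b = free.isEmpty() ? new Button() : free.pop();
                b.reset(x, y, onClick);
                live.add(b);
                return b;
            }

            // Call once per frame, before rendering, so every live button
            // returns to the free list instead of being garbage-collected.
            void releaseAll() {
                free.addAll(live);
                live.clear();
            }
        }

        class Button {
            int x, y;
            Runnable onClick;
            void reset(int x, int y, Runnable onClick) {
                this.x = x; this.y = y; this.onClick = onClick;
            }
        }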

    Read the article

  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off, I'll state my system specs: Acer Aspire M1100, Windows Vista Home Basic 32-bit OEM, 2 GB of DDR2 RAM, 160 GB hard drive, 2.7 GHz AMD Athlon 64 processor, ATI Radeon X1250 graphics card.

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0), after which, on shutting down and starting up, I would receive a BSOD with the information BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), whereupon my computer would not start up with any USB devices plugged in and always required me to run startup repair before it started. When I first got it to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows solutions center brought up on my screen; I restarted and installed the fix, hoping it would stop the problem. It did not. Upon starting up after installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found the startup repair could not fix the problem, and I restored, as I most likely should have done in the first place.

    After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. So, in an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this, my computer would no longer even start up: when I boot normally, it gets to just when the loading screen finishes and then my computer restarts, and when I run startup repair, it runs for about 15 seconds and then the computer restarts as well. I have tried running Ubuntu live discs in order to simply access the drive and copy over the 2-month-old physical backup I have of my C drive, but whenever I run the live OS, the computer again restarts at the end of the load screen, just before it boots. Yet if I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system, just partition managing, and when I attempt to access the Internet through it, the computer once again restarts. So my question is: how could the update, and what has happened since, prevent me from running the live OSes properly?

    Read the article

  • ????????????????

    - by Yusuke.Yamamoto
    ????·???? ?????:??????·?????? ~???????????Platinum???????????~ Pickup!:????????????????????? ???????|??????????|???????? ????????:?Oracle DB?????????????????????Windows?VMware?? ???? Oracle Technology Network, ????/????, ??IT???????·?????????????? View RSS feed ????? ????? Oracle?????????????????????????!???????????? View RSS feed ????? ???? ????????? Oracle Database ?????? ??????? ?????????(????????, ???, etc) ????????(???, REDO, ????????, etc) ????·????????????????? ????·?????????(??, etc) ????????????? ???????????????????·?????? ??????? ???? ????????·??SQL Server Windows Server ??????????PL/SQL|Java|.NET|PHP ??/??? ORACLE MASTER ???? DWH(?????????)??·?? ????? ?????(SAN, NAS, SSD, etc) ??·??????? Oracle Database Oracle Database 11g Release 2(11gR2) Oracle Database Standard Edition ????????: Advanced Compression ?????????: Advanced Security Application Express(APEX) Automatic Storage Management(ASM) SSD???Oracle???: Database Smart Flash Cache ??????????: Data Guard Data Pump Oracle Data Provider for .NET(ODP.NET) Partitioning(???????/?????????) DB????: Real Application Clusters(RAC) Real Application Testing Recovery Manager(RMAN) SQL*Loader|SQL*Plus|Statspack ??????|????????|???????? Amazon EC2|Microsoft Excel MSFC/MSCS(Microsoft Cluster Service) Exadata|Database Firewall SQL Developer ?????DB: TimesTen In-Memory Database Oracle Fusion Middleware Java Oracle Coherence Oracle Data Integrator(ODI) Oracle GoldenGate Oracle JRockit JVM Oracle WebLogic Server Oracle Enterprise Manager ????????????: Oracle Application Testing Suite Oracle Solaris DTrace|ZFS|???/???? Oracle VM Server for x86 ?????? ???????? ?????????Oracle???????????????·????????????????? ?????????(??·??????) Oracle Direct Seminar(?????????) OTN??????(??????) ???????(????????) Oracle University(??) ??????! View RSS feed ????? ?????? ????? ?????? ?????|?Sun?? ???????? OTN???????? OTN(????) ?????? ???? OTN???|???? OTN????? ?????? ?????? ???????? ???? ???????

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0 # min -15, max 2
#client_encoding = sql_ascii # actually, defaults to database encoding
# These settings are initialized by initdb -- they might be changed
lc_messages = 'C' # locale for system error message strings
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
# - Other Defaults -
#explain_pretty_print = on
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10 # (change requires restart)
# Note: each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------
# - Previous Postgres Versions -
#add_missing_from = off
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#standard_conforming_strings = off
#regex_flavor = advanced # advanced, extended, or basic
#sql_inheritance = on
# - Other Platforms & Clients -
#transform_null_equals = off
#---------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#---------------------------------------------------------------------------
#custom_variable_classes = '' # list of custom variable class names

SELECT * FROM pg_stat_activity;

 datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port
-------+---------+---------+----------+---------+---------------+---------+-------------------------------+-------------------------------+-------------+-------------
 16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892
 16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893
 16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894
 16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895
 16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896
 16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897
 16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898
 16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899
 16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900
 16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901
 16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902
 16384 | hqdb | 3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903
 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904
 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905
 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906
 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910
 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911
 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914
 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913
 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912
 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1
(21 rows)
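    As a starting point, here is a hedged sketch of trimmed settings for a small install (my own suggested values, not from the question -- and note that most of those 20 processes are just idle backends held open by Hyperic's own connection pool, plus PostgreSQL's background writer and stats collector, so the HQ server's pool size matters as much as these knobs):

        max_connections = 30        # default above is 100; size it to HQ's pool
        shared_buffers = 32MB       # down from 64MB for a handful of agents
        work_mem = 1MB
        maintenance_work_mem = 16MB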

    Read the article

  • Ubuntu 12.04 Preseed LDAP Config

    - by Arturo
    I'm trying to deploy Ubuntu 12.04 via xCAT. Everything works except the automatic configuration of LDAP: the preseed file is read, but the file /etc/nsswitch.conf is not written properly. My preseed file:

    [...]
    ### LDAP Setup
    nslcd nslcd/ldap-bindpw password
    ldap-auth-config ldap-auth-config/bindpw password
    ldap-auth-config ldap-auth-config/rootbindpw password
    ldap-auth-config ldap-auth-config/binddn string cn=proxyuser,dc=example,dc=net
    libpam-runtime libpam-runtime/profiles multiselect unix, ldap, gnome-keyring, consolekit, capability
    ldap-auth-config ldap-auth-config/dbrootlogin boolean false
    ldap-auth-config ldap-auth-config/rootbinddn string cn=manager,dc=xcat-domain,dc=com
    nslcd nslcd/ldap-starttls boolean false
    nslcd nslcd/ldap-base string dc=xcat-domain,dc=com
    ldap-auth-config ldap-auth-config/pam_password select md5
    ldap-auth-config ldap-auth-config/move-to-debconf boolean true
    ldap-auth-config ldap-auth-config/ldapns/ldap-server string ldap://192.168.32.42
    ldap-auth-config ldap-auth-config/ldapns/base-dn string dc=xcat-domain,dc=com
    ldap-auth-config ldap-auth-config/override boolean true
    libnss-ldapd libnss-ldapd/clean_nsswitch boolean false
    libnss-ldapd libnss-ldapd/nsswitch multiselect passwd,group,shadow
    nslcd nslcd/ldap-reqcert select
    ldap-auth-config ldap-auth-config/ldapns/ldap_version select 3
    ldap-auth-config ldap-auth-config/dblogin boolean false
    nslcd nslcd/ldap-uris string ldap://192.168.32.42
    nslcd nslcd/ldap-binddn string
    [...]

    After the installation, nsswitch.conf remains unchanged. Does anyone have an idea? Thanks!
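    For comparison, a correctly rewritten /etc/nsswitch.conf would be expected to carry the ldap source on the three selected databases, along these lines (my assumption of the intended result, not output from the failing install):

        passwd:         files ldap
        group:          files ldap
        shadow:         files ldap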

    Read the article

  • BAD ARCHIVE MIRROR using PXE BOOT method

    - by omkar
    I'm trying to automatically install Ubuntu on a client PC using the PXE boot method. My objectives are below (I'm following the steps given in this link: installation using PXE BOOT INSTALL): 1: the server will have a kickstart config file which contains the parameters for the OS installation and the files which are required for the installation. 2: the client will have to detect this configuration along with the setup files and complete the installation without any input from the user. On my server I have installed dhcp3-server, Apache2 and TFTP to help me with the installation. I have nearly achieved my first objective: I'm able to boot my client using the files stored on the server, but during the installation stage it asks me to "choose a mirror of the Ubuntu archive". I gave the server's IP address and the path on the server where the files are located, but it then gives me the error "bad archive mirror". So, is it possible that instead of downloading all the files from the Internet and storing them on my disk, I can use the files which come with the Ubuntu CD? And how should I store these files -- in what format (should I zip them?) -- on the disk? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS, so how should the configuration file be given to the installation process?
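    One hedged way to satisfy the first part (my sketch, with hypothetical paths and addresses): serve the CD's contents as-is over the Apache instance that already hosts the kickstart file -- the ISO already contains the dists/ and pool/ trees the installer expects from a mirror, so nothing needs to be zipped:

        # loop-mount the install CD under the web root (paths are examples)
        sudo mkdir -p /var/www/ubuntu
        sudo mount -o loop ubuntu-12.04-server-amd64.iso /var/www/ubuntu

        # then point the installer at it, e.g. in a preseed file:
        # d-i mirror/http/hostname string 192.168.1.10
        # d-i mirror/http/directory string /ubuntu

    At the "choose a mirror" prompt, the same two values (the host, then /ubuntu as the directory) are what the installer is asking for.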

    Read the article

  • Cloudmin KVM DNS hostnames not working

    - by dannymcc
    I have got a new server which has Cloudmin installed. It's working well and I can create and manage VMs as expected. The server came with a /29 subnet, and I requested an additional /29 subnet to allow for more virtual machines. I didn't want to replace the existing /29 subnet with a /28, because that would have caused disruption to my existing VMs. To make life easier, I decided to configure a domain name for the Cloudmin host server to allow for automatic hostname setup whenever I create a new virtual machine. I have a domain name (example.com) and I have created the following records:

    NS kvm.example.com 123.123.123.123
    A  kvm.example.com 123.123.123.123

    In the above example the IP address is that of the host server; I also have two /29 subnets routed to the server. Now, I've added the two subnets to the Cloudmin administration panel. (I've tried to hide as little information as possible without giving all of the server details away!) If I ping kvm.example.com I get a response from 123.123.123.123; if I ping the newly created virtual machine (example.kvm.example.com) it fails; and if I ping the IP address that's been assigned to the new virtual machine (from the second subnet) it fails. Am I missing anything vital? Does this look (from what little information I can show) like it's set up correctly? Any help/pointers would be appreciated. For reference, the Cloudmin documentation I am using as a guide is http://www.virtualmin.com/documentation/cloudmin/gettingstarted
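    For what it's worth, conventional DNS delegation pairs an NS record naming a host with a glue A record for that host, rather than pointing NS at an IP address directly (generic DNS convention, hypothetical names):

        ; delegate the kvm zone to a name server running on the Cloudmin host
        kvm.example.com.      IN NS   ns1.kvm.example.com.
        ns1.kvm.example.com.  IN A    123.123.123.123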

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. The two biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be >1 MB but <4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is because we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike. Update: After investigating this some more, the CPU and disk usage spike turned out to be down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use. This has been enlightening and I've definitely learned something!
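    If anyone wants to watch the same behaviour, one quick check (standard T-SQL, not something from the post) is to sample log fill over time -- under the simple recovery model it should sawtooth up toward the ~70% checkpoint trigger and drop back after each automatic checkpoint:

        -- percentage of transaction log space in use, per database
        DBCC SQLPERF (LOGSPACE);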

    Read the article

  • MS Access 2003 - Format a Number with Commas AND Auto-Decimal

    - by Emtucifor
    On a report I have a control which is bound to a column which can have up to 3 decimal places. I want the number to format with commas separating thousands and millions, but I also want the number of decimal places to be automatic, so that if there is no decimal portion then no decimal point at all is shown:

    1234.567 -> 1,234.567
    1234.560 -> 1,234.56
    1234.500 -> 1,234.5
    1234.000 -> 1,234

    The General format will give me the automatic decimal places but no commas. The Standard format gives the commas but is fixed to 2 decimal places. Doing my own =Format(Number, "#,##0.#") leaves the decimal point in and doesn't align properly, with extra space on the right of the number. Do I have to write my own VB function to get the format I want? It seems silly that Access (apparently) can't do this out of the box. This also seems really horrible (each "0x" replacement strips one trailing zero, then ".x" drops the trailing decimal point, and the final replacement removes the sentinel):

    =Replace(Replace(Replace(Replace(Replace( _
        Format(Number, "#,##0.000") & "x", "0x", "x"), "0x", "x"), "0x", "x"), ".x", ""), "x", "")

    Read the article

  • Proliant RAID 1 Rebuild Questions

    - by Nicholas
    I have an HP ProLiant ML350 G5 server that experienced a power supply failure overnight. The power supply was replaced, but unfortunately the server got restarted with only 1 disk of the RAID 1 set plugged in (the RAID controller is the built-in E200i). The RAID BIOS then said on start-up that it had entered Interim Recovery Mode; however, I would have expected it to still start up with only the 1 drive. Instead, the BIOS says that it cannot find a C: drive and enters a reboot loop, polling the other boot devices. My first question is: is this normal behaviour, not to start up on 1 disk? The second drive was then plugged in (all drives are OK) and the RAID BIOS started an automatic rebuild on that disk. This appears to be a background process, as there is no progress shown; however, based on the light flashing, it looks like it is working. My second question is: how long will this rebuild take? (36GB 15K SAS drive.) I cannot see any error messages and it looks like it is rebuilding the drive OK, but the computer still will not start up; it still says during the boot process that the C: drive is not found. If I wait for the rebuild to finish, is it likely to fix itself and find the C: drive? Or is there some other problem here?

    Read the article

  • APC + FPM memory fragmentation without a reason

    - by palmic
    My setup on Debian 6 (testing) is: nginx 1.1.8 + PHP 5.3.8 + php-fpm + APC 3.1.9. Because I use automatic deployment with apc_clear_cache() after each deploy, my goal is to set up APC to cache my whole project (up to 612 PHP files, each smaller than 1MB) without checking files for changes. My configs:

    FPM:
    pm.max_children = 25
    pm.start_servers = 4
    pm.min_spare_servers = 2
    pm.max_spare_servers = 10
    pm.max_requests = 500

    APC:
    apc.enabled="1"
    apc.shm_segments = 1
    apc.shm_size="64M"
    apc.num_files_hint = 640
    apc.user_entries_hint = 0
    apc.ttl="7200"
    apc.user_ttl="7200"
    apc.gc_ttl="600"
    apc.cache_by_default="1"
    apc.filters = "apc\.php$,apc_clear.php$"
    apc.canonicalize="0"
    apc.slam_defense="1"
    apc.localcache="1"
    apc.localcache.size="512M"
    apc.mmap_file_mask="/tmp/apc-php5.XXXXXX"
    apc.enable_cli="0"
    apc.max_file_size = 2M
    apc.write_lock="1"
    apc.localcache = 1
    apc.localcache.size = 512
    apc.report_autofilter="0"
    apc.include_once_override="0"
    apc.coredump_unmap="0"
    apc.stat="0"
    apc.stat_ctime="0"

    The problem is that my APC memory is fragmented into 5 pieces with 14.5MB loaded, in a cache whose capacity is 64M. My system memory is 640MB, with about 270MB used at most. The HTTP responses take about 300ms - 5s. When I switch on apc.stat="1", the responses take about 50ms - 80ms, which is quite good, but I cannot understand why apc.stat="0" is so much worse. The APC diagnostic tool shows "1 Segment(s) with 64.0 MBytes (mmap memory, pthread mutex Locks locking)" in the general cache information window, so I hope I am right that it's an mmap setup, which allows tweaking APC shm to higher values than the system limit shown in /proc/sys/kernel/shmmax (mine shows 33MB).

    Read the article

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

    Interface Type: TAP
    Protocol: UDP
    Port: 1195
    Firewall: Custom
    Authorization Mode: Static Key

    I then inserted the static key generated in OpenVPN, saved, and started the service. connect.ovpn:

    # Use the following to have your client computer send all traffic through your router
    # (remote gateway)
    remote (entered my DNS/DHCP server's external IP address here)
    port 1195
    dev tap
    secret static.key.txt
    proto udp
    comp-lzo
    route-gateway 192.168.1.1
    redirect-gateway
    float

    I've then placed my static key in a file in the same directory as connect.ovpn (static.key.txt). Now, OpenVPN is installed on a laptop that I use at home. I have plugged the laptop into my home connection and started connect.ovpn. The local area connection is connected as 'Home Network 3', and when I start OpenVPN it is connected as 'Local Area Connection 2', which shows as 'Unidentified Network', and it appears there is no network access. TAP-Win32 Adapter V9 is the adapter's name, and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI, it shows an error message saying "Connecting to connect has failed". Looking at the error message behind this pop-up, one line says: TCP/UDP Socket bind failed on local address [undef]:1195 Address already in use [WSAEADDRINUSE]. Could anyone possibly help me further with this please?
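    A hedged first check (standard Windows tooling, not from the post): WSAEADDRINUSE means something on the laptop already holds UDP port 1195 -- often a second OpenVPN instance or the GUI's own earlier connection attempt -- so listing the owner of the port usually identifies the culprit:

        netstat -ano | findstr :1195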

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope that someone can point me in the correct direction -- it looks like I don't have enough knowledge of the subject, and timeframes are too tight for me to explore different scenarios in depth. We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. VC is going to be in the first datacenter, and both datacenters are networked. SAN replication is block level, so it seems like I cannot just replicate changes: all writes would have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered. I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery -- I would say a 4-hour recovery time is acceptable -- so a fancy automatic SRM-like DR scenario would not be easily accepted, for financial reasons; however, any comments are welcome. The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses. I understand that this seems like a horribly manual process, and I'm almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go in? Any articles regarding the subject? This is a brand new setup, and we would rather build a basic recovery process and scale it later. I just need the right direction to allow for such scalability. Thank you very much in advance!

    Read the article

  • Debian Squeeze and exim4: cannot send mail

    - by Fernando Campos
    Hello guys, I got this error after installing and configuring the exim4-daemon-light and mailutils packages on Debian Squeeze. This setup is meant to send automatic messages from websites, like email confirmations and such. Configuration after package install:

    dpkg-reconfigure exim4-config

    You'll be presented with a welcome screen, followed by a screen asking what type of mail delivery you'd like to support. Choose the option for "internet site" and select "Ok" to continue. After the remaining configuration screens you can test mail with: echo "test message" | mail -s "test message" [email protected]. Here is the response:

    root@server:/etc# echo "test message" | mail -s "test message" [email protected]
    2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
    2011-03-02 20:34:59 1PuxRT-0001Aj-T9 <= root@debian U=root P=local S=331
    2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
    exim: could not open panic log - aborting: see message(s) above
    Can't send mail: sendmail process failed with error code 1

    There is no /var/log/exim4 directory on my server. I tried to create it, but it didn't work. Please, can someone help me? Best regards, Fernando
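    One common remedy (hedged -- the ownership names assume Debian's stock exim4 packaging, where euid 101 is typically the Debian-exim user): the directory must exist and be writable by the user the daemon runs as, after which the mainlog and paniclog can be created:

        sudo mkdir -p /var/log/exim4
        sudo chown Debian-exim:adm /var/log/exim4
        sudo chmod 2750 /var/log/exim4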

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB.

    => ctrl all show config detail

    Smart Array P410i in Slot 0 (Embedded)
       Bus Interface: PCI
       Slot: 0
       Serial Number: 5001438006FD9A50
       Cache Serial Number: PAAVP9VYFB8Y
       RAID 6 (ADG) Status: Disabled
       Controller Status: OK
       Chassis Slot:
       Hardware Revision: Rev C
       Firmware Version: 3.66
       Rebuild Priority: Medium
       Expand Priority: Medium
       Surface Scan Delay: 3 secs
       Surface Scan Mode: Idle
       Queue Depth: Automatic
       Monitor and Performance Delay: 60 min
       Elevator Sort: Enabled
       Degraded Performance Optimization: Disabled
       Inconsistency Repair Policy: Disabled
       Wait for Cache Room: Disabled
       Surface Analysis Inconsistency Notification: Disabled
       Post Prompt Timeout: 15 secs
       Cache Board Present: True
       Cache Status: OK
       Accelerator Ratio: 25% Read / 75% Write
       Drive Write Cache: Enabled
       Total Cache Size: 512 MB
       No-Battery Write Cache: Disabled
       Cache Backup Power Source: Batteries
       Battery/Capacitor Count: 1
       Battery/Capacitor Status: OK
       SATA NCQ Supported: True

       Array: A
          Interface Type: SAS
          Unused Space: 412476 MB
          Status: OK

          Logical Drive: 1
             Size: 72.0 GB
             Fault Tolerance: RAID 1+0
             Heads: 255
             Sectors Per Track: 32
             Cylinders: 18504
             Strip Size: 256 KB
             Status: OK
             Array Accelerator: Enabled
             Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
             Disk Name: /dev/cciss/c0d0
             Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
             OS Status: LOCKED
             Logical Drive Label: AE438D6A5001438006FD9A50BE0A
             Mirror Group 0:
                physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
             Mirror Group 1:
                physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

       SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
          Device Number: 250
          Firmware Version: RevC
          WWID: 5001438006FD9A5F
          Vendor ID: PMCSIERA
          Model: SRC 8x6G

    Read the article

  • Calling XAI Inbound Services from Oracle BI Publisher

    - by ACShorten
    Note: This technique requires Oracle BI Publisher 1.1.3.4.1, which supports Service Complex Types. Web Services require credentials for authentication. Note: The defaults for the product installation are used in this article; if your site uses alternative values, substitute those alternatives where applicable. Note: Examples shown in this article are for illustrative purposes only.

    When building a report in Oracle BI Publisher it may be necessary to call an XAI Inbound Service to get information via the object rather than directly calling the database tables, for various reasons: the CLOB fields used in the object are accessible for a report (note: CLOB fields cannot be used as criteria in the current release), and objects can take advantage of algorithms to format or calculate additional data that is not stored in the database directly. For example, information format strings can be automatically generated by the object, which gives consistent information between a report and the online screens. To use this facility the following process must be performed:

    1. Ensure that the product group, cisusers by default, is enabled for the SPLServiceBean in the console. This allows BI Publisher access to call Web Services directly. To ensure this, follow the instructions below: log on to the Oracle WebLogic server console using an appropriate administrator account (by default, the user system or weblogic is provided for this purpose). Navigate to the Security Realms section and select your configured realm (this is set to myrealm by default). In the Roles and Policies section, expand the SPLService section of the Deployments option to reveal the SPLServiceBean roles. If there is no role associated with the SPLServiceBean, create a new EJB role and specify the cisusers role, by default. Add a Role Condition to the role just created, with a Predicate List of Group, and specify cisusers as the Group Argument Name. Save all your changes.

    2. The XAI Inbound Services to be used by BI Publisher must be defined prior to using the interface. Refer to the XAI Best Practices (Doc Id: 942074.1) from My Oracle Support, or the online help, for more information about this process.

    3. Inside BI Publisher create your report, according to the BI Publisher documentation. When specifying the dataset, under the Data Model Report option, specify the following to use an XAI Inbound Service as a data source:

       - Type: Web Service
       - Complex Type: true
       - Username: Any valid user name within the product. This user MUST have security access to the objects referenced in the XAI Inbound Service.
       - Password: Authentication password for Username.
       - Timeout: Timeout, in seconds, set for the Web Service call; for example, 60 seconds.
       - WSDL URL: Use the WSDL URL on the XAI Inbound Service definition as your WSDL URL. It is in the following format by default: http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL, where <host> is the host name of the Web Application Server, <port> is the port allocated to the Web Application Server for product access, <server> is the server context for the server, and <service> is the XAI Inbound Service name. Note: Customers using secure transmission should substitute https for http and use the HTTPS port allocated to the product at installation time.
       - Web Service: Select the name of the service that shows in the drop-down menu. If no service name shows up, it means that Publisher could not establish a connection with the server, or with the WSDL name provided in the above URL, in order to get the service name. See the BI Publisher server log for more information.
       - Method: Select the name of the Method that shows in the drop-down menu. A method name should show in the Method drop-down menu once the Web Service name is selected.

    Additionally, filters can be used from the Web Service; these can be generated, required or optional, from the WSDL in the Parameter List.
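    As a concrete (hypothetical) illustration of the WSDL URL substitution, a service named CMGetAccount on an application server host splapp, port 6500, with server context splapp, would be entered as:

        http://splapp:6500/splapp/XAIApp/xaiserver/CMGetAccount?WSDL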

    Read the article

  • problem booting crusty old windows XP

    - by Carson Myers
    I have an Acer Aspire laptop running Windows XP Home. I believe I have some virus on it; I'm not sure -- I mostly just run Linux in a VM on it, so I wasn't too worried. I'm not sure if that virus caused this problem. The laptop wasn't recognizing my USB hard drive for some reason, so I decided to restart it. When it started up, it got past the memory test and past the boot screen (though it paused there on a blank screen for a while), flashed the desktop once (like it does just before the login screen), and then crashed. I got a quick BSOD and then it restarted. Then it tried to boot again, and so on -- an infinite loop of failure. Before trying safe mode, I disabled automatic restart on system crash so I could read the blue screen. There wasn't anything important on it; it said: *** STOP: 0x00000000 (0xC0000000 0x,.... ) beginning physical memory dump, physical memory dump complete. That's not verbatim (obviously), but it didn't help me. So I booted in safe mode, and it stopped on the driver gagp30kx.sys and then restarted (infinite loop of failure again). I burned a recovery CD and tried that. It loaded, and I went into repair mode. I ran chkdsk and then disabled the AGP driver. Same thing on booting in safe mode, except it stopped at mup.sys instead. I enabled the AGP driver again and ran chkdsk again from the CD. It said it found problems but didn't say it fixed them. So I ran it a second time, and it said "performing additional checking or recovery" lots of times (I can't tell how many; they scrolled past the top of the screen). I tried booting again: no luck. Every time I run chkdsk after trying to boot again, it says it found and fixed more errors. I think it might be whatever driver loads after the AGP driver, but I don't know what it is or how to find out. Can anyone help me fix this?

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic and unattended process. This resets the computers to a default state and updates them with the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following: 1. The client begins in a powered-down, Wake-on-LAN-accepting state. 2. A Ghost Console task uses Wake-on-LAN and PXE to boot the client into the WinPE environment. 3. The client connects to the Ghost Console and re-images itself successfully. 4. The client closes WinPE and restarts. PROBLEM: The client boots into the WinPE environment again, instead of the newly installed OS (Win7). I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the Wake-on-LAN state again. The Ghost Console even reports an error on the task: that the client never rebooted into the OS. Right now it seems there is no option to stop the client from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because the client will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.
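    If the PXE service in front of these machines happens to be pxelinux-based (an assumption -- Ghost's bundled PXE server works differently), the usual pattern is to make local-disk boot the default, so a client that restarts outside of a Ghost task falls through to the installed OS:

        # pxelinux.cfg/default (hypothetical)
        DEFAULT local
        TIMEOUT 50
        LABEL local
            LOCALBOOT 0         # hand off to the local hard disk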

    Read the article

  • Enabling WinRM by Group Policy

    - by SaintNick
    I'm having partial success enabling WinRM through Active Directory GPOs in our Server 2008 R2 environment. I've created a GPO that enables "Allow automatic configuration of listeners" and also enables all the necessary predefined WinRM firewall rules. This GPO works fine for our web servers; indeed, this is reflected by "Server Manager Remote Management" nicely flipping to "enabled" in the Server Manager server summary. However, the same GPO applied to both our management servers, which are domain controllers, does not give the same result. I see the GPO settings being applied, including the listener, as confirmed by:

    C:\Windows\system32>winrm e winrm/config/listener
    Listener [Source="GPO"]
        Address = *
        Transport = HTTP
        Port = 5985
        Hostname
        Enabled = true
        URLPrefix = wsman
        CertificateThumbprint
        ListeningOn = 10.32.40.210, 10.32.40.211, 10.32.40.212

    But in Server Manager, under Server Summary, Remote Management remains "disabled", and indeed when trying to connect to one of these machines Server Manager gives an "Access Denied". Manually enabling WinRM locally via Server Manager's "Configure Server Manager Remote Management" on either of these machines works fine. What can be the cause? Can it have something to do with these machines being DCs and needing extra settings in the GPO? Nick Reid
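    A hedged way to separate listener problems from Server Manager's own access check (standard winrm tooling, hypothetical host name): query the remote identity over the listener directly. If this succeeds while Server Manager still reports Access Denied, the GPO-created listener itself is fine and the problem lies in authorization:

        winrm id -r:dc01.example.com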

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following: download all messages from a mail server (and remove the downloaded messages from the server); have the downloaded messages go into a local mailbox, preserving the folder structure as it was defined on the server; and have the download process be automatic and not create duplicates. I am on OS X and looking for solutions using Apple Mail or Thunderbird or similar. So far I have found that POP is not the way to go (as it loses the folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. Another alternative is Outlook, which has auto-archive, but it is paid and I think it exports to PST instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds support for folder management to POP, but I don't think that is going to become usable soon. Any other, better solutions? Thank you.

    Read the article

  • Resize Debian in VirtualBox

    - by Poni
    I have a VM with one HD of size 3GB, and I'd like to enlarge its HD to 7GB. So I execute this command on the host (while the guest is shut down):

    VBoxManage modifyhd debian.vdi --resize 7168

    Then I run the guest, Debian 6, and then:

    smith@debian6:~$ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             2.8G  2.6G   60M  98% /
    tmpfs                  61M     0   61M   0% /lib/init/rw
    udev                   57M  160K   57M   1% /dev
    tmpfs                  61M     0   61M   0% /dev/shm

    smith@debian6:~$ sudo parted /dev/sda print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sda: 3221MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type      File system     Flags
     1      1049kB  3035MB  3034MB  primary   ext3            boot
     2      3036MB  3220MB  185MB   extended
     5      3036MB  3220MB  185MB   logical   linux-swap(v1)

    smith@debian6:~$ cat /proc/partitions
    major minor  #blocks  name
       8        0    3145728 sda
       8        1    2962432 sda1
       8        2          1 sda2
       8        5     180224 sda5

    So, no automatic resizing (detection) of the HD/partition (while VirtualBox, on the host, shows it's 7GB now). OK... Then I do:

    smith@debian6:~$ sudo resize2fs /dev/sda1
    resize2fs 1.41.12 (17-May-2010)
    The filesystem is already 740608 blocks long.  Nothing to do!

    smith@debian6:~$ sudo parted
    GNU Parted 2.3
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) select /dev/sda1
    Using /dev/sda1
    (parted) resize
    WARNING: you are attempting to use parted to operate on (resize) a file system. parted's file system manipulation code is not as robust as what you'll find in dedicated, file-system-specific packages like e2fsprogs. We recommend you use parted only to manipulate partition tables, whenever possible. Support for performing most operations on most types of file systems will be removed in an upcoming release.
    Partition number? 1
    Start? 0
    End? [3034MB]?

    Here I'm stuck. At the above prompt, parted asks me to resize to 3GB. No point in that, right? What should I do in order to enlarge this partition?
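    For what it's worth, a hedged outline of the usual fix (my sketch -- double-check the partition numbers against the layout above and back up first): the swap partition sits at the old 3GB boundary and blocks sda1 from growing, so it has to be moved out of the way before the filesystem can be enlarged:

        sudo swapoff -a                 # stop using the swap on sda5
        sudo parted /dev/sda rm 5       # remove the logical swap partition
        sudo parted /dev/sda rm 2       # remove the extended container
        # recreate sda1's table entry with the same start and a larger end
        # (e.g. with fdisk: delete 1, then new primary 1 at the same start,
        # leaving room at the end for a new swap partition), reboot, then:
        sudo resize2fs /dev/sda1        # grow ext3 to fill the partition
        # finally recreate the swap partition at the end and re-enable it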

    Read the article
