Search Results

Search found 30217 results on 1209 pages for 'database'.

Page 165/1209 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • NSIS - Error handling

    - by Pia
    I have written an installer and uninstaller in NSIS which creates and drops a SQL database, and that part works fine. I have written some .bat and .sql files to create and drop the database, and I just call these files from the NSIS script. My problem: if I keep the database open in SQL Server Management Studio and run the uninstaller, it should ideally give an error message that the database is in use. In my case it shows the uninstaller's success message but doesn't actually drop the database. How can I handle this error in NSIS?
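
    One way to surface the failure is to make the drop script itself detect open sessions and raise an error, then run that script with a command whose exit code the uninstaller checks (for example sqlcmd with -b, read back through ExecWait's exit-code variable). A minimal T-SQL sketch, assuming the database is named MyAppDb (a placeholder):

        IF EXISTS (SELECT 1 FROM sys.sysprocesses WHERE dbid = DB_ID('MyAppDb'))
            -- Fail loudly so sqlcmd -b returns a non-zero exit code to the uninstaller
            RAISERROR ('MyAppDb is still in use; close open connections first.', 16, 1);
        ELSE
            DROP DATABASE MyAppDb;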

    Read the article

  • Android/ORMLite Insert Row with ID

    - by Joe M
    I'm currently using ORMLite to work with a SQLite database on Android. As part of this I am downloading a bunch of data from a backend server, and I'd like to have this data added to the SQLite database in the exact same format it has on the backend server (i.e. the IDs are the same, etc.). So, my question is: if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary key field via setId(), and I then run DAO.create() with that Equipment entry, will that ID be saved correctly? I tried it this way and it seems that this was not the case. If it should work, I will try again and look for other problems, but on the first few passes over the code I was not able to find one. So essentially: if I call DAO.create() on a database object with an ID set, will that ID be sent to the database, and if it is not, how can I insert a row with a primary key value already filled out? Thanks!

    Read the article

  • Manage a network database with SQL Server Management Studio 2005 Express

    - by johntotetwoo
    Hi there, mates :) I'm new to databases and database handling, and I was wondering whether it is possible to manage a networked database with SQL Server Management Studio Express 2005. I have two PCs at home, connected via a router. PC 1 has SQL Server 2005 Express and SQL Server Management Studio 2005 Express, while PC 2 has only SQL Server 2005 Express. Is it possible for PC 1's Management Studio to manage PC 2's server? I'd be glad if you could help me with this. Happy New Year!

    Read the article

  • Updating a database periodically with Java

    - by MSR
    I would like to perform updates to a MySQL database using two separate classes (that do different things) -- one doing so every 10 seconds, and the second every minute. I have a few gaps in my Java knowledge and I'm wondering what the best way to achieve it is. Importantly, if connectivity to the database is lost, I need reconnection attempts to occur indefinitely, and I'm guessing the use of Prepared Statements will make queries more efficient. Should the connection to the database be left open all the time or closed between the updates being run? Maybe I also need to think about clearing objects/resources out of memory if the class instances are going to be run indefinitely.

    Read the article

  • Big database log size with SQL Server 2008

    - by t.kehl
    I have a database running under Microsoft SQL Server 2008. I have noticed that the database log (the .ldf file) has grown very large: the database file (.mdf) has a size of 630 MB, while the log file has a size of 12 GB. I am wondering what the reason for this can be. Is there a tool that lets me look into the log so I can see what is causing this size? And what can I do to prevent the log from growing this large?
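
    In case it helps, a short T-SQL sketch of the usual checks (MyDb and the logical log file name MyDb_log are placeholders): a log in FULL recovery that is never backed up will grow without bound, so either schedule regular log backups or switch to SIMPLE.

        -- How full is each database log right now?
        DBCC SQLPERF (LOGSPACE);

        -- FULL recovery without transaction log backups makes the log grow forever
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDb';

        -- Either schedule regular log backups, or switch to SIMPLE and shrink once
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB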

    Read the article

  • Running SQL scripts against an attached database?

    - by Will
    I've got an MDF attached to an instance of Sql Server 2008 Express, and I need to run some sql scripts against it to generate tables, indexes, etc. But I can't figure out how to get this to work. If I load the scripts in Visual Studio, it only allows me to connect to the server and run it against a database. I can't choose a different provider (Microsoft Sql Server Database File), so I can't select my MDF. This leaves me the only option of running the script as individual queries, but that won't work as it appears it doesn't support TSQL CREATE statements. How can I run my sql script against an attached database?
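
    A possible route (a sketch with placeholder paths and names): attach the MDF as a named database once, then run the scripts against that database, either from Management Studio or with sqlcmd (e.g. sqlcmd -S .\SQLEXPRESS -d MyDb -i create_tables.sql).

        -- Attach the file as a regular database; path and name are placeholders
        CREATE DATABASE MyDb
            ON (FILENAME = 'C:\Data\MyDb.mdf')
            FOR ATTACH;    -- use FOR ATTACH_REBUILD_LOG if there is no .ldf alongside
        GO
        USE MyDb;
        GO
        -- the CREATE TABLE / CREATE INDEX scripts can now run as normal T-SQL batches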

    Read the article

  • PHP or JS to connect with a fingerprint scanner and save to database

    - by narong
    I have a project to set up user profiles and save all the data to a database, including a fingerprint. I don't know where I should start; I already have a USB fingerprint scanner to test with. What I'm thinking: I should have an input box that reads data from the USB fingerprint scanner, and then a function that uploads it to the database. But with this approach I run into a problem: I don't know whether the data coming from the USB fingerprint scanner is an image or some other data. If it is an image, how can I read it into an input box to save it to the database? If anyone has an idea, please share it with me. I am looking forward to your help. Thanks!

    Read the article

  • An attempt to attach an auto-named database for file failed in VB.NET

    - by user2454135
    I am trying to connect to a database for the first time, and I am getting this error: "An attempt to attach an auto-named database for file VBTestDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." The error is raised on myconnect.Open(). Here is my code:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim myconnect As New SqlClient.SqlConnection
            myconnect.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=VBTestDB.mdf;Integrated Security=True;User Instance=True;"
            Dim mycommand As SqlClient.SqlCommand = New SqlClient.SqlCommand()
            mycommand.Connection = myconnect
            mycommand.CommandText = "INSERT INTO Card (CardNo,Name) VALUES (@cardno,@name)"
            myconnect.Open()
            Try
                mycommand.Parameters.Add("@cardno", SqlDbType.Int).Value = TextBox1.Text
                mycommand.Parameters.Add("@name", SqlDbType.NVarChar).Value = TextBox2.Text
                mycommand.ExecuteNonQuery()
                MsgBox("Success")
            Catch ex As System.Data.SqlClient.SqlException
                MsgBox(ex.Message)
            End Try
            myconnect.Close()
        End Sub

    Read the article

  • Using a database/index sequential file independently of the Unix distribution

    - by Helper Method
    What I'm planning to do is a) parse a file for some lines matching a regular expression, b) store the matches in some sort of database / file so I don't have to do the parsing again and again, and c) call another program passing the matches as arguments. While I can imagine how to do a) and c), I'm a little bit unsure about b). The matches are of the form key:attribute1:attribute2:attribute3, where attribute2 may be optional. I'm thinking of storing the results in a simple database, but the problem is that the database needs to be available on a number of Unix platforms for the program to work. Are there any (simple) databases which can be found on most Unix platforms? Or should I use some sort of index-sequential file?

    Read the article

  • Application Role and access second database

    - by lszk
    I have written a script to create an audit trail for my database in a second database. So far I had no problems during tests on my dev machine from SQL Server Management Studio. Problems started to occur when I first tried to test my triggers from my application by modifying data in it. Using Profiler I found out that my audit-trail database is not visible in sys.databases, so there lies the problem. The application uses an Application Role, and as I found on MSDN, that's why I can't get access to the other database on the server. I'm not a DBA and I have no experience with properly configuring security, so please guide me: how can I set up the guest account (as MSDN suggests) to get access to this database? I need to have a record for this database in sys.databases, and I need to be able to insert data into all tables in this database. I do not need select, update or delete.
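
    For reference, the MSDN guidance amounts to enabling the guest user in the second database and granting it the rights the triggers need, since an application role falls back to the guest user in every database other than its own. A sketch, with AuditDb and dbo.AuditTable as placeholder names:

        USE AuditDb;
        GO
        -- Application roles map to the guest user in other databases,
        -- so guest must be enabled there and granted the needed rights
        GRANT CONNECT TO guest;
        GRANT INSERT ON dbo.AuditTable TO guest;   -- repeat for each audit table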

    Read the article

  • Ruby On Rails - Collection Select - MYSQL Database - Problem Displaying ampersand ("&")

    - by dbkbaki
    I am having an annoying problem displaying the labels of a select box correctly when there is an ampersand contained within the label string. On a form rendered with the form_for helper, the collection_select reads data from a MySQL 5.075 database. The text stored in the database is "Surabaya & Surrounding Areas", but when rendered and displayed in Firefox 3.6 or Safari it displays as "Surabaya %amp; Surrounding Areas". The code used to render the select is as follows: <%= f.collection_select :parent_id, Destination.roots, :id, :name, {:include_blank => true} %> I have tried adding h(:name) and also storing && in the database, but it still will not display the ampersand correctly. I have searched on Google for what I thought would be a simple solution but can't find anything that solves this. Using RoR 2.3.5 / Ruby 1.8.7. If anyone has a solution it will be much appreciated. Many thanks, David

    Read the article

  • Copying just the data from one Database to another

    - by monksy
    I'm not sure if this is the site for this question or not [if so put in the comment or vote to move it] How can I copy only the data from one database to another within the same server on SQL Server 2005? The two databases have the same schema but not the same data. I'm trying to get the data from one database to another. I am not able to restore from a snapshot [that screws over the security settings on the database]. I'm not able to use the import data wizard, because that is trying to copy over schema data as well.
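
    If the schemas really are identical, a per-table INSERT ... SELECT across the two databases is often the simplest route (the names below are placeholders); the Import/Export Wizard can also be told, under Edit Mappings, to append rows to the existing tables instead of creating them.

        -- Per table; SourceDb, TargetDb and the column list are placeholders
        SET IDENTITY_INSERT TargetDb.dbo.MyTable ON;  -- only needed if the table has an identity column

        INSERT INTO TargetDb.dbo.MyTable (Col1, Col2, Col3)
        SELECT Col1, Col2, Col3
        FROM SourceDb.dbo.MyTable;

        SET IDENTITY_INSERT TargetDb.dbo.MyTable OFF;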

    Read the article

  • Pass database data to multiple views - Laravel

    - by user3696018
    I have a database with details of daily sales. To query the database, I have a form in a view with parameters such as date of admission, client and others. The result is shown in another view with the daily details of income, and below it there is a summary of everything entered. I would like to pass that summary to yet another view. I tried View::composer, but it only transfers an empty query (I saw it with the debug bar) and an empty view appeared. How can I transfer the data from the database without the latter view being empty? The second HTML view is totally different; only the data is the same.

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0 # min -15, max 2 #client_encoding = sql_ascii # actually, defaults to database # encoding # These settings are initialized by initdb -- they might be changed lc_messages = 'C' # locale for system error message # strings lc_monetary = 'C' # locale for monetary formatting lc_numeric = 'C' # locale for number formatting lc_time = 'C' # locale for time formatting # - Other Defaults - #explain_pretty_print = on #dynamic_library_path = '$libdir' #local_preload_libraries = '' #--------------------------------------------------------------------------- # LOCK MANAGEMENT #--------------------------------------------------------------------------- #deadlock_timeout = 1s #max_locks_per_transaction = 64 # min 10 # (change requires restart) # Note: each lock table slot uses ~270 bytes of shared memory, and there are # max_locks_per_transaction * (max_connections + max_prepared_transactions) # lock table slots. #--------------------------------------------------------------------------- # VERSION/PLATFORM COMPATIBILITY #--------------------------------------------------------------------------- # - Previous Postgres Versions - #add_missing_from = off #array_nulls = on #backslash_quote = safe_encoding # on, off, or safe_encoding #default_with_oids = off #escape_string_warning = on #standard_conforming_strings = off #regex_flavor = advanced # advanced, extended, or basic #sql_inheritance = on # - Other Platforms & Clients - #transform_null_equals = off #--------------------------------------------------------------------------- # CUSTOMIZED OPTIONS #--------------------------------------------------------------------------- #custom_variable_classes = '' # list of custom variable class names SELECT * FROM pg_stat_activity; datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------+------------- 16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892 16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893 16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894 16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895 16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896 16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897 16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898 16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899 16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900 16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901 16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902 16384 | hqdb | 
3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1 (21 rows)

    Read the article

  • Oracle® Database Express Edition problem running on Win Server 2003 with MS SQL Server 2008 [closed]

    - by totoz
    Hi, I have MS SQL Server 2008 on Win Server 2003, and IIS is also running. I am trying to learn Oracle, so first I installed Oracle® Database Express Edition. I tried to connect to the Oracle server via web browser at the URL http://127.0.0.1:8080/apex and got this error page in the browser: "The page cannot be found. The page you are looking for might have been removed, had its name changed, or is temporarily unavailable. Please try the following: Make sure that the Web site address displayed in the address bar of your browser is spelled and formatted correctly. If you reached this page by clicking a link, contact the Web site administrator to alert them that the link is incorrectly formatted. Click the Back button to try another link. HTTP Error 404 - File or directory not found. Internet Information Services (IIS). Technical Information (for support personnel): Go to Microsoft Product Support Services and perform a title search for the words HTTP and 404. Open IIS Help, which is accessible in IIS Manager (inetmgr), and search for topics titled Web Site Setup, Common Administrative Tasks, and About Custom Error Messages." Why can I not get to the Oracle home page?
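
    Worth noting: the 404 page shown is IIS's own error page, which suggests port 8080 is being answered by IIS rather than by the XE embedded gateway. A possible check and fix from SQL*Plus, connected as SYSTEM (a sketch, assuming the standard XDB package that ships with XE):

        -- 0 means the embedded HTTP listener is disabled
        SELECT DBMS_XDB.GETHTTPPORT FROM dual;

        -- move APEX to a port IIS is not using, then browse to that port instead
        EXEC DBMS_XDB.SETHTTPPORT(8081);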

    Read the article

  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first step, I generated a self-signed certificate using OpenSSL and added the certificate to a wallet. The wallet location is specified in the server configuration. I created the listener and it starts; however, it does not provide any service. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below.

        STATUS of the LISTENER
        Alias                     SSLLISTENER
        Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
        Start Date                14-NOV-2009 01:47:08
        Uptime                    16 days 22 hr. 14 min. 3 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
        The listener supports no services
        The command completed successfully

    Here is the exact content of the various files after configuration.

    1) File name: tnsnames.ora

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    2) File name: sqlnet.ora

        SSL_VERSION = 0
        NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
        sqlnet.authentication_services= (NONE)
        tcp.validnode_checking = no
        tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098)
        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )

    3) File name: listener.ora

        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
            )
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521))
            )
          )
        SSLLISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
          )

    Thanks, Santhosh

    Read the article

  • schedule backup and restore of SSAS 2008 database

    - by Manjot
    Hi, I can back up and restore databases on Microsoft SQL Server Analysis Services 2008 using the GUI, as described in Backup SSAS. I want to schedule a backup and restore it to another server every night, so what I did is: I scripted out the backup and restore process from the GUI, created a new SQL Server Agent job in the database engine, added a "Run SSAS query" step, and copied the scripts into this step. But it fails. The scripts that the GUI produced look like this:

        <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <DatabaseID>DB</DatabaseID>
          </Object>
          <File>C:\Backup\DB.abf</File>
          <AllowOverwrite>true</AllowOverwrite>
        </Backup>

        <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <File>\\server\C$\Backup\DB.abf</File>
          <DatabaseName>DB</DatabaseName>
          <AllowOverwrite>true</AllowOverwrite>
        </Restore>

    Any help please?

    Read the article

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows: RAID 1 for the OS, RAID 10 for logs, RAID 10 fibre on the SAN for high-IO databases, and RAID 10 SATA on the SAN for content databases. My question in regards to the principal server is: where should I place tempdb? I thought about placing it on the fibre RAID 10, which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN, it is all local, 6 x 15k SAS drives. My question there is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.
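
    A side note on mechanics rather than placement: wherever tempdb ends up, it can be relocated later with ALTER DATABASE (the change takes effect after a service restart). A sketch with placeholder paths and the default logical file names:

        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');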

    Read the article

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load a site, I get the following error:

        Microsoft OLE DB Provider for SQL Server error '80004005'
        [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied

    If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so and then just fails with the above message. During this waiting period I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its configuration with respect to Access databases, and I also don't know what version it is expected to be. The VB code in question:

        set oConn=server.createobject("adodb.connection")
        DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
        oConn.Open DSNtemp

    Read the article

  • Fast distributed filesystem for a large amounts of data with metadata in database

    - by undefined hero
    My project uses several processing machines and one storage machine. Currently storage is organized as an MSSQL FileTable shared folder, and every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage and their metadata. After completing a task, a processing machine puts the resulting data back in storage. From there it is taken by another processing machine, which also generates some file and puts it back in storage, and so on. Everything was fine, but as the number of processing machines increases, I have found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS to lift load from the storage machine, so that they can take data from each other and not only from the storage machine. Can you suggest a particular distributed FS which meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (storage can handle this, but the processing machines cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, not modified. My current platform is Windows, but I'm ready to switch it if there is a substantially more convenient solution on another one.

    Read the article

  • Moving a Lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another hard drive (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path, etc. That part works fine. I open up a Lotus Notes client, use the ID and then OPEN the database; the directory link works and the database opens from its new location with no problems. We then found out that we couldn't FORWARD any mail. We did the following to make it work: go to menu FILE | MOBILE | EDIT CURRENT LOCATION, go to the MAIL tab and enter the correct path for "Mail File:". For example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" = the directory link/pointer). We can forward and reply to emails fine after that fix. The remaining problem is that the user receives no incoming email: Domino is still trying to deliver that user's incoming mail to the old database location "mail\flabor" instead of "mail\morespace\flabor", and the delivery error says the user does not exist. Is this a cache problem? We have restarted the server ("Q" at the prompt), though we have not completely shut it down. Thanks, Frank

    Read the article

  • How to create a MySQL database that can contain any character, including different languages

    - by Jakke
    I'm trying to create a database that has to contain articles in different languages. I'm using MariaDB as my server and I know bits of SQL. My knowledge doesn't really cover details like the differences between engines like MyISAM, InnoDB etc., or character sets like utf8/16/32, latin 5/7/etc. I do know that the character set matters; I guess what I'm looking for is an all-encompassing character set and an engine that best deals with this type of content. Also, is there an advantage in storing articles in multiple data rows (the equivalent of different pages) to make things a little faster, or would you store a whole article in a single data row? Or does that depend on the size of the articles? Sorry for my noobish question; I know the information is all out on the internet, but it would take me quite a long time to research and get a grip on everything. It would be cool if someone with experience could give me a little head start and point me in the right direction. This is for an intranet site; consider the content to be somewhat like a blog (and no, I don't want WordPress or something similar at this point). Not sure if it matters, but I tend to create and manipulate my tables with phpMyAdmin, I use Apache as web server, and it all runs on Linux.
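
    By way of illustration (a sketch, not tied to any particular schema): utf8mb4 covers the full Unicode range, and InnoDB is the usual engine choice for this kind of read/write content; whole articles comfortably fit in a single TEXT/MEDIUMTEXT row unless they get very large.

        CREATE DATABASE articles
            CHARACTER SET utf8mb4
            COLLATE utf8mb4_unicode_ci;

        CREATE TABLE articles.article (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            language CHAR(5)      NOT NULL,   -- e.g. 'en', 'pt-BR'
            title    VARCHAR(255) NOT NULL,
            body     MEDIUMTEXT   NOT NULL    -- one whole article per row
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;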

    Read the article

  • Issues with "There is already an object named 'xxx' in the database"

    - by Hoser
    I'm fairly new to SQL, so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. The problem is that whenever I try to use my temp table, it tells me it cannot be used because there is already an object with that name. I frequently try switching up the names, and sometimes it'll let me work with the table for a little while, but it never lasts for long. Am I dropping the table incorrectly? Also, I've had people suggest just using a permanent table, but this database does not allow me to do that.

        create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal)

        select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue
        into #RandomTableName
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw
        where (ObjectName like 'Processor' AND CounterName like '% Processor Time')
           OR (ObjectName like 'System' AND CounterName like 'Processor Queue Length')
           OR (ObjectName like 'Memory' AND CounterName like 'Pages/Sec')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read')
           OR (ObjectName like 'Physical Disk' and CounterName like '% Disk Time')
           OR (ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100)
        order by ObjectName, SampleValue

        drop table #RandomTableName
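
    For what it's worth, SELECT ... INTO creates the destination table by itself, so pairing it with an explicit CREATE TABLE for the same name is one way to hit exactly this error; a guarded drop at the top of the batch also helps when a previous run left the table behind in the same session. A small sketch:

        -- Clean up any leftover copy from a previous run in this session
        IF OBJECT_ID('tempdb..#RandomTableName') IS NOT NULL
            DROP TABLE #RandomTableName;

        -- Then either drop the CREATE TABLE and let SELECT ... INTO build the table,
        -- or keep the CREATE TABLE and switch to INSERT INTO #RandomTableName SELECT ...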

    Read the article

  • How can I clear an SQLite database each time I start my app

    - by AndroidUser99
    Hi, I want my SQLite database to be cleaned each time I start my app. For this I need to add a method to my class MyDBAdapter.java; code examples are welcome, as I have no idea how to do it. This is the DB adapter/helper I'm using:

        public class MyDbAdapter {
            private static final String TAG = "NotesDbAdapter";
            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;
            private static final String DATABASE_NAME = "gpslocdb";
            private static final String PERMISSION_TABLE_CREATE = "CREATE TABLE permission ( fk_email1 varchar, fk_email2 varchar, validated tinyint, hour1 time default '08:00:00', hour2 time default '20:00:00', date1 date, date2 date, weekend tinyint default '0', fk_type varchar, PRIMARY KEY (fk_email1,fk_email2))";
            private static final String USER_TABLE_CREATE = "CREATE TABLE user ( email varchar, password varchar, fullName varchar, mobilePhone varchar, mobileOperatingSystem varchar, PRIMARY KEY (email))";
            private static final int DATABASE_VERSION = 2;
            private final Context mCtx;

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(PERMISSION_TABLE_CREATE);
                    db.execSQL(USER_TABLE_CREATE);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    db.execSQL("DROP TABLE IF EXISTS user");
                    db.execSQL("DROP TABLE IF EXISTS permission");
                    onCreate(db);
                }
            }

            /**
             * Constructor - takes the context to allow the database to be opened/created
             *
             * @param ctx the Context within which to work
             */
            public MyDbAdapter(Context ctx) {
                this.mCtx = ctx;
            }

            /**
             * Open the database. If it cannot be opened, try to create a new instance of
             * the database. If it cannot be created, throw an exception to signal the failure
             *
             * @return this (self reference, allowing this to be chained in an initialization call)
             * @throws SQLException if the database could be neither opened or created
             */
            public MyDbAdapter open() throws SQLException {
                mDbHelper = new DatabaseHelper(mCtx);
                mDb = mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                mDbHelper.close();
            }

            public long createUser(String email, String password, String fullName, String mobilePhone, String mobileOperatingSystem) {
                ContentValues initialValues = new ContentValues();
                initialValues.put("email",email);
                initialValues.put("password",password);
                initialValues.put("fullName",fullName);
                initialValues.put("mobilePhone",mobilePhone);
                initialValues.put("mobileOperatingSystem",mobileOperatingSystem);
                return mDb.insert("user", null, initialValues);
            }

            public Cursor fetchAllUsers() {
                return mDb.query("user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"}, null, null, null, null, null);
            }

            public Cursor fetchUser(String email) throws SQLException {
                Cursor mCursor = mDb.query(true, "user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"} , "email" + "=" + email, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            public List<Friend> retrieveAllUsers() {
                List <Friend> friends=new ArrayList<Friend>();
                Cursor result=fetchAllUsers();
                if( result.moveToFirst() ){
                    do{
                        //note.getString(note.getColumnIndexOrThrow(NotesDbAdapter.KEY_TITLE)));
                        friends.add(new Friend(result.getString(result.getColumnIndexOrThrow("email")), "","","",""));
                    }while( result.moveToNext() );
                }
                return friends;
            }
        }
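
    A hedged sketch of what the clearing boils down to: a new method on the adapter would execute plain SQL like the following against the open database (table names taken from the CREATE statements above), or simply invoke the same DROP/CREATE sequence used in onUpgrade.

        DELETE FROM permission;
        DELETE FROM user;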

    Read the article

  • No output from Java Database Connectivity code

    - by Dooree
    I'm working on Java Database Connectivity through the Eclipse IDE. I built a database through the Ubuntu terminal, and I need to connect to it and work with it. However, when I run the following code I don't get any error, but the output shown after the code is displayed instead. Does anybody know why I don't get the output from the code?

        //STEP 1. Import required packages
        import java.sql.*;

        public class FirstExample {
            // JDBC driver name and database URL
            static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
            static final String DB_URL = "jdbc:mysql://localhost/EMP";

            // Database credentials
            static final String USER = "username";
            static final String PASS = "password";

            public static void main(String[] args) {
                Connection conn = null;
                Statement stmt = null;
                try{
                    //STEP 2: Register JDBC driver
                    Class.forName("com.mysql.jdbc.Driver");

                    //STEP 3: Open a connection
                    System.out.println("Connecting to database...");
                    conn = DriverManager.getConnection(DB_URL,USER,PASS);

                    //STEP 4: Execute a query
                    System.out.println("Creating statement...");
                    stmt = conn.createStatement();
                    String sql;
                    sql = "SELECT id, first, last, age FROM Employees";
                    ResultSet rs = stmt.executeQuery(sql);

                    //STEP 5: Extract data from result set
                    while(rs.next()){
                        //Retrieve by column name
                        int id = rs.getInt("id");
                        int age = rs.getInt("age");
                        String first = rs.getString("first");
                        String last = rs.getString("last");

                        //Display values
                        System.out.print("ID: " + id);
                        System.out.print(", Age: " + age);
                        System.out.print(", First: " + first);
                        System.out.println(", Last: " + last);
                    }
                    //STEP 6: Clean-up environment
                    rs.close();
                    stmt.close();
                    conn.close();
                }catch(SQLException se){
                    //Handle errors for JDBC
                    se.printStackTrace();
                }catch(Exception e){
                    //Handle errors for Class.forName
                    e.printStackTrace();
                }finally{
                    //finally block used to close resources
                    try{
                        if(stmt!=null)
                            stmt.close();
                    }catch(SQLException se2){
                    }// nothing we can do
                    try{
                        if(conn!=null)
                            conn.close();
                    }catch(SQLException se){
                        se.printStackTrace();
                    }//end finally try
                }//end try
                System.out.println("Goodbye!");
            }//end main
        }//end FirstExample

    The output I get instead:

        <ConnectionProperties>
          <PropertyCategory name="Connection/Authentication">
            <Property name="user" required="No" default="" sortOrder="-2147483647" since="all"> The user to connect as </Property>
            <Property name="password" required="No" default="" sortOrder="-2147483646" since="all"> The password to use when connecting </Property>
            <Property name="socketFactory" required="No" default="com.mysql.jdbc.StandardSocketFactory" sortOrder="4" since="3.0.3"> The name of the class that the driver should use for creating socket connections to the server. This class must implement the interface 'com.mysql.jdbc.SocketFactory' and have public no-args constructor. </Property>
            <Property name="connectTimeout" required="No" default="0" sortOrder="9" since="3.0.1"> Timeout for socket connect (in milliseconds), with 0 being no timeout. Only works on JDK-1.4 or newer. Defaults to '0'. </Property>
            ...

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >