Search Results

Search found 31891 results on 1276 pages for 'database schema'.


  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping an eye on the file I noticed that it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the table is across my network, this is taking days to complete. So I'm concerned that, since the file isn't growing, it isn't actually being written to anymore. Which is odd: I have no catch blocks in my code, so if there were a problem writing to the file, wouldn't it have thrown an error? Should I let the task complete (estimated 2-3 days from today), or is there something else going on here that I don't know about, making my application stop writing to the file? My algorithm goes something like this:
        Declare file
        Create new file
        Open file for writing
        Get database connection
        Get resultset from database
        For each row in the resultset
          - write column "A" to file
          - if row# % 100000 == 0 then write to screen "completed " + row# + " rows"
        When no more rows exist
        Close file
        Write to screen "completed"
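    Two ordinary explanations for a file that seems to stop growing are an unflushed write buffer (the last chunk sits in memory until it is flushed) and a result set that has simply stalled or slowed down, in which case there is nothing new to write. The sketch below is only an illustration of the loop described above, with an explicit periodic flush so the on-disk size tracks progress; the JDBC URL, table and column names are placeholders, not taken from the original post.
        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ColumnDump {
            public static void main(String[] args) throws Exception {
                try (BufferedWriter out = new BufferedWriter(new FileWriter("columnA.txt"));
                     Connection con = DriverManager.getConnection(
                             "jdbc:mysql://dbhost/mydb", "user", "pass"); // placeholder connection
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT a FROM big_table")) { // simplified query
                    long row = 0;
                    while (rs.next()) {
                        out.write(rs.getString(1));
                        out.newLine();
                        if (++row % 100000 == 0) {
                            out.flush(); // push buffered rows to disk so file size reflects progress
                            System.out.println("completed " + row + " rows");
                        }
                    }
                }
                System.out.println("completed");
            }
        }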

    Read the article

  • Snapshot on, still deadlocks, ROWLOCK

    - by Patto
    I turned snapshot isolation on in my database using the following code:
        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
    and got rid of lots of deadlocks. But my database still produces deadlocks when I run a script every hour to clean up 100,000+ rows. Is there a way I can avoid deadlocks in my cleanup script? Do I need to set ROWLOCK specifically in that query? Is there a way to increase the number of row-level locks that a database uses? How are locks promoted - from row level to page level to table level? My delete script is rather simple:
        delete statvalue
        from statValue, (select dateadd(minute, -60, getdate()) as cutoff_date) cd
        where temporaryStat = 1 and entrydate < cutoff_date
    Right now I am looking for a quick solution, but a long-term solution would be even nicer. Thanks a lot, Patrikc
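    To make the ROWLOCK idea mentioned in the question concrete: a table hint can be attached directly to the delete, which asks SQL Server to start with row locks rather than page locks (escalation can still happen). This is only an illustrative sketch of that one option, run here through plain JDBC; connection details are placeholders and only testing will show whether it actually helps with these particular deadlocks.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class HourlyCleanup {
            public static void main(String[] args) throws Exception {
                // Same delete as in the question, with an explicit ROWLOCK hint on the target table.
                String sql =
                    "DELETE sv " +
                    "FROM statValue sv WITH (ROWLOCK) " +
                    "WHERE sv.temporaryStat = 1 " +
                    "  AND sv.entrydate < DATEADD(minute, -60, GETDATE())";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://dbhost;databaseName=MyDatabase", "user", "pass"); // placeholder
                     Statement st = con.createStatement()) {
                    int rows = st.executeUpdate(sql);
                    System.out.println("deleted " + rows + " rows");
                }
            }
        }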

    Read the article

  • Twitter oauth php problems

    - by Patrick Gates
    I'm writing some backend script for a Twitter app, and here's how it works. On the app you click a button that sends you to login.php on my server, which logs into my database and connects to Twitter with my consumer key and secret:
        $to = new TwitterOAuth($consumer_key, $consumer_secret);
        $tok = $to->getRequestToken();
        $request_link = $to->getAuthorizeURL($tok);
    It then writes the token and secret to the database, sets a session equal to the database id of the token and secret, and redirects to $request_link. You then go through the process of logging in on Twitter, and it redirects you to callback.php on my server. Callback.php logs into the database again, gets the new token and secret, writes the new token and secret to the database, and then prompts you to go back to the app. Then, on the app, all I'm trying to do is access the basic credentials with $to->get('account/verify_credentials'), and it keeps coming back with "could not authenticate you". What am I doing wrong?? Thank you for all the help :)

    Read the article

  • Is there an equivalent to C++'s "friend class" in Java?

    - by Ricket
    In C++ there is a concept of a "friend", which has access to a class's private variables and functions. So if you have:
        class Foo {
            friend class Bar;
        private:
            int x;
        };
    then any instance of the class Bar can modify any Foo instance's x member, despite it being private, because Bar is a friend of Foo. Now I have a situation in Java where this functionality would come in handy. There are three classes: Database, Modifier, Viewer. The Database is just a collection of variables (like a struct). Modifier should be "friends" with Database; that is, it should be able to read and write its variables directly. But Viewer should only be able to read Database's variables. How is this best implemented? Is there a good way to enforce Viewer's read-only access to Database?
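    Java has no direct friend equivalent, but one common way to get the Modifier/Viewer split described above is to hand Viewer only a read-only interface while Modifier holds the concrete class (package-private fields plus same-package access is another route). The sketch below is just one possible design, using the class names from the question; it is compressed into a single file for brevity, and a determined caller could still downcast, so it is a convention rather than a hard guarantee.
        interface ReadableDatabase {
            int getX();
        }

        class Database implements ReadableDatabase {
            private int x;

            public int getX() { return x; }

            // Only code holding a Database reference (e.g. Modifier) can write.
            public void setX(int x) { this.x = x; }
        }

        class Modifier {
            private final Database db;
            Modifier(Database db) { this.db = db; }
            void bump() { db.setX(db.getX() + 1); }
        }

        class Viewer {
            private final ReadableDatabase db; // setX is not visible through this type
            Viewer(ReadableDatabase db) { this.db = db; }
            int show() { return db.getX(); }
        }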

    Read the article

  • VB.NET: SQLite to MSSQL

    - by user1736785
    I have a VB.NET project that uses a SQLite database; I access it using dataset/table adapters. The client is happy and all works well. However, I have just heard that they plan on providing this product to another customer that wishes to use their MSSQL database, so I am writing this post to mentally prepare before I begin. I am not a database pro and have really enjoyed the simplicity of setting up and managing an SQLite database. Any ideas on the easiest way to support MSSQL as well? I am happy to run the two in parallel. Can I just make a separate service/middleware that syncs the SQLite database to MSSQL on a timer and does not care about what the main app is up to? Any pointers are appreciated.

    Read the article

  • How do I delete records in a table while keeping certain data?

    - by mathew
    My site has lots of incoming searches, which are stored in a database so I can show recent queries on my website. Because of the high volume of searches, my database keeps getting bigger. What I want is to keep only the most recent queries in the database - say, 10 records. That keeps the database small and queries will be faster. I am able to store incoming queries in the database, but I don't know how to restrict the table size or delete the excess/old rows. Any help?? I am using PHP and MySQL.
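    The key piece is a single SQL statement that removes everything except the newest N rows; the same statement can be issued from PHP (mysqli/PDO) after each insert or from a scheduled job. The sketch below shows one way to phrase it for MySQL, wrapped in plain JDBC only for illustration; the table name "searches" and its auto-increment "id" column are assumptions, not taken from the original post.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class TrimSearchLog {
            public static void main(String[] args) throws Exception {
                // Keep the 10 newest rows (by auto-increment id) and delete the rest.
                // The derived table works around MySQL's restriction on referencing
                // the delete target directly in a subquery that uses LIMIT.
                String sql =
                    "DELETE FROM searches WHERE id NOT IN (" +
                    "  SELECT id FROM (SELECT id FROM searches ORDER BY id DESC LIMIT 10) AS newest)";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mydb", "user", "pass"); // placeholder connection
                     Statement st = con.createStatement()) {
                    int removed = st.executeUpdate(sql);
                    System.out.println("deleted " + removed + " old search rows");
                }
            }
        }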

    Read the article

  • SQLiteOpenHelper problem

    - by Harsha M V
    I have created a SQLiteOpenHelper class to help me open and write to the database, but I am not able to invoke it from the Main.java activity. I have created a class which extends the database helper, stored at /Messaging/src/com/v3/messaging/DatabaseHelper.java. Code: http://pastebin.com/Z5qp32xu Now I have this class called Main.java, which will be the first activity on the launch of the application. How can I make DatabaseHelper.java run just to create the database while staying in the Main.java file? The database and its tables should be created only when they don't already exist. Main.java code: http://pastebin.com/LVFVuhA0 When I run the program, the database is not being created :( I am trying to learn Android, so please excuse me if I forgot to mention something.
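    Assuming DatabaseHelper extends SQLiteOpenHelper (as described), nothing touches the database until something asks the helper for one: calling getWritableDatabase() (or getReadableDatabase()) is what makes Android open the file and invoke the helper's onCreate() to build the tables, and it does so only if the database file does not exist yet. A minimal sketch of what that could look like from the launch activity; the constructor signature and variable names are illustrative, not taken from the pastebin code.
        import android.app.Activity;
        import android.database.sqlite.SQLiteDatabase;
        import android.os.Bundle;

        public class Main extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Constructing the helper is cheap; the database is not opened yet.
                DatabaseHelper helper = new DatabaseHelper(this);

                // Opening the database triggers DatabaseHelper.onCreate() (table creation)
                // only if the database file does not exist; otherwise it is simply opened.
                SQLiteDatabase db = helper.getWritableDatabase();
                db.close();
            }
        }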

    Read the article

  • MS Access Crashed and now all Form objects and code modules are missing

    - by owlie
    I was adding a form to our Access 07 db. I copied an existing form to use as a template, renamed it, and saved it. I opened a different form to check something and Access crashed. When I reopened the database it said: "Access has detected that this database is in an inconsistent state, and will attempt to recover the database." etc. When it reopened, all forms and reports were missing. Saved queries remain. The error message states that object recovery failures will be noted in a Recovery Errors table - but this table wasn't created. The links to the back-end (BE) database remained intact. The database is split - I was experimenting with a form on a front-end copy, which might have something to do with it. Any ideas what would cause this (I can see losing recent work - but nixing all form objects?!), and is there any chance of recovery?

    Read the article

  • Unable to use JAR in Eclipse

    - by Myn
    Hi guys, I have just created my first JAR in Eclipse - just a simple program with a single class, Database.class. It is not in a package.
        public class Database {
            public Database() {
                int dbInit = 1;
            }
        }
    I have added it as an external JAR to the build path libraries for another project in Eclipse, but for some reason I cannot get Database db = new Database(), the default constructor, to work - it's as if the contents of the JAR are not being recognised. Could anyone please offer any advice on this? Thanks very much, M

    Read the article

  • Export data from mysql table row into an array format

    - by user1804952
    I am trying to output data from the "artist" field of the "artists" table into the format below. Artist names can have special characters and there are over 16k of them. It needs to be written to a file, called anything - artist.php for example:
        $Artist = array(
            "Name from database",
            "Name from database",
            "Name from database",
            "Name from database",
            "Name from database"
        );
    Sorry for not explaining: I'm doing this for AJAX auto-complete, so I need to create a file with this array in it. Here is the exact script: http://www.brandspankingnew.net/specials/ajax_autosuggest/ajax_autosuggest_autocomplete.html
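    The job boils down to: read every artist name, escape any characters that would break the generated PHP string literals, and write the names out wrapped in the array syntax shown above. A rough sketch of that generation step, shown with JDBC purely for illustration (the same loop is easy to write in PHP itself); connection details and the output file name are placeholders.
        import java.io.PrintWriter;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.List;

        public class ArtistArrayFile {
            public static void main(String[] args) throws Exception {
                List<String> names = new ArrayList<>();
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mydb", "user", "pass"); // placeholder connection
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT artist FROM artists")) {
                    while (rs.next()) {
                        // Escape backslashes, quotes and $ so the PHP double-quoted strings stay valid.
                        names.add(rs.getString(1)
                                .replace("\\", "\\\\")
                                .replace("\"", "\\\"")
                                .replace("$", "\\$"));
                    }
                }
                try (PrintWriter out = new PrintWriter("artist.php", "UTF-8")) {
                    out.println("<?php"); // opening tag added so the file parses as PHP
                    out.println("$Artist = array(");
                    for (int i = 0; i < names.size(); i++) {
                        out.println("    \"" + names.get(i) + "\"" + (i < names.size() - 1 ? "," : ""));
                    }
                    out.println(");");
                }
            }
        }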

    Read the article

  • C# creating a custom user interface

    - by CSharpInquisitor
    Hi, I have a SQL database holding a number of numeric and text values that get updated regularly. The exact number/type/names of these data points can change depending on the source of the database writes. I would like to create a user interface editor where the user can add database points to the UI and arrange and format them as they want. If a new point is added to the database, they can right-click on the UI, say "add this point", and choose from a list of database points. I'm looking for some pointers on where to start with this editor application. Could something clever be done using XAML to dynamically create standard WPF controls at runtime? Thanks in advance for any help, Si

    Read the article

  • Sphinx without using an auto_increment id

    - by squeeks
    I am currently planning to create a big database (2+ million rows) with a variety of data from separate sources. I would like to avoid structuring the database around auto_increment ids, to help prevent sync issues with replication and also because each item inserted will have an alphanumeric product code that is guaranteed to be unique - it makes more sense to me to use that instead. I am looking at a search engine to index this database, with Sphinx looking rather appealing due to its design around indexing relational databases. However, the various tutorials and documentation seem to show database designs depending on an auto_increment field in one form or another, and there is a rather bold statement in the documentation saying that document ids must be 32/64-bit integers only or things break. Is there a way to have a database indexed by Sphinx without auto_increment fields as the id?
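    For context, the integer requirement applies to the document id Sphinx is given, not to the table's own primary key, so one workaround people use is to keep the alphanumeric product code as the real key and derive a stable 64-bit id from it by hashing, accepting (and ideally checking for) the small collision risk. A purely illustrative sketch of that mapping:
        import java.nio.ByteBuffer;
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class SphinxDocId {
            // Derive a stable 64-bit document id from an alphanumeric product code.
            static long fromProductCode(String code) throws Exception {
                byte[] md5 = MessageDigest.getInstance("MD5")
                        .digest(code.getBytes(StandardCharsets.UTF_8));
                // Take the first 8 bytes and clear the sign bit so the id stays positive.
                return ByteBuffer.wrap(md5).getLong() & Long.MAX_VALUE;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(fromProductCode("AB-1234-XYZ")); // hypothetical product code
            }
        }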

    Read the article

  • creating a custom user interface in WPF

    - by CSharpInquisitor
    I have a SQL database holding a number of numeric and text values that get updated regularly. The exact number/type/names of these data points can change depending on the source of the database writes. I would like to create a user interface editor where the user can add database points to the UI and arrange and format them as they want. If a new point is added to the database, they can right-click on the UI, say "add this point", and choose from a list of database points. I'm looking for some pointers on where to start with this editor application. Could something clever be done using XAML to dynamically create standard WPF controls at runtime?

    Read the article

  • Postgres cannot connect to server

    - by user1408935
    Super stumped by why Postgres isn't working on a new app I just started. I've got it working for one app already. I'm using postgres.app, and it's running. I started a new app with rails new depot -d postgresql and then I went into the database.yml file and changed username to my $USER (which is what it is for the other app, which is working). So now my database.yml file has this development section: development: adapter: postgresql encoding: unicode database: depot_development pool: 5 username: <username> password: But when I run "rake db:create" or "rake db:create:all" I still got this error (in full, cause I don't know what's relevant): Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_development", "pool"=>5, "username"=>"<username>", "password"=>nil} could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `connect' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:329:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `postgresql_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:309:in `new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:319:in `checkout_new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:241:in `block (2 levels) in checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `loop' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `block in checkout' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:233:in `checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in `block in connection' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:95:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:404:in `retrieve_connection' 
/Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:170:in `retrieve_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:144:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:107:in `rescue in create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:51:in `create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (3 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (2 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `load' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `<main>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `eval' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `<main>' Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_test", "pool"=>5, "username"=>"<username>", "password"=>nil} I have tried 
        createdb depot_development
    I have also tried going into the psql environment and listing users (which included my username among them). In the same psql environment, I tried CREATE DATABASE depot; I've made sure that the pg gem is installed with bundle install. I've run "pg_ctl start", to which I got this response:
        pg_ctl: no database directory specified and environment variable PGDATA unset
    I ran "ps aux | grep postgres" to make sure postgres was running, to which I got this in return (which looks like it's doing OK, right?):
        <username> 10390 0.4 0.0 2425480 180 s000 R+ 6:15PM 0:00.00 grep postgres
        <username> 2907 0.0 0.0 2441604 464 ?? Ss 6:17PM 0:02.31 postgres: stats collector process
        <username> 2906 0.0 0.0 2445520 1664 ?? Ss 6:17PM 0:02.33 postgres: autovacuum launcher process
        <username> 2905 0.0 0.0 2445388 600 ?? Ss 6:17PM 0:09.25 postgres: wal writer process
        <username> 2904 0.0 0.0 2445388 1252 ?? Ss 6:17PM 0:12.08 postgres: writer process
        <username> 2902 0.0 0.0 2445388 3688 ?? S 6:17PM 0:00.54 /Applications/Postgres.app/Contents/MacOS/bin/postgres -D /Users/<username>/Library/Application Support/Postgres/var -p5432
    The short of it is, I've been troubleshooting for a WHILE and have NO idea what's wrong. Any ideas? I'd really appreciate it, 'cause I'm pretty new to Rails, and this is a pretty disheartening roadblock. Thanks!
    EDIT -- Per request, posting the successful database.yml. It seems the difference is the inclusion of a password:
        development:
          adapter: postgresql
          encoding: unicode
          database: *******_development
          pool: 5
          username: *******
          password: *******
    EDIT2 -- When I add a password to the .yml file and then run rake db:create again, I get this error:
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use that table to help us build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.
    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:
        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted
    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later - so long as the service doesn't restart.
    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.
    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server, it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.
    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.
    If I need to get this running again in the future, I could just make a small change to the Application Name property again, save it, and even change it back again if I wanted to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.
    To sum up:
    - The SSAS Query Logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling Metric Errors First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error' on the other hand however, is a failure to collect metric data: The Agent is throwing the error because it cannot determine the value for the metric Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: If you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a Metric Error Clearing a metric error is pretty much the same workflow as a metric 'alert': The Agent signals the error after it failed to execute the metric The error is uploaded to the OMS/repository, where it becomes visible in the Console The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly Knowing this, the way to fix the metric error should be obvious: Take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: If you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers There are various ways the Agent can collect metric data: Via a script or a SQL statement, reading a log file, getting a value from an SNMP OID or listening for SNMP traps or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alert which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might be still aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: Those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, the incomplete recovery, and after improper use of 'emctl clearstate' are the two main causes for this). 
    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.

    Read the article

  • Creating typed WSDL’s for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott Question How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts?  Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDL’s for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas\contracts. Recommendation Take a copy of the generic WSDL’s and modify it to use the proper contracts. This is very easy.  It will work with the generic on ramps so long as the <part>?</part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations.  Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare.  I have not managed it.  Consequences I can only see the following consequences of removing the <part> wrapper: ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages.  Flat file formatted messages – the endpoint will no longer support flat file message formats.  However, even if you needed to support this integration pattern through WCF, you would most-likely want to create a separate receive location anyway with its’ own independently configured XML disassembler pipeline component. Instructions These steps show how to implement a request-response implementation of this. WCF Receive Locations In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to “UseBodyPath”: Inbound BizTalk message body  = Body Outbound WCF message body = Body Create a WSDL’s for each supported integration use-case Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl).   Add any xsd schemas files imported below to this same folder.   
Edit the WSDL to import schemas For example, this: <xsd:schema targetNamespace=http://microsoft.practices.esb/Imports /> … would become something like: <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">     <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd"                            namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>     <xsd:import  schemaLocation="SuppliersDocument_V1.xsd"                              namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>     <xsd:import schemaLocation="Types\Supplier_V1.xsd"                              namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>     <xsd:import  schemaLocation="GovTalk\bs7666-v2-0.xsd"                               namespace="http://www.govtalk.gov.uk/people/bs7666"/>     <xsd:import  schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd"                             namespace="http://www.govtalk.gov.uk/core"/>     <xsd:import  schemaLocation="GovTalk\AddressTypes-v2-0.xsd"                              namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/> </xsd:schema> Modify the Input and Output message For example, this: <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">   <wsdl:part name="part" type="xsd:anyType"/> </wsdl:message> <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">   <wsdl:part name="part" type="xsd:anyType"/> </wsdl:message> … would become something like: <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">   <wsdl:part name="part"                       element="ssc:SupplierSearchEvent"                         xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0" /> </wsdl:message> <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">   <wsdl:part name="part"                       element="sd:SuppliersDocument"                       xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/> </wsdl:message> This WSDL can now be added as a service reference in client solutions.

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some stuff so I'm creating some documents to help developers speedup the learning curve and also avoid mistakes and help them write clean & reliable code. This is my first version of it, so I'm pretty sure I will be adding more stuff to it, so stay tuned! C# Official driver notes The 10gen official MongoDB driver should always be referenced in projects by using NUGET. Do not manually download and reference assemblies in any project. C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart Reference links C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home MongoDB Server Downloads: http://www.mongodb.org/downloads MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects Tutorials Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish):http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx MongoDB and C#:http://www.codeproject.com/Articles/87757/MongoDB-and-C C# driver LINQ tutorial:http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial Safe Mode Connection The C# driver supports two connection modes: safe and unsafe. Safe connection mode (only applies to methods that modify data in a database like Inserts, Deletes and Updates. While the current driver defaults to unsafe mode (safeMode == false) it's recommended to always enable safe mode, and force unsafe mode on specific things we know aren't critical. When safe mode is enabled, the driver internal code calls the MongoDB "getLastError" function to ensure the last operation is completed before returning control the the caller. For more information on using safe mode and their implicancies on performance and data reliability see: http://www.mongodb.org/display/DOCS/getLastError+Command If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire & forget) without waiting for the result of the operation. This mode could be useful for creating / updating non-critical data like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode The safe mode configuration can be specified at different levels: Connection string: mongodb://hostname/?safe=true Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method Operation: for example, when executing the collection.Insert(document, safeMode) method Some useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver Exception Handling The driver ensures that an exception will be thrown in case of something going wrong, in case of using safe mode (as said above, when not using safe mode no exception will be thrown no matter what the outcome of the operation is). 
As explained here https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check for any returned value from a driver method inserting data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call will suceed anyway (no exception thrown) and you manually have to check for something like "records affected". For MongoDB, an Update operation will return an instance of the "SafeModeResult" class, and you can verify the "DocumentsAffected" property to ensure the intended document was indeed updated. Note: Please remember that an Update method might return a null instance instead of an "SafeModeResult" instance when safe mode is not enabled. Useful Community Articles Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/ FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/ Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson MongoDB Introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc: http://www.markus-gattol.name/ws/mongodb.html MongoDB Collection level locking support: https://jira.mongodb.org/browse/SERVER-1240 MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross references the type names from the user interface with the SDK class and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous, there is a lot there, but there is a general pattern to the SDK that I will describe here. Also I will illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in groovy, you can simply run from the groovy console in ODI 11.1.1.6. Entry to the Platform   Object Finder SDK odiInstance odiInstance (groovy variable for console) OdiInstance Topology Objects Object Finder SDK Technology IOdiTechnologyFinder OdiTechnology Context IOdiContextFinder OdiContext Logical Schema IOdiLogicalSchemaFinder OdiLogicalSchema Data Server IOdiDataServerFinder OdiDataServer Physical Schema IOdiPhysicalSchemaFinder OdiPhysicalSchema Logical Schema to Physical Mapping IOdiContextualSchemaMappingFinder OdiContextualSchemaMapping Logical Agent IOdiLogicalAgentFinder OdiLogicalAgent Physical Agent IOdiPhysicalAgentFinder OdiPhysicalAgent Logical Agent to Physical Mapping IOdiContextualAgentMappingFinder OdiContextualAgentMapping Master Repository IOdiMasterRepositoryInfoFinder OdiMasterRepositoryInfo Work Repository IOdiWorkRepositoryInfoFinder OdiWorkRepositoryInfo Project Objects Object Finder SDK Project IOdiProjectFinder OdiProject Folder IOdiFolderFinder OdiFolder Interface IOdiInterfaceFinder OdiInterface Package IOdiPackageFinder OdiPackage Procedure IOdiUserProcedureFinder OdiUserProcedure User Function IOdiUserFunctionFinder OdiUserFunction Variable IOdiVariableFinder OdiVariable Sequence IOdiSequenceFinder OdiSequence KM IOdiKMFinder OdiKM Load Plans and Scenarios   Object Finder SDK Load Plan IOdiLoadPlanFinder OdiLoadPlan Load Plan and Scenario Folder IOdiScenarioFolderFinder OdiScenarioFolder Model Objects Object Finder SDK Model IOdiModelFinder OdiModel Sub Model IOdiSubModel OdiSubModel DataStore IOdiDataStoreFinder OdiDataStore Column IOdiColumnFinder OdiColumn Key IOdiKeyFinder OdiKey Condition IOdiConditionFinder OdiCondition Operator Objects   Object Finder SDK Session Folder IOdiSessionFolderFinder OdiSessionFolder Session IOdiSessionFinder OdiSession Schedule OdiSchedule How to Create an Object? Here is a simple example to create a project, it uses IOdiEntityManager.persist to persist the object. import oracle.odi.domain.project.OdiProject; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) project = new OdiProject("Project For Demo", "PROJECT_DEMO") odiInstance.getTransactionalEntityManager().persist(project) tm.commit(txnStatus) How to Update an Object? This update example uses the methods on the OdiProject object to change the project’s name that was created above, it is then persisted. 
import oracle.odi.domain.project.OdiProject; import oracle.odi.domain.project.finder.IOdiProjectFinder; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class); project = prjFinder.findByCode("PROJECT_DEMO"); project.setName("A Demo Project"); odiInstance.getTransactionalEntityManager().persist(project) tm.commit(txnStatus) How to Delete an Object? Here is a simple example to delete all of the sessions, it uses IOdiEntityManager.remove to delete the object. import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder; import oracle.odi.domain.runtime.session.OdiSession; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class); sessc = sessFinder.findAll(); sessItr = sessc.iterator() while (sessItr.hasNext()) {   sess = (OdiSession) sessItr.next()   odiInstance.getTransactionalEntityManager().remove(sess) } tm.commit(txnStatus) This isn't an all encompassing summary of the SDK, but covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, its best to look at postings such as the interface builder posting here. Have fun, happy coding!

    Read the article

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files and regularly adhered to the best practices that were advocated by well respected professionals. He was very proud of the fact that there was never a database that he managed that lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time that he built this server, his understanding of the finer details of configuration were not as clear as they are today. The server was build on a shoestring budget and with very little time for testing and implementation. Jim often monitored the health of the database; but in more of a reactionary mode due to user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, the announcement was made that revealed that the business had hit financially hard times. Budgets were being cut, limitation on spending was implemented and the reduction in full-time staff was required. Since having two DBAs was regarded a luxury by many, this meant that either Sam or Jim were about to find themselves out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect; but effective and responsive. He made mistakes regularly; but he shows that he learns from them and they often result in innovative solutions. When given a task it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario. He must layoff one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go? and Why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is. (This is where you enter a comment below)

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not?I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Note: Auto Update Statistics and Auto Create Statistics for database is TRUE Expected Result – Statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update Now the question is why the statistics are not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL However, the solution I am looking is where statistics should be updated automatically based on algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to take participate over here as he is the one who has helped me to build this puzzle. I will publish the solution on next week. Please leave a comment and if your comment consist valid answer, I will publish with due credit. Here is the script to reproduce the scenario which I mentioned. -- Execution Plans Difference -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table - none listed sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -------------------------------------------------------------- -- Round 2 -- Insert Ten Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats 
N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -- You will notice that Statistics are still updated with 1000 rows -- Clean up Database DROP TABLE ExecTable GO USE MASTER GO ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE SampleDB GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server Database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data; the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data, to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. If didn't happen; SQL 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
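    As a small aside to the comparison rule mentioned above (NaN compares unequal to everything, itself included), here is how those special values behave in a language whose double type does follow IEEE 754 - a trivial Java illustration:
        public class FloatSpecials {
            public static void main(String[] args) {
                double nan = Double.NaN;
                double inf = Double.POSITIVE_INFINITY;

                System.out.println(nan == nan);             // false: NaN is unequal even to itself
                System.out.println(Double.isNaN(nan));      // true: the only reliable NaN test
                System.out.println(inf > Double.MAX_VALUE); // true: +infinity exceeds every finite double
                System.out.println(1.0 / 0.0);              // Infinity: non-zero double divided by zero
            }
        }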

    Read the article

  • MySQL for Excel 1.3.0 Beta has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.0. This is a beta release for 1.3.x. MySQL for Excel is an application plug-in enabling data analysts to easily access and manipulate MySQL data within Microsoft Excel. It lets you work directly with a MySQL database from within Excel so you can easily do tasks such as:
    - Importing MySQL data into Excel
    - Exporting Excel data directly into MySQL, to a new or existing table
    - Editing MySQL data directly within Excel

    As this is a beta version, MySQL for Excel can be downloaded only by using the product standalone installer at this link: http://dev.mysql.com/downloads/windows/excel/
    Your feedback on this beta version is much appreciated; you can raise bugs on the MySQL bugs page or give us your comments on the MySQL for Excel forum.

    Changes in MySQL for Excel 1.3.0 (2014-06-06, Beta)

    This section documents all changes and bug fixes applied to MySQL for Excel since the release of 1.2.1. Several new features were added; for more information see What Is New In MySQL for Excel (http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel-what-is-new.html).

    Known limitations:
    - Upgrading from MySQL for Excel 1.2.0 or lower is not possible due to a bug fixed in MySQL for Excel 1.2.1. In that scenario, the old version (MySQL for Excel 1.2.0 or lower) must be uninstalled first. Upgrading from version 1.2.1 works correctly.
    - <CTRL> + <A> cannot be used to select all database objects. Either <SHIFT> + <Arrow Key> or <CTRL> + click must be used instead.
    - PivotTables are normally placed to the right (skipping one column) of the imported data; they will not be created if another Excel object already occupies that position.

    Functionality Added or Changed:
    - Imported data can now be refreshed by using the native Refresh feature. Fields in the imported data sheet are then updated against the live MySQL database using the saved connection ID.
    - Functionality was added to import data directly into PivotTables, which can be created from any Import operation.
    - Multiple objects (tables and views) can now be imported into Excel, where before only one object could be selected. Relational information is also utilized when importing multiple objects.
    - All options now have descriptive tooltips. Hovering over an option/preference displays helpful information about its use.
    - A new Export Data, Advanced Options option was added that shows all available data types in the Data Type combo box, instead of only a subset of the most popular data types.
    - The option dialogs now include a Refresh to Defaults button that resets the dialog's options to their default values. Each option dialog is set individually.
    - A new Add Summary Fields for Numeric Columns option was added to the Import Data dialog that automatically adds summary fields for numeric data after the last row of the imported data. The specific summary function is selectable from many options, such as "Total" and "Average."
    - A new collation option was added for the schema and table creation wizards. The default schema collation is "Server Default", and the default table collation is "Schema Default". Collation options may be selected from a drop-down list of all available collations (a DDL sketch follows at the end of this post).

    Quick links:
    - MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
    - MySQL on Windows blog: http://blogs.oracle.com/MySQLOnWindows
    - MySQL for Excel forum: http://forums.mysql.com/list.php?172
    - MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel

    Enjoy and thanks for the support!
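    As a rough illustration of the new collation options, here is a minimal sketch of the kind of MySQL DDL the schema and table creation wizards correspond to. The schema, table, and column names below are hypothetical, chosen only for this example; the plug-in generates its own statements.

        -- Hypothetical names for illustration only; MySQL for Excel builds the actual DDL.
        -- A schema with an explicit collation picked from the wizard's drop-down list
        -- (choosing "Server Default" simply omits the CHARACTER SET/COLLATE clauses):
        CREATE SCHEMA sales_demo DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- A table such as an Export Data operation might create, with an explicit collation
        -- (choosing "Schema Default" omits the table-level clauses so the table inherits
        -- the schema collation):
        CREATE TABLE sales_demo.survey_results (
          id INT NOT NULL AUTO_INCREMENT,
          respondent VARCHAR(100),
          score DECIMAL(5,2),
          submitted_at DATETIME,
          PRIMARY KEY (id)
        ) DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

    Either statement can be run from the MySQL command-line client if you want to see what a given collation choice produces outside of Excel.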

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes: I have a demo domain and another domain for the projects in my blog.

    Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g, it will use the same database information as every other domain, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console:

    ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer' *** ONLY the original domain and server that creates an LLR table may access it ***

    The solution is to create a separate database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry:

    <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url>

    You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example:

    <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url>

    will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no trouble.

    The process is even simpler if you set the database name while creating the domain with the Configuration Wizard. Simply name the database when you get to the Configure JDBC Component Schema step of the wizard: select the OSB JMS Reporting Provider and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below.

    Figure 1 – Configuring the JDBC Component Schema

    That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary, which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details.

    And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12 hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!)

    Multidimensional Thinking: The Presentation

    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract:

    “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.”

    If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did!

    Kimball All the Way

    There’s only so much I can cover in the time allotted, but I hope I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources, one that gets into much more detail about all kinds of scenarios (well beyond the basics!), is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I had to build my first data warehouse using dimensional modeling on faith that I was going in the right direction, because it just didn’t click with me initially. I’ve had years of practice since then, and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right!

    Schema Generation

    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. Yes, the subject was Integration Services, but as part of that presentation I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data mart – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, see the Using the Schema Generation Wizard topic in Books Online.
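    To make the multidimensional terminology a little more concrete, here is a minimal star schema sketch in SQL: one fact table surrounded by the dimension tables it references. The table and column names are invented for illustration; they are not taken from the presentation or from the Kimball books.

        -- A minimal, hypothetical star schema: dimensions first, then the fact table.
        CREATE TABLE DimDate (
          DateKey      INT          NOT NULL PRIMARY KEY,  -- e.g. 20110308
          FullDate     DATE         NOT NULL,
          CalendarYear SMALLINT     NOT NULL,
          MonthName    VARCHAR(10)  NOT NULL
        );

        CREATE TABLE DimProduct (
          ProductKey   INT          NOT NULL PRIMARY KEY,  -- surrogate key
          ProductName  VARCHAR(50)  NOT NULL,
          Category     VARCHAR(30)  NOT NULL
        );

        -- The fact table holds the measures plus a foreign key to each dimension.
        CREATE TABLE FactSales (
          DateKey      INT           NOT NULL REFERENCES DimDate (DateKey),
          ProductKey   INT           NOT NULL REFERENCES DimProduct (ProductKey),
          SalesAmount  DECIMAL(18,2) NOT NULL,
          Quantity     INT           NOT NULL
        );

    In a model shaped like this, a business question such as "sales amount by category and month" becomes a simple set of joins from FactSales out to its dimensions, which is exactly the kind of self-service querying a good dimensional model enables.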

    Read the article

< Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >