Search Results

Search found 519 results on 21 pages for 'configurable'.


  • Mac OSX 10.8 Server DNS Domain Routing

    - by Oldek
    I just can't seem to figure out the logic of how to configure my Mac server. I have set up a DNS zone which takes the domain and all subdomains and points them to an IP. File: db.mydomain.com (in /var/named/):

        mydomain.com. 10800 IN SOA mydomain.com. admin.mydomain.com. (
            2012110903 ; serial
            3600       ; refresh (1 hour)
            900        ; retry (15 minutes)
            1209600    ; expire (2 weeks)
            86400      ; minimum (1 day)
            )
            10800 IN NS mydomain.com.
            10800 IN A  10.0.1.2
        www.mydomain.com. 10800 IN A 10.0.1.2

    So I want all of these requests to be sent to the 10.0.1.2 server, as I run 2 servers in my cluster. This one has always handled the requests, and now I want to add a server in between. The server in between will get all the traffic from my router, which NATs the traffic coming from outside. After setting this up and trying to point port 80 towards my new server, which will be the middle point, it doesn't work. Is it even possible to do it this way? First server: Mac. Second server: Linux. So what I try to achieve once more:
    1. User goes to mydomain.com or www.mydomain.com
    2. User request gets handled by my first server
    3. First server refers to a local server, which is only available locally (it is configured to allow requests on port 80 and handle them)
    4. Second server receives the request
    5. Second server returns a response (either sent directly to the user or sent back through the first server, whichever is most secure and configurable)
    I also want to be able to set up domains that lead to other servers in the future, and some that are only available within the VPN (if that changes anything). I hope some kind soul can help me with this; I'm struggling to get the logic here. Do I have to configure my other server in any way? /Marcus
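    A common way to wire up step 3 is a reverse proxy on the first (Mac) server: DNS keeps pointing at 10.0.1.2, and Apache forwards port 80 to the internal box. A minimal sketch, assuming the built-in Apache with mod_proxy/mod_proxy_http enabled and assuming the Linux server sits at 10.0.1.3 (substitute the real internal address):

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com

            # Hand every request to the internal (second) server and rewrite
            # redirects in its responses so they appear to come from this host.
            ProxyPreserveHost On
            ProxyPass        / http://10.0.1.3/
            ProxyPassReverse / http://10.0.1.3/
        </VirtualHost>

    With this in place only the first server's port 80 needs to be NATed from the router, and the second server stays reachable from the LAN alone, which matches steps 4 and 5 above.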

    Read the article

  • rake migration aborted: could not find table 'roles'

    - by user464180
    I just inherited code that I'm attempting to run the migrations for but I keep getting a rake aborted error. I've come across others that have what appears to be similar issues, but most involved Heroku and I'm trying to run this locally (to start.) I've tried troubleshooting using both PostgreSQL and SQLite, and both produce the same issue. The table "roles" referenced is the second migration called, so I'm having a hard time figuring out what is causing it to not get built. Any and all assistance is greatly appreciated. Thanks in advance. Here's the roles migration: class CreateRoles < ActiveRecord::Migration def change create_table :roles do |t| t.string :name t.timestamps end end end Here is the trace for SQLite: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! Could not find table 'roles' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/sqlite_adapter.rb:470:in `table_structure' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/sqlite_adapter.rb:351:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/schema_cache.rb:12:in `block in initialize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `yield' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `default' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:248:in `column_names' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:261:in `column_methods_hash' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:69:in `all_attributes_exists?' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment Here is the trace for PostgreSQL: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! 
PG::Error: ERROR: relation "roles" does not exist LINE 4: WHERE a.attrelid = '"roles"'::regclass ^ : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a .attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"roles"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `async_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `exec_no_cache' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:650:in `block in exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:280:in `block in log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/notifications/instrumenter.rb:20:in `instrument' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:275:in `log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:649:in `exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1231:in `column_definitions' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:845:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/schema_cache.rb:12:in `block in initialize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `yield' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `default' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:248:in `column_names' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:261:in `column_methods_hash' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:69:in `all_attributes_exists?' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment
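    Both traces pass through config/initializers/constants.rb:1, i.e. an initializer hits a dynamic finder on a model whose table does not exist yet, so db:migrate aborts while booting the environment, before it ever gets to create "roles". A hedged sketch of a guard for that initializer follows; the real contents of constants.rb are not shown in the question, so the constant name and the lookup below are hypothetical:

        # config/initializers/constants.rb -- hypothetical contents; only the
        # guard pattern matters. Skip the database lookup while the schema is
        # still being built (e.g. during rake db:migrate on a fresh database).
        if ActiveRecord::Base.connection.table_exists?('roles')
          DEFAULT_ROLE = Role.find_by_name('member')  # assumed lookup
        else
          DEFAULT_ROLE = nil
        end

    With a guard like this the environment task can boot against an empty database, the migrations run, and the constant is populated on the next normal boot.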

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject. Covering it in a single post is an almost impossible task. However, this subject is a very popular topic among beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about the beginnings of SQL Server architecture. As stated earlier, this is a very deep subject, and in this first article of the series he has covered the basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:
    Relational Engine
    Storage Engine
    SQL OS
    Now we will discuss and understand each one of them.
    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine:
    Query Processing
    Memory Management
    Thread and Task Management
    Buffer Management
    Distributed Query Processing
    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files that are created at the disk level – the data file and the log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand the data file and log file in more detail:
    Data File: The data file stores data in the form of data pages (8 KB each), and these data pages are logically organized into extents.
    Extents: Extents are logical units in the database. They are a combination of 8 data pages, i.e. 64 KB forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages like index, system, object data etc. Uniform extents, on the other hand, are dedicated to only one type.
    Pages: These are some of the page types that can be stored in SQL Server:
    Data Page: It holds the data entered by the user, but not data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image or xml.
    Index: It stores the index entries.
    Text/Image: It stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
    GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents.
    PFS (Page Free Space): Information related to page allocation and unused space available on pages.
    IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index, per allocation unit.
    BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
    DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.
    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to:
    Roll back transactions if requested
    Recover the database in case of failure
    Write-ahead logging is used to create log entries. Transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.
    3) SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection (using the blocking and locking structures). Other services include exception handling and hosting for external components such as the Common Language Runtime (CLR). I guess this brief article gives you an idea of the various terms related to SQL Server architecture. In future articles we will explore them further. Guest Author: The author of this article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
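    To make the page and extent terminology above concrete, here is a small query you can run against any user database to see roughly how many 8 KB pages (and therefore extents) each table occupies. It is only an illustrative sketch: the catalog views (sys.allocation_units, sys.partitions, sys.objects) are standard, but the join on container_id assumes in-row data, so treat the figures as approximate.

        -- Approximate pages/extents per table (illustrative sketch)
        SELECT  o.name             AS table_name,
                au.type_desc       AS allocation_unit,
                au.total_pages,                        -- 8 KB pages allocated
                au.total_pages / 8 AS approx_extents   -- 8 pages = 1 extent (64 KB)
        FROM    sys.allocation_units AS au
        JOIN    sys.partitions       AS p ON au.container_id = p.hobt_id  -- in-row data
        JOIN    sys.objects          AS o ON p.object_id = o.object_id
        WHERE   o.is_ms_shipped = 0
        ORDER BY au.total_pages DESC;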

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but it is not: it is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of its stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the time of this blog, anyway, we don't know yet if that is true or how they got the information, but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information... he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...) is accused of accessing the material, copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization... still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily jobs? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be in order to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.
    Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see Stuxnet for why that is important...). In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive. And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic. But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information. And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example. Another benefit of Oracle's desktop virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information. Oracle's desktop virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use. But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip
    Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, together with a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. It then calculates a Maintainability Index from these values, a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well.
    Running Metrics.exe in a build definition
    Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don't want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build:
    Integrating Code Metrics into the build
    Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps (a simplified sketch of the threshold check appears at the end of this post):
    1. Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
    2. Runs through the metric result hierarchy, logs the metrics for each level and verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity.
    3. If the threshold values are exceeded, the activity either fails or partially fails the current build.
    For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse to make them visible in the reports.
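    For readers who just want the gist of that check without downloading the sample, here is a heavily simplified sketch in C#. It is not the actual custom activity, and the element/attribute names ("Metric", "Name", "Value") are assumptions about the metrics.exe result file, so verify them against the schema that ships with the power tool:

        // Hypothetical sketch of the threshold check described above.
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Xml.Linq;

        static class MetricsCheck
        {
            // True if any node in the hierarchy violates the thresholds; the build
            // activity would then fail or partially fail the build.
            public static bool ExceedsThresholds(string resultFile,
                                                 int minMaintainabilityIndex,
                                                 int maxCyclomaticComplexity)
            {
                var metrics = XDocument.Load(resultFile).Descendants("Metric");

                bool lowMaintainability = Values(metrics, "MaintainabilityIndex")
                                              .Any(v => v < minMaintainabilityIndex);
                bool highComplexity     = Values(metrics, "CyclomaticComplexity")
                                              .Any(v => v > maxCyclomaticComplexity);
                return lowMaintainability || highComplexity;
            }

            // All values reported for one named metric, at every level of the report.
            static IEnumerable<int> Values(IEnumerable<XElement> metrics, string name)
            {
                return metrics.Where(m => (string)m.Attribute("Name") == name)
                              .Select(m => int.Parse(
                                  ((string)m.Attribute("Value")).Replace(",", ""), // strip thousands separators, if any
                                  CultureInfo.InvariantCulture));
            }
        }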

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3Download the presentation here See also: Blog about Alert Monitoring and Problem Notification Blog about Using Operational Profiles to Install Packages and other content Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data Drill down into a process details Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". 
Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
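    Following up on the "Under the hood" section above, if you want to poke at that raw data yourself, here is a quick shell sketch. The paths are the ones mentioned in the post; the column handling and the date conversion are illustrative and may need adjusting (GNU date on Linux, with perl as an easy fallback on Solaris):

        # List the samples inside os.zip and print when each one was taken
        # (the file names are epoch time stamps).
        cd /var/opt/sun/xvm/analytics/historical
        unzip -l os.zip | awk '{print $4}' | grep '^[0-9][0-9]*$' | while read ts; do
            date -d "@$ts" 2>/dev/null || \
            perl -e 'print scalar localtime(shift), "\n"' "$ts"   # Solaris fallback
        done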

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load with the function aiImportFileExWithProperties a big model in my software (around 200.000 vertices), it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and Assimp view are using the dll version of the library at 64 bit, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software // default pp steps unsigned int ppsteps = aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing aiProcess_ValidateDataStructure | // perform a full validation of the loader's output aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices aiProcess_RemoveRedundantMaterials | // remove redundant materials aiProcess_FindDegenerates | // remove degenerated polygons from the import aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...) aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex aiProcess_OptimizeMeshes | // join small meshes, if possible; aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader 0; cout << "Loading " << pFile << "... "; aiPropertyStore* props = aiCreatePropertyStore(); aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1); aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,80.f); aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT); aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1); //aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1); // Call ASSIMPs C-API to load the file scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(), ppsteps | /* default pp steps */ aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes aiProcess_Triangulate | // triangulate polygons with more than 3 edges //aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space aiProcess_SortByPType | // make 'clean' meshes which consist of a single typ of primitives 0, NULL, props); aiReleasePropertyStore(props); if(!scene){ cout << aiGetErrorString() << endl; return 0; } this is the relevant piece of code in assimp view code // default pp steps unsigned int ppsteps = aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing aiProcess_ValidateDataStructure | // perform a full validation of the loader's output aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices aiProcess_RemoveRedundantMaterials | // remove redundant materials aiProcess_FindDegenerates | // remove degenerated polygons from the import aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...) 
aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex aiProcess_OptimizeMeshes | // join small meshes, if possible; aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader 0; aiPropertyStore* props = aiCreatePropertyStore(); aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1); aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,g_smoothAngle); aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE,nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0 ); aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1); //aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1); // Call ASSIMPs C-API to load the file g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName, ppsteps | /* configurable pp steps */ aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes aiProcess_Triangulate | // triangulate polygons with more than 3 edges aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space aiProcess_SortByPType | // make 'clean' meshes which consist of a single typ of primitives 0, NULL, props); aiReleasePropertyStore(props); As you can see the code is nearly identical because I copied from assimp view. What could be the reason for such a difference in performance? The two software are using the same dll Assimp64.dll (compiled in my computer with vc++ 2010 express) and the same function aiImportFileExWithProperties to load the model, so I assume that the actual code employed is the same. How is it possible that the function aiImportFileExWithProperties is 100 times slower when called by my sotware than when called by assimp view? What am I missing? I am not good with dll, dynamic and static libraries so I might be missing something obvious. ------------------------------ UPDATE I found out the reason why the code is going slower. Basically I was running my software with "Start debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010 I get same performance of assimp view. However now I have a new question. Why does the dll perform slower in VC++ debugging? I compiled it in release mode without debugging information. Is there any way to have the dll go fast in debugmode i.e. not debugging the dll? Because I am interested in debugging only my own code, not the dll that I assume is already working fine. I do not want to wait 2 minutes every time I want to load my software to debug. Does this request make sense?
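    On the question in the update: one likely culprit (an assumption on my part, not something stated in the post) is the Windows debug heap, which is enabled for any process started under the Visual Studio debugger even when the binaries themselves are Release builds; allocation-heavy work such as importing a 200,000-vertex model gets dramatically slower. It can be switched off for the debuggee without giving up debugging of your own code:

        Project Properties -> Configuration Properties -> Debugging -> Environment:
            _NO_DEBUG_HEAP=1

    With the debug heap disabled, timings under "Start Debugging" should be much closer to the stand-alone run; the AI_CONFIG_GLOB_MEASURE_TIME property already set in the code above is a convenient way to verify that.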

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release.  As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web.  That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here: New Self-Healing Replication ClustersThe 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters.  Prior to 5.6.5 this has been somewhat of a pain point for MySQL users with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities.  With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools.  You can learn all about the details of the great, problem solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.  New Replication Administration and Failover UtilitiesAs mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone Article noted above. Better Query Optimization and ThroughputThe MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements: Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization.  Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit or work. Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default. Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values. Optimizations for queries using filesort and ORDER BY.  Optimization criteria/decision on execution method is done now at optimization vs parsing stage. Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption. You can learn the details about these new features as well all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here. (look under "Development Releases")  Please let us know what you think!  The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here .Also New in MySQL LabsAs has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.  
Today is no exception as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve:InnoDB Online OperationsMySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME.  These operations are non-blocking and will continue to evolve in future DMRs.  You can learn the grainy details by following John Russell's blog.InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature releaseSimilar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.Improved Transactional Performance, ScaleThe InnoDB Engineering team has once again under promised and over delivered in the area of improved performance and scale.  These improvements are also included in the aggregated Spring 2012 labs release:InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise with internal tests showing:    2x throughput improvement for read only activity 6x throughput improvement for SELECT range Read/Write benchmarks are in progress More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever.  It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation.You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
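    As a footnote to the Optimizer items listed above, two of them are easy to try from any client once the DMR is installed; a quick sketch (the table and query are made up for illustration):

        -- DATETIME columns can now default to CURRENT_TIMESTAMP (new in 5.6.5)
        CREATE TABLE orders (
            id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            customer   VARCHAR(64),
            created_at DATETIME DEFAULT CURRENT_TIMESTAMP
        );

        -- EXPLAIN output in hierarchical JSON format
        EXPLAIN FORMAT=JSON
        SELECT customer, COUNT(*)
        FROM orders
        WHERE created_at >= '2012-01-01'
        GROUP BY customer;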

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert. Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design of workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five Top Tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated consider using Desktop Workflow to streamline the interaction. Enhance your Knowledgebase with Guides. Agents can access them proactively and can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.   Want to learn more? Having attending the 3-day Oracle RightNow Customer Service Administration class your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.   Useful resources: Review the Best Practice Guide Review the tune-up guide   About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management.  She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days) RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days) RightNow Analytics (2 days) Rightnow Chat Cloud Service Administration (2 days)

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible so that you can change objects properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozen of packages (even in in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files and I also force everyone that works with me to make sure that an object that is used in several packages has the same name in all package where it is used, in order to simplify configurations usage. Connection Managers are a good example of one of those objects. For example, all the packages that needs to access to the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of “global” objects so that we can have a configuration file for them, so that it can be used by all packages. If a package as some specific configuration needs, we create a specific – or “local” – XML configuration file or we set the value that needs to be configured at runtime using DTLoggedExec’s Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how we can improve this even more? I’d like to have a package that, when it’s run, automatically goes “somewhere” and search for global or local configuration, loads it and applies it to itself. That’s the basic idea of Auto-Configuring Packages. The “somewhere” is a SQL Server table, defined in this way In this table you’ll put the values that you want to be used at runtime by your package: The ConfigurationFilter column specify to which package that configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name. In the above sample. only the package named “simple-package” will use the line number two. There is an exception here: the $$Global value indicate a configuration row that has to be applied to any package. With this simple behavior it’s possible to replicate the “global” and the “local” configuration approach I’ve described before. The ConfigurationValue contains the value you want to be applied at runtime and the PackagePath contains the object to which that value will be applied. The ConfiguredValueType column defined the data type of the value and the Checksum column is contains a calculated value that is simply the hash value of ConfigurationFilter plus PackagePath so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed the table is very similar to the table originally used by SSIS in order to put DTS Configuration into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how it works? It’s very easy: you just have to call DTLoggedExec with the /AC option: DTLoggedExec.exe /FILE:”mypackage.dtsx” /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration" the AC option expects a string with the following format: <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager in the loaded package. 
The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, to load the "global" and "local" configurations. Now, you may start to wonder why this approach cannot be replaced by simply always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this: DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig" The problem is that this approach doesn't work if you have, in one of the two configuration files, a value that has to be applied to an object that doesn't exist in the loaded package. This situation will raise an error that will halt package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this will make deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
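The exact DDL of the auto-configuration table is not reproduced above, so here is a rough, hedged sketch of what it could look like, using the ssiscfg.configuration name from the /AC example. The column types and the CHECKSUM expression are assumptions inferred from the column descriptions, not the actual DTLoggedExec definition:

    -- Run in the ssis_auto_configuration database referenced by the /AC switch.
    CREATE SCHEMA ssiscfg;
    GO

    CREATE TABLE ssiscfg.configuration
    (
        ConfigurationFilter NVARCHAR(255)  NOT NULL,  -- package name, or $$Global for every package
        ConfigurationValue  NVARCHAR(4000) NULL,      -- the value to apply at run-time
        PackagePath         NVARCHAR(255)  NOT NULL,  -- e.g. \Package.Connections[DWH].Properties[ConnectionString]
        ConfiguredValueType NVARCHAR(20)   NOT NULL,  -- String, Int32, Boolean, ...
        -- the real tool hashes ConfigurationFilter + PackagePath; CHECKSUM() is just a stand-in here
        Checksum            AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED,
        CONSTRAINT PK_ssiscfg_configuration PRIMARY KEY (ConfigurationFilter, PackagePath)
    );
    GO

    -- One "global" row applied to every package, and one "local" row for a single package:
    INSERT INTO ssiscfg.configuration
        (ConfigurationFilter, ConfigurationValue, PackagePath, ConfiguredValueType)
    VALUES
        ('$$Global', 'Data Source=DWHSRV;Initial Catalog=DWH;Integrated Security=SSPI;',
         '\Package.Connections[DWH].Properties[ConnectionString]', 'String'),
        ('simple-package', '100',
         '\Package.Variables[User::BatchSize].Properties[Value]', 'Int32');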

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use.
Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include:
Oracle CRM On Demand Disconnected Mobile Sales (DMS) – A disconnected native iPad solution, DMS has further streamlined the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing including message response tracking, and improvements in administration and configuration such as more field management options for read-only fields, role management and enhanced logging.
Oracle CRM On Demand Connected Mobile Sales: This add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign-On to enhance security for mobile users.
Oracle CRM On Demand Desktop: This application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup.
Mobile Browser Support: The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 Tablets, and Google Android.
Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to an enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on an opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagement, thus increasing revenue. Oracle CRM On Demand is leveraging the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads entered on an SM lead form flow automatically into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process.
Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of the information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics reports folders, allowing for read-only access to reports, and additional field and object coverage.
Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, and Custom Object management has been improved. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to.
Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions to save you time and money. New features include:
Business Process Administration: A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied.
Expanded Integration: Oracle continues to improve Web Services with each release by adding more object coverage, enabling customers and partners to easily integrate with CRM On Demand.
Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact, they are not on a schedule: when a feature is complete the software is deployed to the relevant customer. The team only last year moved from Visual Source Safe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Coming from a pharma and medical devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either Cruise Control or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very closely with a pure Bazaar model, from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control: some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode) and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think a few things through, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).
UppercuT v1
Builds On Linux
Perhaps the most significant change to UC going to v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute on Linux that now come as part of UC.
Multi-Targeting
Perhaps one of the hardest things to do in an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds. When you build, you get a build for each framework you listed (if you meet each framework's requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.
Semantic Versioning
By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org
SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets, Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts: one for the assembly version, one for the file version, and one for the product version.
All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch, i.e. 1.1.0
Assembly Version - The assembly version follows SemVer most closely; the last digit is always 0. Major.Minor.Patch.0, i.e. 1.1.0.0
File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build, i.e. 1.1.0.2650
Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN) 1.1.0.235, (Git/HG) 1.1.0.a45ace4346adef0
SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).
Gems Support
Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).
Nitriq Support
Nitriq is a code analysis tool like NDepend, built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config, and then add the Nitriq project files at the root of your source.
Please refer to the Nitriq documentation on how these are created.
UppercuT v1.1
Obfuscation
One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!
Closing
Definitely get out and look at the new release. It contains lots of chocolatey goodness. And remember, the upgrade path is almost as simple as drag and drop!

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title, which probably doesn't make much sense. Hopefully I can explain myself better here, as it's something that's kinda bugged me for ages and is now becoming a pressing concern as I write a bit of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application cannot (easily) modify this configuration file for fear of clobbering the user's own changes to the default configuration. So my question is: if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible, given the following constraints:
- I actually don't have a canonical default config in the application per se; it's more of a 'cascading filesystem'-like affair: the config template is stored in default/config.json and, when the user wishes to edit the configuration, it's copied to user/config.json.
- If a user config is found it is used; there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config the default config is used.
- When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory).
- There is no UI for the configuration options as it's not appropriate to the userbase (think of my software as a library or something: the users are developers, and the config is done in the user/config.json file).
- Due to my software being library-like, there's no simple way to run some tasks automatically when the software is updated (so any ideas of looking at the current config, comparing it to the template config and adding missing keys aren't appropriate).
The only solution I can think of right now is to say "there's a new config setting X" in the release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup. Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited, ~/.atom/config.cson is generated and the setting goes in there. From now on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber the user's changes. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the new setting to the UI to aid 'discoverability' of it. I don't have that luxury. Clarification of my constraints and what I'm actually looking for: the software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed - I just do a config('some.key') kind of call and it knows to look to see if the user has a config clone and, if so, use it, otherwise use the default config which is part of my package.
Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live with the constraints of the system I'm using if possible. And it's not just about discoverability either, one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release notes idea from above isn't doable as, if the user does not follow the advice, suddenly I have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files and allowing a user to make user-specific version of these in order to override the defaults? A per-key cascade (rather than per-file cascade) where the user only specifies their overrides? In this case, what happens if a configuration value is an array - do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append to)? It seems like configuration is kinda hard, so how is it solved in the wild?

    Read the article

  • java.util.zip.ZipException: Error opening file When Deploying an Application to Weblogic Server

    - by lmestre
    Over the last few weeks we had a hard time trying to solve a deployment issue.
* WebLogic Server 10.3.6
* Target: WLS Cluster
<21-10-2013 05:29:40 PM CLST> <Error> <Console> <BEA-240003> <Console encountered the following error weblogic.management.DeploymentException:
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:69)
        at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
        at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
        at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
        at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
        at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
Caused by: java.util.zip.ZipException: Error opening file - C:\Oracle\Middleware\user_projects\domains\MyDomain\servers\MyServer\stage\myapp\myapp.war Message - error in opening zip file
        at weblogic.servlet.utils.WarUtils.existsInWar(WarUtils.java:87)
        at weblogic.servlet.utils.WarUtils.isWebServices(WarUtils.java:76)
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:61)
So the first idea you have with that error is that the war file is corrupted or has incorrect privileges. We tried:
1. Unzipping the war file; the file was perfect.
2. Checking the size; same size as in other environments.
3. Checking the ownership of the file; same as in other environments.
4. Checking the permissions of the file; same as for other applications.
Then we accepted that the file was fine, so we tried enabling some deployment debugs, but no clues. We also tried:
1. Deleting all contents of the <MyDomain>/servers/<MyServer>/tmp and <MyDomain>/servers/<MyServer>/cache folders; the issue persisted.
2. Renaming the application; the deployment was then successful.
3. Targeting the Admin Server; deployment was also working.
4. Using 'Copy this application onto every target for me'; this didn't help either.
Finally, my friend 'Test Case' solved the issue again. I saw this name in the config.xml:
<jdbc-system-resource>
    <name>myapp</name>
    <target></target>
    <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
So, it turned out that the customer had created a DataSource with the same name as the application ('myapp' in the above example). By deleting the DataSource and creating an identical DataSource with a different name, the issue was solved. At this point, do you know why 'java.util.zip.ZipException: Error opening file' was occurring? Because all names in WebLogic Server need to be unique.
References: http://docs.oracle.com/cd/E23943_01/web.1111/e13709/setup.htm "Assigning Names to WebLogic Server Resources: Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name." Enjoy!

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting
Making the Cloud Journey Matter: There's much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers' cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.
10-Minute Application Provisioning Times at 7-Eleven: As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world's largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle Open World: "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases." OCS understands the complex nature of innovative solutions and has the processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.
Business Agility: Today's business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.
Striking the Balance between Development and Operations: As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security, and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment, the ability to rapidly add virtual machines and storage in response to computing demands, makes a difference for hardware utilization and efficiency.
Contending with Continuous Change: What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple departments and projects over time, developing the management, governance, and monitoring capabilities within IT, is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has the expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends.

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is, however, what I think I know. A request typically times out after 120 seconds (this is configurable), and eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread, because it doesn't know anything about the unit of work of a thread. If you had to create a programming model that could easily and reliably run a long-running task, one that might have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue: I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to it through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it? What I typically do right now is this:
SELECT TOP 10 * FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST) WHERE WorkCompleted IS NULL
It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might all be fine, but as I understand it there's no guarantee that that won't happen... I know there have been similar questions on SO before, but none really ends with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill-equipped to handle long-running work. Please share your thoughts.
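A sketch of one way to narrow the double-processing window described above, assuming the WorkItem table implied by the query; the Id, Payload, WorkStarted and ClaimedBy columns are hypothetical additions. The idea is to claim the rows and read them in a single atomic statement, so a row is never handed out without also being marked as in progress:

    DECLARE @workerId SYSNAME = HOST_NAME();  -- hypothetical identifier for this worker/app instance

    -- Claim up to 10 unclaimed, uncompleted work items and return them in one atomic statement.
    -- READPAST lets concurrent pollers skip rows already locked by another worker.
    UPDATE TOP (10) w
    SET    WorkStarted = SYSUTCDATETIME(),
           ClaimedBy   = @workerId
    OUTPUT inserted.Id, inserted.Payload
    FROM   WorkItem AS w WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE  w.WorkCompleted IS NULL
      AND  w.WorkStarted  IS NULL;

A worker that dies mid-way then leaves rows with WorkStarted set but WorkCompleted still NULL, which a periodic cleanup job can re-queue after a timeout; the work itself still needs to be idempotent if exactly-once processing really matters.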

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 minutes) and feed it into a data warehouse. How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.
Best Solution? From what I can piece together, the most practical solution is as follows:
1. Create a TRIGGER to write all DML activity to a rotating CSV log file.
2. Perform whatever transformations are required.
3. Use the native DW data pump tool to efficiently pump the transformed CSV into the DW.
Why this approach? TRIGGERS allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.
Alternatives considered:
- Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
- Querying the data directly via an ETL tool such as Talend and pumping it into the DW... the problem is the OLTP schema would need to be tweaked to support this, and that has many negative side effects.
- Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
- Using the WAL.
Has anyone done this before? Want to share your thoughts?
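A minimal sketch of the TRIGGER option, with illustrative table and column names; this variant logs changes to a staging table that a 15-minute job dumps with COPY and truncates, rather than having the trigger write the CSV file directly:

    -- Change log populated by row-level triggers on the tables you care about.
    CREATE TABLE etl_change_log (
        logged_at   timestamp NOT NULL DEFAULT now(),
        table_name  text      NOT NULL,
        operation   text      NOT NULL,
        row_data    text      NOT NULL
    );

    CREATE OR REPLACE FUNCTION etl_log_change() RETURNS trigger AS $$
    BEGIN
        -- OLD::text / NEW::text use the row type's text representation;
        -- on older versions you may need to list the columns explicitly.
        IF TG_OP = 'DELETE' THEN
            INSERT INTO etl_change_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
            RETURN OLD;
        ELSE
            INSERT INTO etl_change_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    -- Attach only to the tables of interest (orders is illustrative):
    CREATE TRIGGER orders_etl_log
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE etl_log_change();

    -- Every ~15 minutes, dump and clear the staging table (server-side COPY needs superuser; \copy works client-side):
    -- COPY etl_change_log TO '/var/lib/pgsql/etl/change_log.csv' WITH CSV;
    -- TRUNCATE etl_change_log;

The trade-off versus writing the CSV directly from the trigger is an extra write per row, but it keeps the trigger transactional and makes the periodic export easy to retry.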

    Read the article

  • Delete Nodes + attributes that match Xpath except specific attributes

    - by Ryan Ternier
    I'm trying to find the best (most efficient) way of doing this. I have a medium-sized XML document. Depending on specific settings, certain portions of it need to be filtered out for security reasons. I'll be doing this in XSLT as it's configurable and no code should need changing. I've looked around, but I'm not having much luck with it. For example, I have the following XPath: //*[@root='2.16.840.1.113883.3.51.1.1.6.1'] which gives me all nodes with a root attribute equal to a specific OID. In these nodes I want to have all attributes except for a few (e.g. foo and bar) erased, and then have another attribute added (e.g. reason). I also need to have multiple XPath expressions that can be run to zero in on a specific node and clear its contents out in a similar fashion, with respect to nodes with specific attributes. I'm playing around with information from: XPath expression to select all XML child nodes except a specific list? and XSLT Remove Elements and/or Attributes by Name per XSL Parameters. I will update shortly when I can get access to what I've done so far. Example:
XML Before Transformation
<root>
  <childNode>
    <innerChild root="2.16.840.1.113883.3.51.1.1.6.1" a="b" b="c" type="innerChildness"/>
    <innerChildSibling/>
  </childNode>
  <animals>
    <cat>
      <name>bob</name>
    </cat>
  </animals>
  <tree/>
  <water root="2.16.840.1.113883.3.51.1.1.6.1" z="zed" l="ell" type="liquidLIke"/>
</root>
After
<root>
  <childNode>
    <innerChild root="2.16.840.1.113883.3.51.1.1.6.1" flavor="MSK"/> <!-- filtered -->
    <innerChildSibling/>
  </childNode>
  <animals>
    <cat flavor="MSK" /> <!-- cat was filtered -->
  </animals>
  <tree/>
  <water root="2.16.840.1.113883.3.51.1.1.6.1" flavor="MSK"/> <!-- filtered -->
</root>

    Read the article

  • iGoogle Stack Overflow Gadget [closed]

    - by Charango
    Since SO's question database is becoming an excellent first point of call for finding answers to coding problems, some people (like me) might like to be able to fire up searches from their iGoogle home page, among the other searches you might launch from there. I've created a very simple gadget to do this, and put the source below. My hope is that this might provide a foundation on which community members with better ideas for performing this function, or ideas for enhancing it, can update it. Perhaps we could make it configurable to search via a site scope search from Google and / or a question search, for instance. I've hosted and registered this first version but if anyone makes changes and can host a new version / new pics elsewhere, please feel free. <?xml version="1.0" encoding="UTF-8"?> <Module> <ModulePrefs title="Stack Overflow Search" author="Community Wiki" author_email="[email protected]" author_affiliation="Stack Overflow" author_location="The Internet" author_aboutme="All sorts" author_link="http://stackoverflow.com" author_quote="Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law." description="Stack Overflow is rapidly becoming one of the best resources for finding answers to your programming questions. This gadget adds a question search box to your iGoogle homepage" screenshot="http://arkios-solutions.com/misc/sogadget/SOGadget.png" thumbnail="http://arkios-solutions.com/misc/sogadget/SOGadget_Thumbnail.png" singleton="true" title_url="http://stackoverflow.com" /> <Content type="html"><![CDATA[ <img src="http://i.stackoverflow.com/Content/Img/stackoverflow-logo-250.png" alt="Stackoverflow Logo"/> <div style="font-family:arial;font-size:0.8em;"> This gadget allows you to search the Stack Overflow question database. </div> <form name='SOQueryForm' action="http://stackoverflow.com/search" method="get" target="new"> <p>Your question: <input type='text' size='34' name='q' /></p> <p><input type='submit' value='Go' /></p> </form> ]]> </Content> </Module> The source / installable gadget xml are also hosted here.

    Read the article

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    OK, here is a great question. I have a set of generic custom fields that are highly configurable from an end-user perspective, and the configuration is getting overbearing, as there are nearly 100-plus items each custom field allows you to configure in the areas of server/client validation, server/client events/actions, server/client parent/child bindings, display properties for form/control, etc. Right now I'm storing most of these values as "Text" in my field XML for my property schema. I'm very familiar with the multi-column value, but this is not a complex custom type in the sense that it's an array. I also considered creating serializable objects and stuffing them into the text field, then pulling them out and de-serializing them when editing through the field editor or acting on the rules through the custom SPField. So I'm trying to take the following, for example:
<PropertySchema>
  <Fields>
    <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName" MaxLength="500" DisplaySize="200" Type="Text">
      <default></default>
    </Field>
    <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK" MaxLength="500" DisplaySize="200" Type="Text">
      <default></default>
    </Field>
    <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName" MaxLength="500" DisplaySize="200" Type="Text">
      <default></default>
    </Field>
    <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName" MaxLength="500" DisplaySize="200" Type="Text">
      <default></default>
    </Field>
    <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl" MaxLength="500" DisplaySize="200" Type="Text">
      <default></default>
    </Field>
  </Fields>
</PropertySchema>
And turn it into this...
<PropertySchema>
  <Fields>
    <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules" Type="ServerValidationRulesType">
      <default></default>
    </Field>
  </Fields>
</PropertySchema>
Ideas?????

    Read the article

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting. To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this: Formatter (produces modified source code that conforms to styling rules) Read Source Code → Apply Styling Rules → Write Styled Source Code Style Checker (does not produce modified source code) Read Source Code → Apply Styling Rules → Write Rule Violations Further Clarifications Solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general purpose pretty-printer written in Java that can pretty-print many things. I want to style Java code. I'm also not necessarily interested in a grand-unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service. I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation/masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are: SH, OE, HR, IX, etc. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in one schema and load it up into the same table in a different schema. Now, I have two issues in creating my target schema and emptying it. I would like to do this in a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating because the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of weird stuff. I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting data and then adding the constraints back. Is there any easy way to do all this, without all the fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
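A hedged sketch of the workaround mentioned above (disable the foreign keys, empty the tables, re-enable them), using TR_OE as an example target schema. Cross-schema foreign keys, materialized view containers and similar objects may need extra handling, so treat this as a starting point only:

    BEGIN
      -- Disable every foreign key owned by the target schema.
      FOR c IN (SELECT table_name, constraint_name
                  FROM all_constraints
                 WHERE owner = 'TR_OE' AND constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE TR_OE.' || c.table_name ||
                          ' DISABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;

      -- Empty all of its tables.
      FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'TR_OE') LOOP
        EXECUTE IMMEDIATE 'TRUNCATE TABLE TR_OE.' || t.table_name;
      END LOOP;

      -- Re-enable the constraints.
      FOR c IN (SELECT table_name, constraint_name
                  FROM all_constraints
                 WHERE owner = 'TR_OE' AND constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE TR_OE.' || c.table_name ||
                          ' ENABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;
    END;
    /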

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access op

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes the state machine on, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", pointing the finger at one of the map.getInt or map.getShort calls, i.e. a read operation on the map. The uncontroversial (?) code that sets up the map is this:

/** Set up the map from the given filename and position */
protected void open() throws IOException {
    // Set up buffer, is this all the flexibility we'll need?
    channel = new FileInputStream(file).getChannel();
    MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
    map = map1;
    // assumes the host writing the files is little-endian (x86), ought to be configurable
    map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
    map.position(position);
}

and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running Java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM and are running with the same Java heap settings. I take the view that even if there is a bug in my code, the JVM has enough of a bug of its own that it isn't throwing me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515, which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be a coincidence that that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • Executing a .NET Managed Assembly from SQL Server 2008 - Pro's, Con's & Recommendations

    - by RPM1984
    Hi guys, I'm looking for opinions/recommendations/links for the following scenario I'm currently facing.
The Platform: .NET 4.0 Web Application, SQL Server 2008.
The Task: Overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" decides to post a forum topic. This results in (the end solution) needing to look at various factors (about the post he made), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.
The Options: OK - so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (C# Class Library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:
- Simplicity of math. Do complex math in .NET vs complex math in T-SQL. No brainer. =)
- Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math it delegates off to the managed assembly.
So, having said that - I've never done this before (calling a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated. So - what am I asking? Well, firstly - I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. Secondly, for those out there who have done this, what problems (if any) did you face? I realize it may be difficult to provide a "correct answer", so I'll try to give it to whoever gives me the best answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
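For reference, the plumbing on the SQL Server side looks roughly like this. It is a sketch only (the assembly, class and method names are hypothetical), and note that SQL Server 2008 hosts the 2.0 CLR, so the class library generally has to target .NET 3.5 or earlier rather than .NET 4:

    -- Enable CLR integration (off by default).
    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;
    GO

    -- Register the compiled class library.
    CREATE ASSEMBLY ForumMathLib
    FROM 'C:\Deploy\ForumMathLib.dll'
    WITH PERMISSION_SET = SAFE;
    GO

    -- Expose a public static method as a scalar function: assembly.[namespace.class].method
    CREATE FUNCTION dbo.CalculatePostScore (@views INT, @replies INT)
    RETURNS FLOAT
    AS EXTERNAL NAME ForumMathLib.[ForumMathLib.Scoring].CalculatePostScore;
    GO

    -- Stored procedures can then call it like any other scalar function.
    SELECT dbo.CalculatePostScore(120, 7);

On the C# side the method just needs to be public and static on a public class; SAFE is the most restrictive permission set and is usually enough when the assembly only does math over values passed in from T-SQL.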

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21  | Next Page >