Search Results

Search found 40893 results on 1636 pages for 'product id'.


  • SQL SERVER – Stored Procedure and Transactions

    - by pinaldave
    I just overheard the following statement – “I do not use Transactions in SQL as I use Stored Procedures.” I just realized that there are so many misconceptions about this subject. Transactions have nothing to do with Stored Procedures. Let me demonstrate that with a simple example.

        USE tempdb
        GO
        -- Create 3 Test Tables
        CREATE TABLE TABLE1 (ID INT);
        CREATE TABLE TABLE2 (ID INT);
        CREATE TABLE TABLE3 (ID INT);
        GO
        -- Create SP
        CREATE PROCEDURE TestSP
        AS
        INSERT INTO TABLE1 (ID) VALUES (1)
        INSERT INTO TABLE2 (ID) VALUES ('a')
        INSERT INTO TABLE3 (ID) VALUES (3)
        GO
        -- Execute SP
        -- SP will error out
        EXEC TestSP
        GO
        -- Check the Values in Tables
        SELECT * FROM TABLE1;
        SELECT * FROM TABLE2;
        SELECT * FROM TABLE3;
        GO

    Now, the main point is: if a Stored Procedure were transactional, it would roll back the complete transaction when it encounters any error. That does not happen in this case, which proves that a Stored Procedure does not, by itself, add transactional behavior to a batch of T-SQL. Let's see the result very quickly. It is very clear that there are entries in TABLE1 which are not matched by entries in the subsequent tables. If the SP were transactional in terms of T-SQL query batches, there would be no entries in any of the tables. If you want to use Transactions with a Stored Procedure, wrap the code with BEGIN TRAN and COMMIT TRAN. The example is as follows.

        CREATE PROCEDURE TestSPTran
        AS
        BEGIN TRAN
        INSERT INTO TABLE1 (ID) VALUES (11)
        INSERT INTO TABLE2 (ID) VALUES ('b')
        INSERT INTO TABLE3 (ID) VALUES (33)
        COMMIT
        GO
        -- Execute SP
        EXEC TestSPTran
        GO
        -- Check the Values in Tables
        SELECT * FROM TABLE1;
        SELECT * FROM TABLE2;
        SELECT * FROM TABLE3;
        GO
        -- Clean up
        DROP TABLE TABLE1
        DROP TABLE TABLE2
        DROP TABLE TABLE3
        GO

    In this case, there will be no new entries in any of the tables. What is your opinion about this blog post? Please leave your comments about it here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
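    A caveat worth adding here (this note and sketch are mine, not from the original post): the plain BEGIN TRAN ... COMMIT pattern above only rolls everything back because a conversion failure happens to abort the whole batch. Many errors, such as constraint violations, terminate only the failing statement, so the remaining statements and the COMMIT would still run. A more defensive pattern combines SET XACT_ABORT ON with TRY/CATCH:

        -- A hedged sketch, not part of the original post
        CREATE PROCEDURE TestSPTranSafe
        AS
        SET XACT_ABORT ON; -- any run-time error dooms the transaction
        BEGIN TRY
            BEGIN TRAN;
            INSERT INTO TABLE1 (ID) VALUES (21);
            INSERT INTO TABLE2 (ID) VALUES ('c'); -- fails, control moves to CATCH
            INSERT INTO TABLE3 (ID) VALUES (23);
            COMMIT TRAN;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRAN;
            THROW; -- SQL Server 2012+; use RAISERROR on older versions
        END CATCH
        GO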


  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the documentation, by default, a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs some changes to its binary logs; look at the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and here is a sample query seen when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: "If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337)." There is no such query with the same TIMESTAMP in the relay logs. "If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave." Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" table statements, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after the calculation is done. 
I also tell the slave not to replicate any statement matching the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is one exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are being posted by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with test tables on the master; they are copied to the relay logs, not the binary logs, on the slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs the server id is the slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 "Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs." As I said above: I've been trying some INSERT/UPDATE queries with test tables on the master, and they are copied to the relay logs, not the binary logs; as you can guess, the IO thread copied them to the relay logs, not the binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from? 
Here are a few of the most recent entries: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;


  • SQL SERVER – CTE can be Updated

    - by Pinal Dave
    Today I received a fantastic email from Matthew Spieth, a SQL Server expert from Ohio. He recently had a great conversation with his colleagues in the office and wanted to make sure that everybody who reads this blog knows about this little feature which is commonly confused. Here is his statement, and we will start our story with Matthew’s own words: “Users often confuse CTE with Temp Table but technically they both are different, CTE are like Views and they can be updated just like views.“ Very true statement from Matthew. I totally agree with what he is saying. Just like him, I have often enough come across situations where developers think a CTE is like a temp table. When you update a temp table, the change remains within the scope of the temp table and does not propagate to the table from which the temp table was built. However, this is not the case with a CTE: when you update a CTE, it updates the underlying table, just as a view does. Here is the working example built by Matthew to illustrate this behavior. Check the value in the base table first.

        USE AdventureWorks2012;
        -- Check the value in the base table
        SELECT Color FROM [Production].[Product] WHERE ProductNumber = 'CA-6738';

    Now let us build a CTE with the same data.

        ;WITH CTEUpd(ProductID, Name, ProductNumber, Color) AS (
            SELECT ProductID, Name, ProductNumber, Color
            FROM [Production].[Product]
            WHERE ProductNumber = 'CA-6738')

    Now let us update the CTE with the following code.

        -- Update CTE
        UPDATE CTEUpd SET Color = 'Rainbow';

    Now let us check the base table on which the CTE was built.

        -- Check - The value in the base table is updated
        SELECT Color FROM [Production].[Product] WHERE ProductNumber = 'CA-6738';

    That’s it! You can update a CTE and it will update the base table. Here is the script which you should execute all together.

        USE AdventureWorks2012;
        -- Check the value in the base table
        SELECT Color FROM [Production].[Product] WHERE ProductNumber = 'CA-6738';
        -- Build CTE
        ;WITH CTEUpd(ProductID, Name, ProductNumber, Color) AS (
            SELECT ProductID, Name, ProductNumber, Color
            FROM [Production].[Product]
            WHERE ProductNumber = 'CA-6738')
        -- Update CTE
        UPDATE CTEUpd SET Color = 'Rainbow';
        -- Check - The value in the base table is updated
        SELECT Color FROM [Production].[Product] WHERE ProductNumber = 'CA-6738';

    If you are aware of such a scenario, do let me know and I will post it on my blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL View, T SQL Tagged: CTE
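    One caveat worth appending (my note, not from Matthew or Pinal): because a CTE update follows the same rules as a view update, the modification must resolve to exactly one base table. A hedged sketch of a statement SQL Server rejects:

        -- A sketch, not from the original post: a CTE over a join cannot be
        -- updated in columns from both base tables in a single statement.
        ;WITH CTEJoin AS (
            SELECT p.Color, s.Name AS SubcategoryName
            FROM [Production].[Product] p
            INNER JOIN [Production].[ProductSubcategory] s
                ON p.ProductSubcategoryID = s.ProductSubcategoryID)
        UPDATE CTEJoin SET Color = 'Rainbow', SubcategoryName = 'Misc';
        -- Fails with Msg 4405: the modification affects multiple base tables.
        -- Updating only Color (a single base table) through the same CTE succeeds.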


  • NoSQL with RavenDB and ASP.NET MVC - Part 2

    - by shiju
    In my previous post, we discussed how to work with the RavenDB document database in an ASP.NET MVC application. We set up RavenDB for our ASP.NET MVC application and did basic CRUD operations against a simple domain entity. In this post, let’s discuss a domain entity with a deeper object graph and how to query RavenDB documents using Indexes. Let's create two domain entities for our demo ASP.NET MVC application:  public class Category {       public string Id { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public List<Expense> Expenses { get; set; }       public Category()     {         Expenses = new List<Expense>();     } }    public class Expense {       public string Id { get; set; }     public Category Category { get; set; }     public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }   }  We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. Let's create an ASP.NET MVC view model for the Expense transaction: public class ExpenseViewModel {     public string Id { get; set; }       public string CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]            public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]            public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } Let's create a contract type for the Expense Repository:  public interface IExpenseRepository {     Expense Load(string id);     IEnumerable<Expense> GetExpenseTransactions(DateTime startDate,DateTime endDate);     void Save(Expense expense,string categoryId);     void Delete(string id);  } Let's create a concrete type for the Expense Repository to handle CRUD operations. 
public class ExpenseRepository : IExpenseRepository {   private IDocumentSession session; public ExpenseRepository() {         session = MvcApplication.CurrentSession; } public Expense Load(string id) {     return session.Load<Expense>(id); } public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; } public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } public void Delete(string id) {     var expense = Load(id);     session.Delete<Expense>(expense);     session.SaveChanges(); }   }  Insert/Update Expense Transaction: The Save method is used both to insert a new expense record and to modify an existing expense transaction. For a new expense transaction, we store the expense object with its associated category in the document session object; for editing an existing record, we load the existing expense object and assign values to it.  public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } Querying Expense transactions:   public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; }  The GetExpenseTransactions method returns expense transactions using a LINQ query expression with a Date comparison filter. The Lucene query uses an index named "ExpenseTransactions" to get the result set. In RavenDB, Indexes are LINQ queries stored on the RavenDB server; they are executed in the background and perform the query against the JSON documents. Indexes work with a Lucene query expression or a set operation. Indexes are composed using a Map and a Reduce function. Check out Ayende's blog post on Map/Reduce. We can create an index using the RavenDB web admin tool as well as programmatically using its Client API. The screenshot below shows creating an index using the web admin tool. 
We can also create Indexes using the Raven Client API, as shown in the following code: documentStore.DatabaseCommands.PutIndex("ExpenseTransactions",     new IndexDefinition<Expense,Expense>() {     Map = Expenses => from exp in Expenses                     select new { exp.Date } });  In the Map function, we used a LINQ expression as shown in the following: from exp in docs.Expenses select new { exp.Date }; We have not used a Reduce function for the above index. A Reduce function is useful when performing aggregations based on the results of the Map function (see the sketch at the end of this post). Indexes can be used with the set operations of RavenDB. SET Operations: Unlike other document databases, RavenDB supports set-based operations that let you perform updates, deletes and inserts against the bulk_docs endpoint of RavenDB. To do this, you just pass a query to an Index, as shown in the following command: DELETE http://localhost:8080/bulk_docs/ExpenseTransactions?query=Date:20100531 The above command uses the Index named "ExpenseTransactions" to query the documents with a Date filter, and it will delete all the documents that match the query criteria. It is the equivalent of the following SQL: DELETE FROM Expenses WHERE Date='2010-05-31' Controller & Actions: We have created the Expense Repository class to perform CRUD operations for Expense transactions. Let's create a controller class for handling expense transactions.   public class ExpenseController : Controller { private ICategoryRepository categoryRepository; private IExpenseRepository expenseRepository; public ExpenseController(ICategoryRepository categoryRepository, IExpenseRepository expenseRepository) {     this.categoryRepository = categoryRepository;     this.expenseRepository = expenseRepository; } //Get Expense transactions based on dates public ActionResult Index(DateTime? StartDate, DateTime? 
EndDate) {     //If no date is passed, take the current month's first and last date     DateTime dtNow;     dtNow = DateTime.Today;     if (!StartDate.HasValue)     {         StartDate = new DateTime(dtNow.Year, dtNow.Month, 1);         EndDate = StartDate.Value.AddMonths(1).AddDays(-1);     }     //take the last date of StartDate's month, if EndDate is not passed     if (StartDate.HasValue && !EndDate.HasValue)     {         EndDate = (new DateTime(StartDate.Value.Year, StartDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }       var expenses = expenseRepository.GetExpenseTransactions(StartDate.Value, EndDate.Value);     if (Request.IsAjaxRequest())     {           return PartialView("ExpenseList", expenses);     }     ViewData.Add("StartDate", StartDate.Value.ToShortDateString());     ViewData.Add("EndDate", EndDate.Value.ToShortDateString());             return View(expenses);            }   // GET: /Expense/Edit public ActionResult Edit(string id) {       var expenseModel = new ExpenseViewModel();     var expense = expenseRepository.Load(id);     ModelCopier.CopyModel(expense, expenseModel);     var categories = categoryRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems(expense.Category.Id.ToString());                    return View("Save", expenseModel);          }   // // GET: /Expense/Create   public ActionResult Create() {     var expenseModel = new ExpenseViewModel();               var categories = categoryRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems("-1");     expenseModel.Date = DateTime.Today;     return View("Save", expenseModel); }   // // POST: /Expense/Save // Insert/Update Expense Transaction [HttpPost] public ActionResult Save(ExpenseViewModel expenseViewModel) {     try     {         if (!ModelState.IsValid)         {               var categories = categoryRepository.GetCategories();                 expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);                               return View("Save", expenseViewModel);         }           var expense = new Expense();         ModelCopier.CopyModel(expenseViewModel, expense);          expenseRepository.Save(expense, expenseViewModel.CategoryId);                       return RedirectToAction("Index");     }     catch     {         return View();     } } //Delete an Expense Transaction public ActionResult Delete(string id) {     expenseRepository.Delete(id);     return RedirectToAction("Index");     }     }     Download the Source - You can download the source code from http://ravenmvc.codeplex.com
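    Following up on the Reduce discussion above, here is a hedged sketch (my illustration, not from the original post) of a Map/Reduce index, assuming the same IndexDefinition<TDocument, TReduceResult> client API used earlier and a hypothetical ExpensesPerDate result type. It counts expense transactions per date:

        // Hypothetical reduce-result type for this sketch
        public class ExpensesPerDate
        {
            public DateTime Date { get; set; }
            public int Count { get; set; }
        }

        // Map emits one record per expense; Reduce groups by date and sums the counts
        documentStore.DatabaseCommands.PutIndex("ExpensesPerDate",
            new IndexDefinition<Expense, ExpensesPerDate>
            {
                Map = expenses => from exp in expenses
                                  select new { Date = exp.Date, Count = 1 },
                Reduce = results => from result in results
                                    group result by result.Date into g
                                    select new { Date = g.Key, Count = g.Sum(x => x.Count) }
            });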


  • Wireless will not connect

    - by azz0r
    Hello, I have installed Ubuntu 10.10 on the same machine as my Windows setup. However, it will not connect to my wireless network. It can see it's there and it attempts to connect, yet it never connects. It keeps bringing up the password prompt every so often. I tried turning my security down to WEP, but I ended up turning it back to WPA2. It is set to AES (I noted a few threads on Google about that). Can you assist? I would love to dive into Ubuntu, but without the internet it's pointless. --- lshw -C network --- *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 02 serial: 00:1d:92:ea:cc:62 capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8168 driverversion=8.020.00-NAPI duplex=half latency=0 link=no multicast=yes port=twisted pair resources: irq:29 ioport:e800(size=256) memory:feaff000-feafffff memory:f8ff0000-f8ffffff(prefetchable) memory:feac0000-feadffff(prefetchable) *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:15:af:72:a4:38 capabilities: ethernet physical wireless configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bgn --- iwconfig --- lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"Wuggawoo" Mode:Managed Frequency:2.437 GHz Access Point: Not-Associated Tx-Power=9 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on --- cat /etc/network/interfaces --- auto lo iface lo inet loopback --- daemon.log --- Jan 19 04:17:09 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:11 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): association took too long. Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 5 -> 6 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): asking for new secrets Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started... Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 6 -> 4 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete. Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting... 
Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 4 -> 5 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): connection 'Wuggawoo' has security, and secrets exist. No new secrets needed. Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'ssid' value 'Wuggawoo' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'scan_ssid' value '1' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'key_mgmt' value 'WPA-PSK' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'psk' value '<omitted>' Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_engine_path: assertion `NM_IS_SETTING_802_1X (setting)' failed Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_module_path: assertion `NM_IS_SETTING_802_1X (setting)' failed Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete. Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: set interface ap_scan to 1 Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:13 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:23 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:24 ubuntu AptDaemon: INFO: Initializing daemon Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:25 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:27 ubuntu NetworkManager: <info> wlan0: link timed out. --- kern.log --- Jan 19 04:18:11 ubuntu kernel: [ 142.420024] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out Jan 19 04:18:13 ubuntu kernel: [ 144.333847] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:13 ubuntu kernel: [ 144.539996] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:13 ubuntu kernel: [ 144.750027] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:14 ubuntu kernel: [ 144.940022] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out Jan 19 04:18:25 ubuntu kernel: [ 155.832995] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:25 ubuntu kernel: [ 156.030046] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:25 ubuntu kernel: [ 156.230039] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:25 ubuntu kernel: [ 156.430039] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out --- syslog --- Jan 19 04:18:46 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. 
Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:18:48 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:18:48 ubuntu kernel: [ 178.833905] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:48 ubuntu kernel: [ 179.030035] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:48 ubuntu kernel: [ 179.230020] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:48 ubuntu kernel: [ 179.433634] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out lspci and lsusb lspci -- 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:02.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (ext gfx port 0) 00:05.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 1) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3a) 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller 00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 01:00.0 VGA compatible controller: nVidia Corporation G80 [GeForce 8800 GTS] (rev a2) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) 03:00.0 FireWire (IEEE 1394): JMicron Technology Corp. IEEE 1394 Host Controller -- lsusb -- Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 003: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Bus 004 Device 002: ID 045e:0730 Microsoft Corp. 
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 003: ID 13d3:3247 IMC Networks 802.11 n/g/b Wireless LAN Adapter Bus 002 Device 002: ID 0718:0628 Imation Corp. Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 046d:08c2 Logitech, Inc. QuickCam PTZ Bus 001 Device 002: ID 0424:2228 Standard Microsystems Corp. 9-in-2 Card Reader Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub With no security on my router I still can't connect, I get: Jan 19 15:58:01 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: WPS-AP-AVAILABLE Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Association request to the driver failed Jan 19 15:58:02 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 15:58:05 ubuntu NetworkManager: <info> wlan0: link timed out. Jan 19 15:58:07 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connec


  • JMX Based Monitoring - Part One

    - by Anthony Shorten
    In all versions of the Oracle Utilities Application Framework there is the ability to use Java Management eXtensions (JMX) to both manage and monitor the various components of the product. This means that sites can use a JSR-160 compliant JMX browser or JMX console to view or manage the components of the product with little or no configuration required. In each version we have progressively added JMX capabilities to give IT groups more detailed information. In Oracle Utilities Application Framework V2.1 and above it was possible to use the JMX MBeans provided by the Web Application Server to monitor the online component of the product as well as manage the configuration. Also, with a few additional java options it is possible to get a good level of detail about the Java Virtual Machine, including memory and thread usage. In Oracle Utilities Application Framework V2.2 and above, we added support for Java 5 statistics (Java enables them by default) and database pool statistics, and also added the ability to manage and monitor the batch component of the architecture. Now, in Oracle Utilities Application Framework V4 and above, we added support for Java 6 MXBeans, online management of the cache using JMX, additional JVM information and performance monitoring using JMX. JMX allows the product to be managed from a common console such as Oracle Enterprise Manager, Tivoli, HP OpenView (and a lot more). Over the next week or so I will be compiling a set of blog entries discussing what is available (in summary format) using JMX and how to get access to the JMX statistics for your version of the product.
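    As a concrete illustration of the "few additional java options" mentioned above (a generic sketch of the standard JVM remote-JMX flags, not product-specific documentation; the port and security settings are examples only):

        # Standard JVM flags to expose JMX to a remote console; values are illustrative
        -Dcom.sun.management.jmxremote
        -Dcom.sun.management.jmxremote.port=9999
        -Dcom.sun.management.jmxremote.authenticate=false   # enable authentication in production
        -Dcom.sun.management.jmxremote.ssl=false            # enable SSL in production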


  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones bring some of the e-commerce features I mentioned right into the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item, and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR Codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer is getting the additional information they want, and Best Buy avoids the price comparisons. Of course if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.


  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 Modular Tape Library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley from Tape Product Management will introduce you to the latest addition to the Oracle Tape Storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product’s features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. Register NOW!


  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that were in the public domain.  As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne’s Studio for Entity Framework.  Below are my thoughts after working with the product for several weeks. My coding preference has always been maintainable code that is reusable across an enterprise's portfolio.  Because of this my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework. Before we get into the pros and cons, here is a summary of the main features listed for SEF. Unified Data Context Virtual Data Access More Powerful Data Binding Pros The first thing that I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context.  Under RAD conditions this is set up automatically when you drop the object on your design surface.  If you are like me and want to abstract your data management into a library it takes a little more work, but it is still acceptable and gains the same benefits. The second feature that I found beneficial is the definition of views with improved sorting and filtering.  Again the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code. LINQ has become my friend over the last couple of years and it was great to see that ComponentOne had ensured that it remained a first-class citizen in their design.  When you look into this product yourself I would suggest taking a dive into LiveLinq, which allows the joining of different data source types. As I went through discovering the features of this framework I appreciated the number of examples that they supplied for different uses.  Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks with RAD, code-only and MVVM approaches. Cons The only area where I would really like to see improvement is the level of detail in their documentation.  Specifically, I would like to have seen some of the supporting code explained, such as what some of the supporting objects did, in the examples instead of having to go to the programmer’s reference. I did find times when existing projects had some trouble determining the scope the RAD controls were allowed, but I expect this is in part end-user related. Summary Overall I found the Studio for Entity Framework capable and well thought out.  If you are already using the Entity Framework, this product will fit into your environment with little effort, in return for greater flexibility and greater robustness in your solutions. Whether the $895 list price for a standard version works for you will depend on your return on investment. Smaller companies with only a small number of projects may not be able to stomach it, but you get a full-featured product that is supported by a well-established company.  The more projects and the more code you have, the greater your return on investment will be. Personally I intend to apply this product to some production systems and will probably have some tips and tricks in the future. del.icio.us Tags: ComponentOne,Studio for Entity Framework,Geeks With Blogs,Influencers,Product Reviews


  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn’t mention in my previous post on ASP.NET MVC CRUD with AJAX was how to return model validation information to the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here’s how:

        [HttpPost]
        [AjaxOnly]
        [Authorize]
        public JsonResult Edit(Product product)
        {
            if (this.ModelState.IsValid == true)
            {
                using (ProductContext ctx = new ProductContext())
                {
                    Boolean success = false;

                    ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                    try
                    {
                        success = (ctx.SaveChanges() == 1);
                    }
                    catch (DbUpdateConcurrencyException)
                    {
                        ctx.Entry(product).Reload();
                    }

                    return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
                }
            }
            else
            {
                Dictionary<String, String> errors = new Dictionary<String, String>();

                foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
                {
                    String key = keyValue.Key;
                    ModelState modelState = keyValue.Value;

                    foreach (ModelError error in modelState.Errors)
                    {
                        errors[key] = error.ErrorMessage;
                    }
                }

                return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
            }
        }

    As for the view, we need to change the onSuccess JavaScript handler on the Single view slightly:

        function onSuccess(ctx)
        {
            if (typeof (ctx.Success) != 'undefined')
            {
                $('input#ProductId').val(ctx.ProductId);
                $('input#RowVersion').val(ctx.RowVersion);

                if (ctx.Success == false)
                {
                    var errors = '';

                    if (typeof (ctx.Errors) != 'undefined')
                    {
                        for (var key in ctx.Errors)
                        {
                            errors += key + ': ' + ctx.Errors[key] + '\n';
                        }

                        window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
                    }
                    else
                    {
                        window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                    }
                }
                else
                {
                    window.alert('Saved successfully');
                }
            }
            else
            {
                if (window.confirm('Not logged in. Login now?') == true)
                {
                    document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                }
            }
        }

    The logic is as follows: If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key, and a RowVersion with the server-generated ROWVERSION; If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors; If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION in the RowVersion property. In a future post I will talk about the possibilities that exist for performing model validation, stay tuned!
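    A side note on the [AjaxOnly] attribute used above: it is not built into ASP.NET MVC (it ships with MVC Futures, or is hand-rolled). A minimal sketch of one common implementation, which may differ from the author's version:

        using System.Web.Mvc;

        // Rejects any request that did not arrive via XMLHttpRequest
        public class AjaxOnlyAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (!filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    // Return 404 so the action is invisible to non-Ajax requests
                    filterContext.Result = new HttpNotFoundResult();
                }
            }
        }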


  • Innovating with Customer Needs Management

    - by Anurag Batra
    We're pleased to announce the addition of Agile Customer Needs Management (CNM) to the portfolio of PLM offerings by Oracle. CNM allows manufacturing companies to capture the voice of the customer and the market, and arm their product designers with the information that they need to better meet customer requirements. It's an Enterprise 2.0 product that focuses on quick information capture, easy organization of information, and association of that information with the product record - some of the key aspects of early-stage innovation. Read on to learn more about this revolutionary new product that redefines how information is used to drive innovation.

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name="__RequestVerificationToken" type="hidden" value="NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1" /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ).
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { // TryGetValues avoids an exception when the header is missing entirely IEnumerable<string> headerTokens; var headerToken = actionContext.Request.Headers.TryGetValues("__RequestVerificationToken", out headerTokens) ? headerTokens.FirstOrDefault() : null; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app.
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
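As a closing illustration, here is a minimal sketch (my own addition, not part of the original walkthrough): because the Ajax post shown at the start of this entry skipped Request Validation entirely, the Web API action itself is a sensible place to neutralize markup in untrusted fields. WebUtility.HtmlEncode lives in System.Net; whether you encode on input, as here, or store raw text and encode at render time is a design choice (encoding at render time is the more common convention), but encoding at the boundary is the simplest way to win back the safety net described above.

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class UsersController : ApiController
{
    public HttpResponseMessage Post(UserViewModel user)
    {
        // Request Validation never ran for this Ajax post, so HTML-encode
        // the untrusted fields before they are stored or echoed back.
        var safeUserName = WebUtility.HtmlEncode(user.UserName);
        var safeEmail = WebUtility.HtmlEncode(user.Email);
        // ... persist safeUserName / safeEmail ...
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}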

    Read the article

  • Configure Perl DBI and DBD in Linux

    - by Balualways
    I am new to Perl and I work on a Linux OEL 5.x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME. These are my observations: ISSUE: I am getting the following issue while using the DBI module to connect to Oracle: install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3 Compilation failed in require at (eval 3) line 3. Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57 I installed the DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50 Could you please take a look at the steps and correct me if I am wrong: Observations: $ echo $LD_LIBRARY_PATH /opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib $ echo $ORACLE_HOME /usr/local/oracle/ORA This is how I tried to install the DBD module: Download the file DBD 1.50 for Oracle Copy to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD Untar and Makefile.PL . Message: Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/ Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi) Remember to actually *READ* the README file! Especially if you have any problems. Installing on a linux, Ver#2.6 Using Oracle in /opt/oracle/product/10.2 DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR) Oracle version 10.2.0.4 (10.2) Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./' WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to. Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk Attempting to discover Oracle OCI build rules sh: make: command not found by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o] WARNING: Oracle build rule discovery failed (32512) Add path to make command into your PATH environment variable. Oracle oci build prolog: [sh: make: command not found] Oracle oci build command: [] WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk. (Will continue by using fallback approach.) Please report this to [email protected]. See README for what to include.
Found header files in /opt/oracle/product/10.2/rdbms/public. client_version=10.2 DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102 Checking for functioning wait.ph System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm Linker: not found Sysliblist: -ldl -lm -lpthread -lnsl -lirc Oracle makefiles would have used these definitions but we override them: CC: cc CFLAGS: $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS) [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)] build: $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS) [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread] LDFLAGS: $(LDFLAGS32) [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/] Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)] Checking if your kit is complete... Looks good LD_RUN_PATH=/usr/local/oracle/ORA/lib Using DBD::Oracle 1.50. Using DBD::Oracle 1.50. Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/ Writing Makefile for DBD::Oracle Writing MYMETA.yml and MYMETA.json *** If you have problems... read all the log printed above, and the README and README.help.txt files. (Of course, you have read README by now anyway, haven't you?)

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following table lists the whitepapers available from My Oracle Support for any Oracle Utilities Application Framework based product: KB Id Document Title Contents 559880.1 ConfigLab Design Guidelines Whitepaper outlining how to design and implement a ConfigLab solution. 560367.1 Technical Best Practices for Oracle Utilities Application Framework Based Products Whitepaper summarizing common technical best practices used by partners, implementation teams and customers. 560382.1 Performance Troubleshooting Guideline Series A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows: Concepts - General concepts and performance troubleshooting processes. Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions. Network Troubleshooting - General troubleshooting of the network with common issues and resolutions. Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions. Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions. Database Troubleshooting - General troubleshooting of the database with common issues and resolutions. Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions. 560401.1 Software Configuration Management Series A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows: Concepts - General concepts and introduction. Environment Management - Principles and techniques for creating and managing environments. Version Management - Integration of version control and version management of configuration items. Release Management - Packaging configuration items into a release. Distribution - Distribution and installation of releases across environments. Change Management - Generic change management processes for product implementations. Status Accounting - Status reporting techniques using product facilities. Defect Management - Generic defect management processes for product implementations. Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation. Implementing Service Packs - Discussion on the service packs and how to use them in an implementation. Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades. 773473.1 Oracle Utilities Application Framework Security Overview Whitepaper summarizing the security facilities in the framework. Updated for OUAF 4.0.1 774783.1 LDAP Integration for Oracle Utilities Application Framework based products Whitepaper summarizing how to integrate an external LDAP based security repository with the framework. 789060.1 Oracle Utilities Application Framework Integration Overview Whitepaper summarizing all the various common integration techniques used with the product (with case studies). 799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products Whitepaper outlining a generic process for integrating an SSO product with the framework. 807068.1 Oracle Utilities Application Framework Architecture Guidelines This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.
    836362.1 Batch Best Practices for Oracle Utilities Application Framework based products This whitepaper outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1 856854.1 Technical Best Practices V1 Addendum Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice. 942074.1 XAI Best Practices This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1 970785.1 Oracle Identity Manager Integration Overview This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information. 1068958.1 Production Environment Configuration Guidelines (New!) Whitepaper outlining common production level settings for the products.

    Read the article

  • Anything Mouse Like Isn't Seen by Ubuntu 13.10

    - by belkinsa
    I upgraded from 13.04 to 13.10, and today Ubuntu stopped seeing anything that is mouse-like. The only input device that works is my Wacom tablet and its pen. I already checked for updates and have the latest updates for the Ubuntu system. Here is the lsusb output: Bus 002 Device 002: ID 04f2:b209 Chicony Electronics Co., Ltd Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 002: ID 056a:00b8 Wacom Co., Ltd Intuos4 4x6 Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Is this a bug, or how do I fix it? I can't even use a Live USB of Ubuntu 13.10 to work around the issue. EDIT 1: It seems Ubuntu isn't seeing clicks from anything except the Wacom tablet. EDIT 2: I think it's a bug, but I need help reporting it.

    Read the article

  • Updated Technical Best Practices whitepaper

    - by ACShorten
    The Technical Best Practices whitepaper has been updated with the latest advice. This edition of the whitepaper covers advice from the internal management team in the product group that manages our environments. Our product teams manage over 1,500 copies of the product, covering every version, every platform and every phase of our development, testing and production product development cycle. The technical team managing that group of environments has compiled some additional advice that has been incorporated into the Technical Best Practices and other whitepapers (including Performance Troubleshooting and the Software Configuration Management Series). New advice includes new installation advice, advanced settings, new security settings and advice for both Oracle WebLogic and IBM WebSphere installations. The Technical Best Practices whitepaper is available from My Oracle Support at Doc Id: 560367.1. To assist readers of past editions of the whitepaper, new or updated advice is marked with an appropriate graphic.

    Read the article

  • What is the maximum number of characters in the utm_content param in GA?

    - by David Parks
    For example, we want to differentiate people who followed our daily product email blast. I could use the product ID for utm_content, but it would be more readable to use the SEO-friendly URL path, such as: http://www.oursite.com/products/really-great-new-product https://www.frugg.com/?utm_source=a&utm_medium=b&utm_term=c&utm_content=Can-I-use-a-really-long-content-tag-like-this-one-or-is-this-going-to-break-something&utm_campaign=d
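    As far as I know, Google does not document a hard per-parameter limit for utm_content; the practical ceiling is usually the overall URL length that browsers and servers tolerate (older Internet Explorer versions capped URLs at roughly 2,083 characters). A hedged sketch of a guard, where the UtmUrlBuilder name and the 2,000-character budget are assumptions rather than documented Google Analytics limits:

    using System;

    public static class UtmUrlBuilder
    {
        // 2,000 characters is an assumed safety budget, not a documented GA limit.
        private const int MaxUrlLength = 2000;

        public static string Build(string baseUrl, string source, string medium, string term, string content, string campaign)
        {
            var url = string.Format(
                "{0}?utm_source={1}&utm_medium={2}&utm_term={3}&utm_content={4}&utm_campaign={5}",
                baseUrl,
                Uri.EscapeDataString(source),
                Uri.EscapeDataString(medium),
                Uri.EscapeDataString(term),
                Uri.EscapeDataString(content),
                Uri.EscapeDataString(campaign));
            if (url.Length > MaxUrlLength)
                throw new ArgumentException("Campaign URL exceeds the assumed safe length.");
            return url;
        }
    }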

    Read the article

  • DKIM, SPF, PTR records are not working properly with my domain

    - by shihon
    I configured my server with a DKIM key, SPF record and PTR record, and the email system authenticates correctly. But when I start sending mail from the phplist interface to my ~50,000 users, my domain gets flagged as spam by Google. In the headers, the signed-by and mailed-by tags show my domain: appmail.co. I also tested my domain via the mail check provided by Port25; the report is: This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at . Thank you for using the verifier, The Port25 Solutions, Inc. team ========================================================== Summary of Results SPF check: pass DomainKeys check: neutral DKIM check: pass Sender-ID check: pass SpamAssassin check: ham ========================================================== Details: HELO hostname: app.appmail.co Source IP: 108.179.192.148 mail-from: [email protected] SPF check details: Result: pass ID(s) verified: [email protected] DNS record(s): appmail.co. SPF (no records) appmail.co. 14400 IN TXT "v=spf1 +a +mx +ip4:108.179.192.148 ?all" appmail.co. 14400 IN A 108.179.192.148 DomainKeys check details: Result: neutral (message not signed) ID(s) verified: [email protected] DNS record(s): DKIM check details: Result: pass (matches From: [email protected]) ID(s) verified: header.d=appmail.co Canonicalized Headers: content-type:multipart/alternative;'20'boundary=047d7b2eda75d8544d04c17b6841'0D''0A' to:[email protected]'0D''0A' from:shashank'20'sharma'20'<[email protected]>'0D''0A' subject:Test'0D''0A' message-id:<CADnDhbH9aDBk3Ho2-CrG7gwOoD6RNX0sFq4bWL64+kmo=9HjWg@mail.gmail.com>'0D''0A' date:Sat,'20'2'20'Jun'20'2012'20'16:44:50'20'+0530'0D''0A' mime-version:1.0'0D''0A' dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=appmail.co;'20's=default;'20'h=Content-Type:To:From:Subject:Message-ID:Date:MIME-Version;'20'bh=GS6uwlT+weKcrrLJ2I+cjBtWPq9nvhwRlNAJebOiQOk=;'20'b=; Canonicalized Body: --047d7b2eda75d8544d04c17b6841'0D''0A' Content-Type:'20'text/plain;'20'charset=UTF-8'0D''0A' '0D''0A' Hello'20'Senders'0D''0A' '0D''0A' --047d7b2eda75d8544d04c17b6841'0D''0A' Content-Type:'20'text/html;'20'charset=UTF-8'0D''0A' '0D''0A' Hello'20'Senders'0D''0A' '0D''0A' --047d7b2eda75d8544d04c17b6841--'0D''0A' DNS record(s): default._domainkey.appmail.co. 14400 IN TXT "v=DKIM1; k=rsa; p=MHwwDQYJKoZIhvcNAQEBBQADawAwaAJhALGCOdMeZRxRHoatH7/KCvI1CKS0wOOsTAq0LLgPsOpMolifpVQDKOWT2zq/6LHVmDVjXLbnWO2d4ry/riy7ei66pLpnAV5ceIUSjBRusI8jcF9CZhPrh/OImsKVUb9ceQIDAQAB;" NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM. Sender-ID check details: Result: pass ID(s) verified: [email protected] DNS record(s): appmail.co. SPF (no records) appmail.co. 14400 IN TXT "v=spf1 +a +mx +ip4:108.179.192.148 ?all" appmail.co.
    14400 IN A 108.179.192.148 SpamAssassin check details: SpamAssassin v3.3.1 (2010-03-16) Result: ham (-0.1 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 T_RP_MATCHES_RCVD Envelope sender domain matches handover relay domain 0.0 HTML_MESSAGE BODY: HTML included in message -0.5 BAYES_05 BODY: Bayes spam probability is 1 to 5% [score: 0.0288] -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.5 SINGLE_HEADER_1K A single header contains 1K-2K characters ========================================================== Original Email Return-Path: <[email protected]> Received: from app.appmail.co (108.179.192.148) by verifier.port25.com id hp7qqo11u9cc for <[email protected]>; Sat, 2 Jun 2012 07:14:52 -0400 (envelope-from <[email protected]>) Authentication-Results: verifier.port25.com; spf=pass [email protected] Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected] Authentication-Results: verifier.port25.com; dkim=pass (matches From: [email protected]) header.d=appmail.co Authentication-Results: verifier.port25.com; sender-id=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=appmail.co; s=default; h=Content-Type:To:From:Subject:Message-ID:Date:MIME-Version; bh=GS6uwlT+weKcrrLJ2I+cjBtWPq9nvhwRlNAJebOiQOk=;b=pNw3UQNMoNyZ2Ujv8omHGodKVu/55S8YdBEsA5TbRciga/H7f+5noiKvo60vU6oXYyzVKeozFHDoOEMV6m5UTgkdBefogl+9cUIbt5CSrTWA97D7tGS97JblTDXApbZH; Received: from mail-pb0-f46.google.com ([209.85.160.46]:57831) by app.appmail.co with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SamIF-00055f-Om for [email protected]; Sat, 02 Jun 2012 16:44:51 +0530 Received: by pbbrp8 with SMTP id rp8so4165728pbb.5 for <[email protected]>; Sat, 02 Jun 2012 04:14:51 -0700 (PDT) MIME-Version: 1.0 Received: by 10.68.216.33 with SMTP id on1mr19414885pbc.105.1338635690988; Sat, 02 Jun 2012 04:14:50 -0700 (PDT) Received: by 10.143.66.13 with HTTP; Sat, 2 Jun 2012 04:14:50 -0700 (PDT) Date: Sat, 2 Jun 2012 16:44:50 +0530 Message-ID: <CADnDhbH9aDBk3Ho2-CrG7gwOoD6RNX0sFq4bWL64+kmo=9HjWg@mail.gmail.com> Subject: Test From: shashank sharma <[email protected]> To: [email protected] Content-Type: multipart/alternative; boundary=047d7b2eda75d8544d04c17b6841 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - app.appmail.co X-AntiAbuse: Original Domain - verifier.port25.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - appmail.co --047d7b2eda75d8544d04c17b6841 Content-Type: text/plain; charset=UTF-8 Hello Senders --047d7b2eda75d8544d04c17b6841 Content-Type: text/html; charset=UTF-8 Hello Senders --047d7b2eda75d8544d04c17b6841-- I also tried sending mail to Yahoo and Rediff, but the mail lands in spam. Please help me sort out this issue.

    Read the article

  • DragonRise USB Gamepad not working

    - by Gaurav Butola
    I have a gamepad which is not working. I say "not working" because I was playing Urban Terror and the game was not responding to gamepad button presses. How do I get the gamepad to work? I tried it in some other games (Torcs, SuperTuxKart, Enemy Territory...), but the same thing happens: there is no response to any of the gamepad button presses. Here is the output of lsusb Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 003: ID 0079:0011 DragonRise Inc. Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device 003 on the third line is my gamepad.

    Read the article

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database in its backend. Every week a new release of the product is launched, and it needs to fire some DML and DDL queries at the database. I usually test the product release in a dummy database before applying it to the main database. I create a database dump using the exp command, then import it into the dummy database using imp. Then I test the product in the dummy database and check whether there are any errors. This exp/imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (which would be independent of the live one)? Or is there any option to always keep the dummy database in sync with the original database? This could be done by making the product fire its DML and DDL queries at both databases, but that would be a HUGE performance problem. How can I overcome this?

    Read the article

  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4: Two simple classes and one association between them (one to many): One condition to filter out soft-deleted entities (WHERE Deleted = 0): 100 records in the database; A simple query: var l = ctx.Person.Include("Address").Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497)); Will produce the following SQL: SELECT [Extent1].[Id] AS [Id], [Extent1].[FullName] AS [FullName], [Extent1].[AddressId] AS [AddressId], [Extent202].[Id] AS [Id1], [Extent202].[Name] AS [Name], [Extent202].[Number] AS [Number] FROM [dbo].[Person] AS [Extent1] LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id]) LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id]) LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id]) LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id]) LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id]) ... WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number])) ... And will result in 680 MB of memory being taken! Now, Entity Framework has been historically known for producing less than optimal SQL, but 680 MB for 100 entities?! According to Microsoft, the problem will be addressed in the following version; there is a Connect issue open. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming in version 5, but by reading it, I got even more concerned: “Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache.” Say what?! The next version of Entity Framework will spawn timer threads?! When Code First came along, I thought it was a step in the right direction. Sure, it didn’t include some things that NHibernate did for quite some time – for example, different strategies for Id generation that do not rely on IDENTITY columns, which makes INSERT batching impossible, or support for enumerated types – but I thought these would come with time. Now, enumerated types have, but so have… timer threads! I’m afraid Entity Framework is becoming a monster.
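    The post does not offer a workaround, but here is a hedged sketch of one (my own, assuming the same Person/Address model as above): replace Include() with a projection so that EF materializes plain values instead of tracked entities. Since the join explosion comes from the conditional mapping on the association, profile the generated SQL to confirm this actually helps in your exact model before relying on it.

    // Hedged workaround sketch, not from the original post. Projecting to an
    // anonymous type avoids Include() and change tracking, which in EF 4
    // typically keeps the generated SQL to a simple join.
    var people = ctx.Person
        .Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926)
                 || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497))
        .Select(x => new
        {
            x.Id,
            x.FullName,
            AddressName = x.Address.Name,
            AddressNumber = x.Address.Number
        })
        .ToList();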

    Read the article

  • Partner WebCasts: EMEA Alliances and Channels Hardware Webinars, July 2012

    - by rituchhibber
    Dear partner, Oracle is pleased to invite you to the following webcasts dedicated to our EMEA partner community and designed to provide you with important news on our SPARC and Storage product portfolios. Please ensure you don't miss these unique learning opportunities! 1. How to Make Money Selling SPARC! 3PM CET (2pm UKT), Tuesday, July 10, 2012 The webcast will be hosted by Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant. Agenda: to bring our partners timely, valuable information focused on increasing their success in selling SPARC systems. The webcast will be focused on specific topics and will last approximately 30 minutes. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. REGISTER NOW 2. Introduction to Oracle’s New StorageTek SL150 Modular Tape Library 3pm CET (2pm UK), Thursday, July 12, 2012 This webcast will help you to understand Oracle's new StorageTek SL150 Modular Tape Library, which is the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley from Tape Product Management will introduce you to the latest addition to the Oracle Tape Storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product’s features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. REGISTER NOW Delivery Format This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Note: Please join the call 10 minutes before the scheduled start time. We look forward to your participation. Best regards, Giuseppe Facchetti EMEA Partner Business Development Manager, Oracle Hardware Sales Sasan Moaveni EMEA Storage Sales Manager, Oracle Hardware Sales

    Read the article

  • Is it wise to store a big lump of json on a database row

    - by Ieyasu Sawada
    I have this project which stores product details from amazon into the database. Just to give you an idea on how big it is: [{"title":"Genetic Engineering (Opposing Viewpoints)","short_title":"Genetic Engineering ...","brand":"","condition":"","sales_rank":"7171426","binding":"Book","item_detail_url":"http://localhost/wordpress/product/?asin=0737705124","node_list":"Books > Science & Math > Biological Sciences > Biotechnology","node_category":"Books","subcat":"","model_number":"","item_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=128","details_url":"http://localhost/wordpress/product/?asin=0737705124","large_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/large-notfound.png","medium_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/medium-notfound.png","small_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/small-notfound.png","thumbnail_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/thumbnail-notfound.png","tiny_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/tiny-notfound.png","swatch_img":"http://localhost/wordpress/wp-content/plugins/ecom/img/swatch-notfound.png","total_images":"6","amount":"33.70","currency":"$","long_currency":"USD","price":"$33.70","price_type":"List Price","show_price_type":"0","stars_url":"","product_review":"","rating":"","yellow_star_class":"","white_star_class":"","rating_text":" of 5","reviews_url":"","review_label":"","reviews_label":"Read all ","review_count":"","create_review_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=132","create_review_label":"Write a review","buy_url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=19186","add_to_cart_action":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/add_to_cart.php","asin":"0737705124","status":"Only 7 left in stock.","snippet_condition":"in_stock","status_class":"ninstck","customer_images":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/31FIM-YIUrL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg","http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg"],"disclaimer":"","item_attributes":[{"attr":"Author","value":"Greenhaven Press"},{"attr":"Binding","value":"Hardcover"},{"attr":"EAN","value":"9780737705126"},{"attr":"Edition","value":"1"},{"attr":"ISBN","value":"0737705124"},{"attr":"Label","value":"Greenhaven Press"},{"attr":"Manufacturer","value":"Greenhaven Press"},{"attr":"NumberOfItems","value":"1"},{"attr":"NumberOfPages","value":"224"},{"attr":"ProductGroup","value":"Book"},{"attr":"ProductTypeName","value":"ABIS_BOOK"},{"attr":"PublicationDate","value":"2000-06"},{"attr":"Publisher","value":"Greenhaven Press"},{"attr":"SKU","value":"G0737705124I2N00"},{"attr":"Studio","value":"Greenhaven Press"},{"attr":"Title","value":"Genetic Engineering (Opposing Viewpoints)"}],"customer_review_url":"http://localhost/wordpress/wp-content/ecom-customer-reviews/0737705124.html","flickr_results":["http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/5105560852_06c7d06f14_m.jpg"],"freebase_text":"No around the web data available yet","freebase_image":"http://localhost/wordpress/wp-content/plugins/ecom/img/freebase-notfound.jpg","ebay_related_items":[{"title":"Genetic Engineering (Introducing Issues With Opposing Viewpoints), , 
Good Book","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=12165","currency_id":"$","current_price":"26.2"},{"title":"Genetic Engineering Opposing Viewpoints by DAVID BENDER - 1964 Hardcover","image":"http://localhost/wordpress/wp-content/uploads/2013/10/ecom_images/140.jpg","url":"http://localhost/wordpress/wp-content/ecom-plugin-redirects/ecom_redirector.php?id=130","currency_id":"AUD","current_price":"11.99"}],"no_follow":"rel=\"nofollow\"","new_tab":"target=\"_blank\"","related_products":[],"super_saver_shipping":"","shipping_availability":"","total_offers":"7","added_to_cart":""}] So the structure for the table is: asin title details (the product details in json) Will the performance suffer if I have to store like 10,000 products? Is there any other way of doing this? I'm thinking of the following, but the current setup is really the most convenient one since I also have to use the data on the client side: store the product details in a file. So something like ASIN123.json store the product details in one big file. (I'm guessing it will be a drag to extract data from this file) store each of the fields in the details in its own table field Thanks in advance!

    Read the article

  • New dates - Partner Sales trainings in the Nordics

    - by ann-kristin.hahne(at)oracle.com
    Finland/Espoo: Tue 01.02.2011, 9:00-11:00 · Tue 01.03.2011, 9:00-11:00 · Tue 05.04.2011, 9:00-11:00 · Tue 03.05.2011, 9:00-11:00. Norway/Lysaker: 8/2 at Oracle, 11:00-13:30 · 5/4 at Oracle, 11:00-13:30 · 3/5 at Oracle, 11:00-13:30. Sweden/Stockholm, Lunda: 8/2, 09:00-11:00 · 8/3, half-day Oracle training · 5/4, 09:00-11:00 · 3/5, 09:00-11:00. Register with your local Tech Data Azlan product manager: Denmark: Erik Vedel, +45 2093 7575 · Finland: Peter Ekström, +358 (0)201 553 638 · Norway: Jermund Ottermo, +47 22 89 72 43 · Sweden: Sara Lavandler, +46 (0)8 795 2000

    Read the article

  • Oracle OpenWorld Countdown Begins

    - by Michelle Kimihira
    Oracle OpenWorld is a little over 3 weeks away and it is bigger than ever! We are very excited to meet with you and share our exciting innovations around Oracle Fusion Middleware. To help you navigate, there will be a series of blogs to help you make the most out of the event. Thomas Kurian, Executive Vice President, Product Development, will be delivering his keynote, “The Oracle Cloud: Oracle’s Cloud Platform and Applications Strategy”, on Tuesday, October 2 at 8:00 AM – 9:45 AM in Moscone North, Hall D. Be sure to attend this session and gain insight into how Oracle’s complete suite of cloud applications is transforming how customers manage their businesses. Here are the top 5 Oracle Fusion Middleware General Sessions you don’t want to miss: Monday, 10/1 10:45 AM – 11:45 AM GEN9504 - General Session: Innovation Platform for Oracle Apps, Including Fusion Applications Amit Zavery, Vice President, Fusion Middleware Product Management Moscone West, 3002/3004 Monday, 10/1 1:45 PM – 2:45 PM GEN11554 – General Session: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies Moscone West, 3002/3004 Monday, 10/1 4:45 PM – 5:45 PM GEN11422 – General Session: Building and Managing a Private Oracle Java and Middleware Cloud Moscone West, 3014 Tuesday, 10/2 10:15 AM – 11:15 AM GEN9394 - General Session: Oracle Fusion Middleware Strategies Driving Business Innovation Hassan Rizvi, Executive Vice President of Product Development Moscone North, Hall D Tuesday, 10/2 11:45 AM – 12:45 PM CON9162 – Oracle Fusion Middleware: Meet This Year’s Most Impressive Customer Projects Moscone West, 3001 Here is what else you can expect to see on the Oracle Fusion Middleware Blog leading up to Oracle OpenWorld 2012. • Week of 10-14 September: Best of Oracle Fusion Middleware and Oracle Fusion Middleware for Enterprise Applications • Week of 17-21 September: What to expect in Hassan Rizvi’s (Executive Vice President of Product Development) and Amit Zavery’s (Vice President of Product Management) sessions • Week of 24-28 September: All Things Mobile and Fusion Middleware Lineup

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >