Search Results



  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas and is not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:

    - steps to set up/configure the build environment, once
    - steps to do after editing any source file of this kernel module (.c and .h), converting that edit into a new kernel module (.ko)

    (A sketch of what these two recipes might look like follows the list of sources below.) The sources that have been used are:

    - build one kernel module - Google search
    - http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
    - http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
    - http://www.pixelbeat.org/docs/rebuild_kernel_module.html
    - How do I build a single in-tree kernel module?
    - http://ubuntuforums.org/showthread.php?t=1153067
    - http://ubuntuforums.org/showthread.php?t=2112166
    - http://ubuntuforums.org/showthread.php?t=1115593
    - build one kernel module ubuntu - Google search
    - 'make +single +kernel +module' - Ask Ubuntu
    - 'make +kernel +module' - Ask Ubuntu
    - My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
    - '"Invalid module format"' - Ask Ubuntu
    - Driver installation: compiling source code for newer kernel
    - Modprobe: 'Invalid module format', yet works after insmod
    - "Symbol version dump" "is missing" - Google search
    - http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
    - Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
    - "no symbol version for module_layout" when trying to load usbhid.ko
    - Broken links inside Linux header file folder
    - 'make modules_install' - Ask Ubuntu
    - 'modules_install' - Ask Ubuntu
    - Empty build directory in custom compiled kernel
    - Not able to see pr_info output
    - In which directory are the kernel source files and how can I recompile it?
    - How can I compile and install that patched libata-eh.c file?
    - 'modules_install +depmod' - Ask Ubuntu
    - modules_install depmod - Google search
    - "make modules_install" - Google search
    - http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
    - http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
    - https://wiki.ubuntu.com/KernelCustomBuild
    - http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
    - http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
    - "make prepare" - Google search
    - "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
    - http://ubuntuforums.org/showthread.php?t=1963515
    - ubuntu "make prepare" version - Google search
    - http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
    - https://help.ubuntu.com/community/Kernel/Compile
    - How do I compile a kernel module?
    - How to add a custom driver to my kernel?
    - Compile and loading kernel module without compiling the kernel
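    A sketch of what the two recipes might look like, pieced together from the sources above. Treat the details as assumptions to verify on your own system: the module directory (drivers/scsi/mvsas), the availability of deb-src lines for apt-get source, and the copying of the distribution's Module.symvers (which is what avoids the "no symbol version for module_layout" errors mentioned in the sources) may all differ:

        # --- recipe 1: one-time setup of the build environment ---
        sudo apt-get build-dep linux-image-$(uname -r)    # install build dependencies
        apt-get source linux-image-$(uname -r)            # needs deb-src entries in sources.list
        cd linux-*                                        # the unpacked source tree
        cp /boot/config-$(uname -r) .config               # reuse the running kernel's config
        cp /usr/src/linux-headers-$(uname -r)/Module.symvers .   # symbol versions of the stock kernel
        make oldconfig
        make prepare && make modules_prepare

        # --- recipe 2: after each edit of a .c or .h file in the module ---
        make M=drivers/scsi/mvsas                         # rebuild only mvsas.ko
        sudo cp drivers/scsi/mvsas/mvsas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/mvsas/
        sudo depmod -a
        sudo modprobe -r mvsas && sudo modprobe mvsas     # reload the patched module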


  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used.

    Create a sample application to use the caching solution

    Create a test SQL Server database

    This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it:

        USE CacheSample
        GO

        CREATE TABLE Sale(DayCount smallint, Sales money)
        CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount)
        GO

        INSERT Sale VALUES (1,120)
        INSERT Sale VALUES (2,60)
        INSERT Sale VALUES (3,125)
        INSERT Sale VALUES (4,40)

        DECLARE @DayCount smallint, @Sales money
        SET @DayCount = 5
        SET @Sales = 10

        WHILE @DayCount < 5000
        BEGIN
            INSERT Sale VALUES (@DayCount,@Sales)
            SET @DayCount = @DayCount + 1
            SET @Sales = @Sales + 15
        END

    Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script:

        USE [CacheSample]
        GO

        SET ANSI_NULLS ON
        GO

        SET QUOTED_IDENTIFIER ON
        GO

        -- =============================================
        -- Author:      Robin
        -- Create date:
        -- Description:
        -- =============================================
        CREATE PROCEDURE [dbo].[spGetRunningTotals]
            -- Add the parameters for the stored procedure here
            @HighestDayCount smallint = null
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            IF @HighestDayCount IS NULL
                SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale

            DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)

            DECLARE @DayCount smallint,
                    @Sales money,
                    @RunningTotal money

            SET @RunningTotal = 0
            SET @DayCount = 0

            DECLARE rt_cursor CURSOR
            FOR
            SELECT DayCount, Sales
            FROM Sale
            ORDER BY DayCount

            OPEN rt_cursor

            FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales

            WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount
            BEGIN
                SET @RunningTotal = @RunningTotal + @Sales
                INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)
                FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales
            END

            CLOSE rt_cursor
            DEALLOCATE rt_cursor

            SELECT DayCount, Sales, RunningTotal
            FROM @SaleTbl
        END
        GO

    Create the Sample ASP.NET application

    In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        namespace CacheSample.BusinessObjects
        {
            public class Sale
            {
                public Int16 DayCount { get; set; }
                public decimal Sales { get; set; }
                public decimal RunningTotal { get; set; }

                public static IEnumerable<Sale> GetSales(int? highestDayCount)
                {
                    List<Sale> sales = new List<Sale>();

                    SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);
                    if (highestDayCount.HasValue)
                        highestDayCountParameter.Value = highestDayCount;
                    else
                        highestDayCountParameter.Value = DBNull.Value;

                    string connectionStr = System.Configuration.ConfigurationManager.ConnectionStrings["CacheSample"].ConnectionString;

                    using (SqlConnection sqlConn = new SqlConnection(connectionStr))
                    using (SqlCommand sqlCmd = sqlConn.CreateCommand())
                    {
                        sqlCmd.CommandText = "spGetRunningTotals";
                        sqlCmd.CommandType = CommandType.StoredProcedure;
                        sqlCmd.Parameters.Add(highestDayCountParameter);

                        sqlConn.Open();

                        using (SqlDataReader dr = sqlCmd.ExecuteReader())
                        {
                            while (dr.Read())
                            {
                                Sale newSale = new Sale();
                                newSale.DayCount = dr.GetInt16(0);
                                newSale.Sales = dr.GetDecimal(1);
                                newSale.RunningTotal = dr.GetDecimal(2);

                                sales.Add(newSale);
                            }
                        }
                    }

                    return sales;
                }
            }
        }

    The static GetSales() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class; it then returns a List of the Sale objects, as IEnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start-up page. This page will contain a single button to call the GetSales() method and a label to display the results.
    The HTML markup and the C# code-behind are shown below:

    ShowSales.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Cache Sample - Show All Sales</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click"
                    Text="Get All Sales" />
                &nbsp;&nbsp;&nbsp;
                <asp:Label ID="lblResults" runat="server"></asp:Label>
            </div>
            </form>
        </body>
        </html>

    ShowSales.aspx.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        using CacheSample.BusinessObjects;

        namespace CacheSample.UI
        {
            public partial class ShowSales : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void btnTest1_Click(object sender, EventArgs e)
                {
                    System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
                    stopWatch.Start();

                    var sales = Sale.GetSales(null);

                    var lastSales = sales.Last();

                    stopWatch.Stop();

                    lblResults.Text = string.Format(
                        "Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms",
                        sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);
                }
            }
        }

    Finally we need to add a connection string to the CacheSample SQL Server database, called CacheSample, to the web.config file:

        <?xml version="1.0"?>

        <configuration>

          <connectionStrings>
            <add name="CacheSample"
                 connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>

          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>

        </configuration>

    Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450 ms. Next I shall look at a solution that uses ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster.

    Adding Data Caching Support

    I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so a reference to System.Configuration needs adding to the project.

    ICacheProvider<T> Interface

    The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache, or to retrieve the data from the data source if it is not present in the cache. Dependency injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented.
    As data of any type may be retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below:

        using System;
        using System.Collections.Generic;

        namespace CacheSample.Caching
        {
            public interface ICacheProvider
            {
            }

            public interface ICacheProvider<T> : ICacheProvider
            {
                T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);

                IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
            }
        }

    The empty non-generic interface will be used as a type in a Dictionary generic collection later, to store instances of the ICacheProvider<T> implementation for reuse. I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch() methods; the difference between these is that one returns a single instance of the type T and the other returns an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods take a key parameter, which uniquely identifies the cached data; a delegate of type Func<T> or Func<IEnumerable<T>>, which provides the code to retrieve the data from the store if it is not present in the cache; and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article.

    CacheProviderFactory Class

    We need a mechanism for creating instances of our ICacheProvider<T> interface, using dependency injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObjects project) to get an instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below:

        using System;
        using System.Collections.Generic;

        using CacheSample.Caching.Configuration;

        namespace CacheSample.Caching
        {
            public static class CacheProviderFactory
            {
                private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();
                private static object syncRoot = new object();

                /// <summary>
                /// Factory method to create or retrieve an implementation of the
                /// ICacheProvider interface for type <typeparamref name="T"/>.
                /// </summary>
                /// <typeparam name="T">
                /// The type that this cache provider instance will work with
                /// </typeparam>
                /// <returns>An instance of the implementation of ICacheProvider for type
                /// <typeparamref name="T"/>, as specified by the application
                /// configuration</returns>
                public static ICacheProvider<T> GetCacheProvider<T>()
                {
                    ICacheProvider<T> cacheProvider = null;

                    // Get the Type reference for the type parameter T
                    Type typeOfT = typeof(T);

                    // Lock the access to the cacheProviders dictionary
                    // so multiple threads can work with it
                    lock (syncRoot)
                    {
                        // First check if an instance of the ICacheProvider implementation
                        // already exists in the cacheProviders dictionary for the type T
                        if (cacheProviders.ContainsKey(typeOfT))
                            cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];
                        else
                        {
                            // There is not already an instance of the ICacheProvider in
                            // cacheProviders for the type T, so we need to create one

                            // Get the Type reference for the application's implementation of
                            // ICacheProvider from the configuration
                            Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current.CacheProviderType);
                            if (cacheProviderType != null)
                            {
                                // Now get a Type reference for the Cache Provider with the
                                // type T generic parameter
                                Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });
                                if (typeOfCacheProviderTypeForT != null)
                                {
                                    // Create the instance of the Cache Provider and add it to
                                    // the cacheProviders dictionary for future use
                                    cacheProvider = (ICacheProvider<T>)Activator.CreateInstance(typeOfCacheProviderTypeForT);
                                    cacheProviders.Add(typeOfT, cacheProvider);
                                }
                            }
                        }
                    }

                    return cacheProvider;
                }
            }
        }

    As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below.

    CacheProviderConfigurationSection Class

    The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application's configuration. To handle this, create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType.
    The C# code for this class is shown below:

        using System;
        using System.Configuration;

        namespace CacheSample.Caching.Configuration
        {
            internal class CacheProviderConfigurationSection : ConfigurationSection
            {
                public static CacheProviderConfigurationSection Current
                {
                    get
                    {
                        return (CacheProviderConfigurationSection)ConfigurationManager.GetSection("cacheProvider");
                    }
                }

                [ConfigurationProperty("type", IsRequired=true)]
                public string CacheProviderType
                {
                    get
                    {
                        return (string)this["type"];
                    }
                }
            }
        }

    Adding Data Caching to the Sale Class

    We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sale type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrieveData delegate in a call to the cache provider's Fetch() method. The C# code for the modified GetSales() method is shown below:

        public static IEnumerable<Sale> GetSales(int? highestDayCount)
        {
            string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);

            return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(cacheKey,
                delegate()
                {
                    List<Sale> sales = new List<Sale>();

                    SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);
                    if (highestDayCount.HasValue)
                        highestDayCountParameter.Value = highestDayCount;
                    else
                        highestDayCountParameter.Value = DBNull.Value;

                    string connectionStr = System.Configuration.ConfigurationManager.ConnectionStrings["CacheSample"].ConnectionString;

                    using (SqlConnection sqlConn = new SqlConnection(connectionStr))
                    using (SqlCommand sqlCmd = sqlConn.CreateCommand())
                    {
                        sqlCmd.CommandText = "spGetRunningTotals";
                        sqlCmd.CommandType = CommandType.StoredProcedure;
                        sqlCmd.Parameters.Add(highestDayCountParameter);

                        sqlConn.Open();

                        using (SqlDataReader dr = sqlCmd.ExecuteReader())
                        {
                            while (dr.Read())
                            {
                                Sale newSale = new Sale();
                                newSale.DayCount = dr.GetInt16(0);
                                newSale.Sales = dr.GetDecimal(1);
                                newSale.RunningTotal = dr.GetDecimal(2);

                                sales.Add(newSale);
                            }
                        }
                    }

                    return sales;
                },
                null,
                new TimeSpan(0, 10, 0));
        }

    This example passes the code to retrieve the Sales data from the database to the cache provider as an anonymous method; however, it could also be written as a lambda, as the sketch below shows.
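    For instance, if the ADO.NET code were first moved into a private helper method (named LoadSalesFromDatabase here purely for illustration; it is not part of the sample code above), the lambda version might look like this:

        public static IEnumerable<Sale> GetSales(int? highestDayCount)
        {
            string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);

            return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(
                cacheKey,
                // Hypothetical helper containing the ADO.NET code shown above;
                // the lambda still captures the highestDayCount parameter.
                () => LoadSalesFromDatabase(highestDayCount),
                null,
                new TimeSpan(0, 10, 0));
        }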
    The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally, the absolute expiry is set to null and the relative expiry to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to retrieve data from a cache after the first time the data has been retrieved from the database.

    Implementing an ASP.NET Cache Provider

    The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the cache provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class a generic class by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;

        using CacheSample.Caching;

        namespace CacheSample.CacheProvider
        {
            public class AspNetCacheProvider<T> : ICacheProvider<T>
            {
                #region ICacheProvider<T> Members

                public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
                }

                public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
                }

                #endregion

                #region Helper Methods

                private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    U value;
                    if (!TryGetValue<U>(key, out value))
                    {
                        value = retrieveData();
                        if (!absoluteExpiry.HasValue)
                            absoluteExpiry = Cache.NoAbsoluteExpiration;

                        if (!relativeExpiry.HasValue)
                            relativeExpiry = Cache.NoSlidingExpiration;

                        HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
                    }
                    return value;
                }

                private bool TryGetValue<U>(string key, out U value)
                {
                    object cachedValue = HttpContext.Current.Cache.Get(key);
                    if (cachedValue == null)
                    {
                        value = default(U);
                        return false;
                    }
                    else
                    {
                        try
                        {
                            value = (U)cachedValue;
                            return true;
                        }
                        catch
                        {
                            value = default(U);
                            return false;
                        }
                    }
                }

                #endregion
            }
        }

    The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for an element in the HttpContext.Current.Cache with the specified cache key, and if one is found tries to cast it to the specified type (either T or IEnumerable<T>). If the cached element is found, the FetchAndCache() method simply returns it. If it is not found in the cache, the method calls the retrieveData delegate to get the data from the data source, and then adds the result to the HttpContext.Current.Cache.

    The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI web.config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type attribute set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below:

        <?xml version="1.0"?>

        <configuration>

          <configSections>
            <section name="cacheProvider"
                     type="CacheSample.Caching.Configuration.CacheProviderConfigurationSection, CacheSample.Caching" />
          </configSections>

          <connectionStrings>
            <add name="CacheSample"
                 connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>

          <cacheProvider type="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
          </cacheProvider>

          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>

        </configuration>

    One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation.

    Conclusion

    After implementing this solution, you should have a working cache provider mechanism that allows the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actual caching implementation.
    If the UI is not ASP.NET based, if for example it is WinForms or WPF, the implementation of ICacheProvider<T> would be written around whatever technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store (a minimal sketch of such a provider appears at the end of this article). The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as the System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build-up of seldom-used cache providers in the application memory, as they could be removed from the cache if not used often enough; although in reality there are probably unlikely to be vast numbers of cache provider implementation instances, as most applications do not have a massive number of business object or model types.
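    As an illustration of that last point about non-web UIs, here is a minimal sketch of a standalone, in-memory implementation. It is my own illustration rather than part of the article's sample: the namespace, the lazy expiry policy, and the treatment of relative expiry as time since last access are all assumptions.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        using CacheSample.Caching;

        namespace CacheSample.SimpleCacheProvider
        {
            // A minimal in-memory ICacheProvider<T> for WinForms/WPF-style hosts.
            // Each closed generic type gets its own static store; expiry is
            // checked lazily when an entry is read rather than by a timer.
            public class InMemoryCacheProvider<T> : ICacheProvider<T>
            {
                private class CacheEntry
                {
                    public object Value;
                    public DateTime? AbsoluteExpiry;
                    public TimeSpan? RelativeExpiry;
                    public DateTime LastAccessed;
                }

                private static readonly ConcurrentDictionary<string, CacheEntry> store =
                    new ConcurrentDictionary<string, CacheEntry>();

                public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<T>(key, () => retrieveData(), absoluteExpiry, relativeExpiry);
                }

                public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<IEnumerable<T>>(key, () => retrieveData(), absoluteExpiry, relativeExpiry);
                }

                private U FetchAndCache<U>(string key, Func<object> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    CacheEntry entry;
                    DateTime now = DateTime.UtcNow;

                    if (store.TryGetValue(key, out entry))
                    {
                        bool expired =
                            (entry.AbsoluteExpiry.HasValue && now >= entry.AbsoluteExpiry.Value) ||
                            (entry.RelativeExpiry.HasValue && now - entry.LastAccessed >= entry.RelativeExpiry.Value);

                        if (!expired)
                        {
                            entry.LastAccessed = now;
                            return (U)entry.Value;
                        }
                    }

                    // Not cached, or expired: retrieve the data and store a fresh copy.
                    entry = new CacheEntry
                    {
                        Value = retrieveData(),
                        AbsoluteExpiry = absoluteExpiry,
                        RelativeExpiry = relativeExpiry,
                        LastAccessed = now
                    };
                    store[key] = entry;
                    return (U)entry.Value;
                }
            }
        }

    It could then be wired in through exactly the same <cacheProvider> configuration element, with the type attribute pointing at this class (e.g. CacheSample.SimpleCacheProvider.InMemoryCacheProvider`1) instead of AspNetCacheProvider.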


  • Visual Studio 2010 plus Help Index: have your cake and eat it too

    - by Adrian Hara
    Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot-kind-of-leap really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6).

    Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know, the thing where you just go and type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly. Number two is that it's entirely web based and runs, by default, in the browser. So imagine: when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore, which represents a serious productivity hit.

    These and many other problems were discussed extensively (and rather vocally) on Connect, but it seems MS chose to ignore them and release the new help system anyway, with the promise that more features will be added in a later release. Again, it kind of amazes me that they chose to ship a product with LESS features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation, I just want it to be usable; I would've thought that by now the software community, and especially MS, would've learned this lesson. In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". I mean, come on: a) it's not like my aunt's using Visual Studio 2010 and she represents the regular user, b) all software developers are, by definition, power users, and c) it's a freakin' help system, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release really is a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really REALLY good products. And then they go and screw up the help! How lame is that?!

    Anyway, it's not all gloom-and-doom. Luckily there is a desktop app by Robert Chandler (to whom I hereby declare eternal gratitude) which presents a UI over the new help system that's very close to what was there in VS2008. It still has some minor issues, but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.


  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL code smells aren't necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

    SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation. Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail and I'm not now going to try (though a tiny demonstration of both appears at the end of this article). Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced database developer, this is a more frightening read than a vampire story.

    This is why one of the SQL code smells that makes me edgy, in my own or other people's code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like the one I'm going to show you is excellent for tidying up your code before you check it back into source control!

    1/ Checking Parameters only

    If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

        ;WITH userParameter AS (
            SELECT
                c.NAME AS ParameterName,
                OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
                t.name + ' '
                + CASE --we may have to put in the length
                    WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                    THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                               ELSE CONVERT(VARCHAR(4),
                                        CASE WHEN t.name IN ('nchar', 'nvarchar')
                                             THEN c.max_length / 2 ELSE c.max_length
                                        END)
                               END + ')'
                    WHEN t.name IN ('decimal', 'numeric')
                    THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                    ELSE ''
                  END --we've done with putting in the length
                + CASE WHEN XML_collection_ID <> 0
                    THEN --deal with object schema names
                        '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
                        + COALESCE(
                            (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                             FROM sys.xml_schema_collections sc
                             INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                             WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                    ELSE ''
                  END AS [DataType]
            FROM sys.parameters c
            INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
            WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
              AND parameter_id > 0
        )
        SELECT CONVERT(CHAR(80), objectName + '.' + ParameterName), DataType
        FROM userParameter
        WHERE ParameterName IN
            (SELECT ParameterName FROM userParameter
             GROUP BY ParameterName
             HAVING MIN(Datatype) <> MAX(DataType))
        ORDER BY ParameterName

    So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

    2/ Columns and Parameters

    Actually, once we've cleared up the mess we've made of our parameter naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype). We'll have to leave them out for this check. Voila! A slight modification of the first routine:

        ;WITH userObject AS (
            SELECT
                Name AS DataName, --the actual name of the parameter or column ('@' removed)
                --and the qualified object name of the routine
                OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
                --now the harder bit: the definition of the datatype.
                TypeName + ' '
                + CASE --we may have to put in the length, e.g. CHAR (10)
                    WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
                    THEN '(' + CASE WHEN MaxLength = -1 THEN 'MAX'
                               ELSE CONVERT(VARCHAR(4),
                                        CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                             THEN MaxLength / 2 ELSE MaxLength
                                        END)
                               END + ')'
                    WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
                    THEN '(' + CONVERT(VARCHAR(4), Precision) + ',' + CONVERT(VARCHAR(4), Scale) + ')'
                    ELSE ''
                  END --we've done with putting in the length
                + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
                    THEN --deal with object schema names
                        '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
                        + COALESCE(
                            (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)
                             FROM sys.xml_schema_collections sc
                             INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                             WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
                    ELSE ''
                  END AS [DataType],
                DataObjectType
            FROM
                (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
                        c.max_length AS MaxLength, c.precision AS [Precision],
                        c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,
                        is_XML_Document, 'P' AS DataobjectType
                 FROM sys.parameters c
                 INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
                 AND parameter_id > 0
                 UNION ALL
                 SELECT t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,
                        c.precision AS [Precision], c.scale AS [Scale],
                        c.[Object_id] AS ObjectID, XML_collection_ID, is_XML_Document,
                        'C' AS DataobjectType
                 FROM sys.columns c
                 INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
                 WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
                ) f
        )
        SELECT CONVERT(CHAR(80), objectName + '.'
               + CASE WHEN DataobjectType = 'P' THEN '@' ELSE '' END + DataName), DataType
        FROM userObject
        WHERE DataName IN
            (SELECT DataName FROM userObject
             GROUP BY DataName
             HAVING MIN(Datatype) <> MAX(DataType))
        ORDER BY DataName

    Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess.

    We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them!

        ;WITH userColumns AS (
            SELECT
                c.NAME AS columnName,
                OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
                REPLACE(t.name + ' '
                + CASE WHEN is_computed = 1
                    THEN ' AS ' + --do DDL for a computed column
                        (SELECT definition FROM sys.computed_columns cc
                         WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
                    --we may have to put in the length
                    WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')
                    THEN '(' + CASE WHEN c.Max_Length = -1 THEN 'MAX'
                               ELSE CONVERT(VARCHAR(4),
                                        CASE WHEN t.Name IN ('nchar', 'nvarchar')
                                             THEN c.Max_Length / 2 ELSE c.Max_Length
                                        END)
                               END + ')'
                    WHEN t.name IN ('decimal', 'numeric')
                    THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                    ELSE ''
                  END
                + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
                + CASE WHEN XML_collection_ID <> 0
                    THEN --deal with object schema names
                        '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
                        + COALESCE(
                            (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                             FROM sys.xml_schema_collections sc
                             INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                             WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                    ELSE ''
                  END
                + CASE WHEN is_identity = 1
                    THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                               AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                               AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                         THEN ''
                         ELSE ' NOT FOR REPLICATION '
                         END
                    ELSE ''
                  END
                + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
                + CASE WHEN c.default_object_id <> 0
                    THEN ' DEFAULT ' + object_Definition(c.default_object_id)
                    ELSE ''
                  END
                + CASE WHEN c.collation_name IS NULL THEN ''
                       WHEN c.collation_name <> (SELECT collation_name FROM sys.databases
                                                 WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
                       THEN COALESCE(' COLLATE ' + c.collation_name, '')
                       ELSE ''
                  END, '  ', ' ') AS [DataType]
            FROM sys.columns c
            INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
            WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
        )
        SELECT CONVERT(CHAR(80), objectName + '.' + columnName), DataType
        FROM userColumns
        WHERE columnName IN
            (SELECT columnName FROM userColumns
             GROUP BY columnName
             HAVING MIN(Datatype) <> MAX(DataType))
        ORDER BY columnName

    If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between null and non-null datatypes. So, I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing those out, and that will, I'm afraid, have to wait!
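    To make the danger concrete, here is a tiny demonstration of mine (not part of the listings above) of how quietly implicit conversion misbehaves on assignment:

        -- Assignment to a shorter string variable truncates without any error:
        DECLARE @LogEntry varchar(10)
        SET @LogEntry = 'This entry is far too long to fit'
        SELECT @LogEntry        -- returns 'This entry'

        -- Assignment to a lower-precision numeric silently rounds:
        DECLARE @Price decimal(5,2)
        SET @Price = 123.456
        SELECT @Price           -- returns 123.46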


  • It's intellisense for SQL Server

    - by Nick Harrison
    Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate. I have recently begun exploring some of their other offerings and came across this jewel: SQL Prompt, a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer.

    When you open a query window in a database, the plug-in kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback like a list of tables after you type SELECT. You can select one of the tables, specify *, and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the where clause, you are prompted by the names of columns in the selected tables. If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level.

    The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL. Some work better than others; some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure.

    Sadly no tool is perfect, and what would we be without a wish list. There are some features that I would like to see:

    - Make it easier to paste SQL in and out of code (strip off string builders, etc.)
    - Automate replacing hard-coded values with bind variables or parameters
    - In addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice; convert join to subquery and vice versa come to mind

    Wish list aside, this is a wonderful tool that easily saves me an hour or more on most weeks.


  • How valuable are you to your organization?

    - by Lance Shaw
    I don't know about you, but I find it easy to get bogged down with the daily list of tasks and deliverables. We all have lots to do and it all seems to be due tomorrow. If you are reading this blog, then your to-do list is almost certainly filled with tasks related to the management, processing and publishing of information. As we get mired in the daily routine of making sure that the content management needs of the organization are met, we can easily lose sight of the value that we bring. After all, if information and content is the lifeblood of our organizations, then surely maintaining the healthy flow of that information has real value. But how can you measure that value and bring it forward on your résumé or your list of achievements in time for your next performance review?

    The AIIM organization has spent a lot of time recently researching the value of certification for "information professionals". When it comes to enterprise content management (ECM) there are many areas of specialization, including records management, content archivist, digital asset manager, content librarian and more. Specialization can clearly drive up your value, but it can also lock you into a narrow niche area of focus. AIIM has found that what companies also need is someone that can apply their knowledge of how information is managed within the operational scope of the business in order to drive real, measurable strategic value. When you can showcase the value of a broader, business-wide mindset to your management, you have more opportunity to make professional progress and drive real growth where it counts: your paycheck.

    We here on the Oracle WebCenter team partnered with AIIM on the research they performed around the value of an information professional certification program. In a webinar this week, Doug Miles of AIIM and I will be talking about the results of that recent survey and what it is going to mean in the future to be recognized as a "Certified Information Professional" (CIP). Oracle sponsored this research to help individuals and companies understand the value of enterprise content management and what it means across the entire organization. I hope you will join us.

    If any of us were stopped in the street and asked about it, I bet most of us would think of ourselves as an "Information Professional". Now we have a way to actually prove it! There's only one downside that I can see: you will have to get your business cards updated to include the "CIP" acronym after your name. I think you will agree that is a price worth paying!


  • Windows 7 - traceroute hop with high latency! [closed]

    - by Mac
    I've been experiencing this problem for quite a while, and it's quite frustrating. I'll do a traceroute, to www.l.google.com, for example. This is the result (please note: I will replace some parts of personal information with text - i.e. ISP.IP is in reality an actual IP address, and ISPNAME replaces the actual ISP name):

        Tracing route to www.l.google.com [173.194.34.212]
        over a maximum of 30 hops:

          1     1 ms     1 ms    <1 ms  192.168.1.1
          2     9 ms     8 ms    10 ms  ISP.EXCHANGE.NAME [ISP.IP.172.205]
          3   161 ms   171 ms   177 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
          4    12 ms     9 ms    10 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
          5    10 ms     9 ms    17 ms  host-ISP.IP.224.165.ISPNAME.net [ISP.IP.224.165]
          6    10 ms     9 ms    10 ms  10.42.0.3
          7     9 ms     9 ms    10 ms  host-ISP.IP.202.129.ISPNAME.net [ISP.IP.202.129]
          8    10 ms     9 ms     9 ms  host-ISP.IP.209.33.ISPNAME.net [ISP.IP.209.33]
          9    77 ms   129 ms   164 ms  host-ISP.IP.198.162.ISPNAME.net [ISP.IP.198.162]
         10    43 ms    42 ms    43 ms  72.14.212.13
         11    42 ms    42 ms    42 ms  209.85.252.36
         12    59 ms    59 ms    59 ms  209.85.241.210
         13    60 ms    76 ms    68 ms  72.14.237.124
         14    59 ms    59 ms    58 ms  mad01s08-in-f20.1e100.net [173.194.34.212]

        Trace complete.

    Notice that there is a spike on the 3rd hop, but also notice that the 3rd and 4th hop are to the exact same destination. Furthermore, when I ping the offending hop separately, I get the low latency I would expect to that server:

        Pinging ISP.IP.215.246 with 32 bytes of data:

        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=12ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253

        Ping statistics for ISP.IP.215.246:
            Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 9ms, Maximum = 12ms, Average = 9ms

    I'm baffled as to why or how this is happening, and it seems to "fix itself" at random times. Here is an example of where it was working as expected: http://i.imgur.com/bysno.png Notice how many fewer hops were taken. Please note that all the posted results occurred within 10 minutes of testing. I've tried contacting my ISP, and they seem clueless; in their eyes, as long as "the download speed is not slow", they're doing everything right. Any insight would be very much appreciated, and thanks in advance!


  • Why is there nobody talking about an alternative to HTML & CSS? [closed]

    - by Nic
    HTML is such an old and cumbersome language, which was intended just to mark up text. Today it's very rare to see a static HTML website, or a site with only text or a very simple layout. As a web developer I find it inconvenient to use HTML & CSS: very repetitive and cumbersome. I think that for a lot of websites it could be simplified a lot. Tim Berners-Lee (W3) wrote a document named "The World Wide Web: Past, Present and Future" in August 1996:

        ... though HTML will be considered part of the established infrastructure (rather than an exciting new toy), there will always be new formats coming along, and it may be that a more powerful and perhaps a more consistent set of formats will eventually displace HTML.

    So, more than 15 years later, HTML is still here and it's here to stay. Why? Why does searching for XML alternatives bring so many relevant results, while searching for HTML alternatives brings almost no relevant results? Answers like "it's too hard to change a standard" aren't answering the question, since a lot of new standards have emerged since the initiation of the web. I'm also not searching for answers that suggest using tools to simplify the process, or formats that in any way depend on HTML or CSS; technologies that currently require a plugin and aren't even trying to become an open standard (like Flash) aren't an answer either.

    BTW, here are two articles, written more than two years ago, as food for thought; they might help with writing a better answer:

    - "HTML, CSS, and Web Development Practices: Past, Present, and Future", describing a very related problem, by Jens O. Meiert.
    - "A Brief History of HTML" by Scott Reynen. Here is a quote from the end:

        So now you can answer questions about HTML5 without even looking at the draft, which is handy, because the draft is 400+ pages long. Why is there a new <video> tag in HTML5? Because some browser vendor (maybe the one that also owns a large video site) wanted it. Why are there so many scriptable interface elements in HTML5? Because some browser vendor (maybe the one selling phones without Flash support) wants them. Why is there no support for RDFa in HTML5? Apparently no browser vendor wanted it.

    Is that the future?


  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem

    I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library.

    Translation (working)

    I've created a test scene as follows. From my understanding of the OpenGL coordinate system, the objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working)

    I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // Normally calculated from up and forward,
        // similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;

        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    I know that multiplying by the identity matrix would produce no rotation. The matrix created above is clearly not the identity matrix and therefore will apply some rotation. I thought that because this orientation is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong.

    Edit: projection matrix set-up

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height,
            0.1f, 1000.0f);
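    For what it's worth, a sketch of the usual fix, using the question's own variable names (my illustration, not a quote from OpenTK's documentation): a matrix whose rows are the camera's basis vectors maps camera space to world space, but a view matrix must do the opposite, so for an orthonormal basis you take the transpose (basis vectors as columns). An OpenGL-style camera also looks down -Z, so the third basis vector of the view rotation is the camera's backward vector:

        // Assumes OpenTK's row-vector convention (v' = v * M), as in the question.
        Vector3 b = -f; // camera "backward"; an OpenGL camera looks down -Z

        // Transposed basis: right/up/backward as COLUMNS gives world -> camera.
        Matrix4 rot = new Matrix4(
            r.X, u.X, b.X, 0,
            r.Y, u.Y, b.Y, 0,
            r.Z, u.Z, b.Z, 0,
            0,   0,   0,   1);

        Matrix4 view = translation * rot; // translate to the eye first, then rotate

    With r = UnitX, u = UnitY, f = -UnitZ this rotation becomes the identity, which matches the expectation described in the question.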

    Read the article

  • The View-Matrix and Alternative Calculations

    - by P. Avery
    I'm working on a radiosity processor in DirectX 9. The process requires that the camera be placed at the center of a mesh face and a 'screenshot' be taken facing 5 different directions: forward, up, down, left and right. The problem is that when the mesh face is facing up (look vector: 0, 1, 0), a view matrix cannot be determined using the standard look-at construction:

        Matrix4 LookAt( Vector3 eye, Vector3 target, Vector3 up )
        {
            // The "look-at" vector.
            Vector3 zaxis = normal(target - eye);
            // The "right" vector.
            Vector3 xaxis = normal(cross(up, zaxis));
            // The "up" vector.
            Vector3 yaxis = cross(zaxis, xaxis);

            // Create a 4x4 orientation matrix from the right, up, and at vectors.
            Matrix4 orientation = {
                xaxis.x, yaxis.x, zaxis.x, 0,
                xaxis.y, yaxis.y, zaxis.y, 0,
                xaxis.z, yaxis.z, zaxis.z, 0,
                0,       0,       0,       1
            };

            // Create a 4x4 translation matrix by negating the eye position.
            Matrix4 translation = {
                1,      0,      0,      0,
                0,      1,      0,      0,
                0,      0,      1,      0,
                -eye.x, -eye.y, -eye.z, 1
            };

            // Combine the orientation and translation to compute the view matrix.
            return ( translation * orientation );
        }

    The above function comes from http://3dgep.com/?p=1700. Is there a mathematical approach to this problem?

    Edit: A problem occurs when setting the view matrix to the up or down directions. Here is an example of the problem when facing down:

        D3DXVECTOR4 vPos( 3, 3, 3, 1 ), vEye( 1.5, 3, 3, 1 ), vLook( 0, -1, 0, 1 ),
                    vRight( 1, 0, 0, 1 ), vUp( 0, 0, 1, 1 );
        D3DXMATRIX mV, mP;
        D3DXMatrixPerspectiveFovLH( &mP, D3DX_PI / 2, 1, 0.5f, 2000.0f );
        D3DXMatrixIdentity( &mV );
        memcpy( ( void* )&mV._11, ( void* )&vRight,  sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._21, ( void* )&vUp,     sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._31, ( void* )&vLook,   sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._41, ( void* )&(-vEye), sizeof( D3DXVECTOR3 ) );
        D3DXVec4Transform( &vPos, &vPos, &( mV * mP ) );

    Results: vPos = D3DXVECTOR4( 1.5, -6, -0.5, 0 ). This vertex is not properly processed by the shader: because the homogeneous w value is 0, it cannot be normalized to a position within device space.
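    The degeneracy comes from cross(up, zaxis) being zero when the look direction is parallel to the up vector. The usual workaround is to detect that case and substitute a different reference up vector. A hedged sketch, assuming the Vector3 helpers used by LookAt() above plus a dot() function with the obvious meaning:

        // Sketch, not the article's code: fall back to another world axis
        // as "up" whenever look and up are (nearly) colinear, then build
        // the view matrix exactly as before.
        Matrix4 LookAtSafe( Vector3 eye, Vector3 target, Vector3 up )
        {
            Vector3 look = normal( target - eye );
            if ( fabs( dot( look, up ) ) > 0.999f )
                up = Vector3( 0, 0, 1 );  // any axis not parallel to look
            return LookAt( eye, target, up );
        }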

    Read the article

  • Ubuntu boots to terminal on start up

    - by Jules
    For a long time I've been unable to get updates due to a "repositories not found" error. Yesterday someone fixed this for me, but after installing 94 days' worth of updates my system wanted to restart. It looks like it is booting normally, but then it opens a terminal and asks for my login and password. I have tried Ctrl+Alt+F7 and startx to no avail. Here is everything that appears on screen when I turn the computer on:

        Ubuntu 10.04.4 LTS box-o-doom tty1
        box-o-doom login: julian
        password:
        last login: Sun Jul 8 10:28:02 BST tty1
        Linux box-o-doom 2.6.32-41-generic-pae #91-Ubuntu SMP Wed Jun 13 12:00:09 UTC 2012 i686 GNU/Linux
        Ubuntu 10.04.4 LTS
        Welcome to Ubuntu!
        * Documentation: http://help.ubuntu.com
        julian@box-o-doom:~$ _

    I then tried dmesg, which produced hundreds of lines, all very similar to the first line reproduced here:

        [ 9.453119] type=1505 audit(1341742405.022:10): operation="profile_replace" pid=743 name="/usr/lib/connman/scripts/dhclient-script"

    followed by this at the end:

        [ 9.475880] alloc irq_desc for 27 on node -1
        [ 9.475883] alloc kstat_irqs on node -1
        [ 9.475890] forcedeth 0000:00:07.0: irq 27 for MSI/MSI-X
        [ 9.760031] hda_codec: ALC662 rev1: BIOS auto-probing.
        [ 10.048095] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:05.0/input/input6
        [ 10.862278] ppdev: user-space parallel port driver
        [ 20.268018] eth0: no IPv6 routers present
        julian@box-o-doom:~$ _

    Results of startx (lots of text scrolls off the screen and I have no way of reading it, but everything I can see is reproduced below):

        current version of pixman: 0.16.4
        Before reporting problems, check http://wiki.x.org
        to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Sun Jul 8 12:02:23 2012
        (==) Using config file: "/etc/X11/xorg.conf"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        FATAL: Module nvidia not found.
        (EE) NVIDIA: Failed to load the NVIDIA kernel module. Please check your
        (EE) NVIDIA: system's kernel log for additional error messages.
        (EE) Failed to load module "nvidia" (module-specific error, 0)
        (EE) No drivers available.
        Fatal server error: no screens found
        Please consult the X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        ddxSigGiveUp: Closing log
        giving up.
        xinit: No such file or directory (errno 2): unable to connect to X server
        xinit: No such process (errno 3): server error
        julian@box-o-doom:~$ _

    Read the article

  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

        public class CalendarEvent
        {
            public string Title { get; set; }
            public DateTime StartTime { get; set; }
        }

        public class CalendarHelper
        {
            public static CalendarEvent[] GetUpcomingCalendarEvents(int numberofEvents)
            {
                CalendarService service = new CalendarService("youraccount");
                EventQuery query = new EventQuery();
                query.Uri = new Uri("http://www.google.com/calendar/feeds/userid/public/full");
                query.FutureEvents = true;
                query.SingleEvents = true;
                query.SortOrder = CalendarSortOrder.ascending;
                query.NumberToRetrieve = numberofEvents;
                query.ExtraParameters = "orderby=starttime";
                var events = service.Query(query);
                return (from e in events.Entries
                        select new CalendarEvent()
                        {
                            StartTime = (e as EventEntry).Times[0].StartTime,
                            Title = e.Title.Text
                        }).ToArray();
            }
        }

    There are a few special "tricks" to make this work: the "SingleEvents" flag will flatten out recurring events; "FutureEvents", "SortOrder", and the "orderby" parameters will get the upcoming events; and "NumberToRetrieve" will limit the amount coming back. I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model; this protects the rest of your code from changes to the underlying calendar source or API.
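    As a usage sketch (a hypothetical caller, not from the original post):

        // Assumes the CalendarHelper/CalendarEvent classes above.
        CalendarEvent[] events = CalendarHelper.GetUpcomingCalendarEvents(5);
        foreach (var e in events)
        {
            Console.WriteLine("{0:g}  {1}", e.StartTime, e.Title);
        }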

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem: When using transactional replication to replicate data in a one-way topology, from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology, between two or more servers, there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error, because the replicated row being inserted has a value for the identity column which already exists at the destination database.

    Solution: There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command (a sketch of this set-up follows below), and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2, and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002, and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place from the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    When these two tables receive inserts, the resulting identity values look like this:

        Server1:  1, 6, 11, 16, 21, 26...
        Server2:  2, 7, 12, 17, 22, 27...

    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues...
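    Referring back to the first method: a hedged illustration of assigning ranges (the table name and boundary values here are assumptions, not from the article):

        -- Sketch only: reseed each server's copy of the table so that new
        -- identity values continue from inside that server's assigned range.
        -- On server1 (range 1 to 1,000,000,000):
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1);
        -- On server2 (range 1,000,000,001 to 2,000,000,000):
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000001);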

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime is given the default value getdate():

        CREATE TABLE SampleTable (
          ID int identity(1,1),
          Sys_DateTime Datetime DEFAULT getdate()
        );

    We can check the relevant information in the system catalogs with the following query (note: the first column is the column name, despite the system view naming):

        SELECT sc.name ColumnName,
               dc.name DefaultName,
               dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName,
               dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id
         AND dc.parent_column_id = sc.column_id;

    The results (shown as an image in the original article) are mostly self-explanatory. The last column, is_system_named, identifies whether the default name was given by the system. As you know, in the above case, since we didn't provide any default name, the system generates one for you. The problem with these names is that they can differ from environment to environment; for example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2 (
          ID int identity(1,1),
          Sys_DateTime Datetime
        );

        ALTER TABLE SampleTable2
          ADD CONSTRAINT DF_sys_DateTime_Getdate
          DEFAULT( Getdate()) FOR Sys_DateTime;

    If we run the previous query again, the output (again shown as an image in the original) shows that the last created default has 0 for is_system_named. Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date;

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate];
        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date;
        ALTER TABLE [dbo].[SampleTable2]
          ADD CONSTRAINT [DF_sys_DateTime_Getdate]
          DEFAULT (getdate()) FOR [Sys_DateTime];

    If you have a system-named default constraint whose name can differ from environment to environment, so you cannot drop it by name as before, you can use the below code template:

        DECLARE @defaultname VARCHAR(255);
        DECLARE @executesql VARCHAR(1000);

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id
         AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME(parent_object_id) = 'SampleTable'
          AND sc.name = 'Sys_DateTime';

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname;
        EXEC( @executesql);
        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date;
        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime];

    Read the article

  • USB drives not recognized all of a sudden (usb_storage not loaded; lsmod does not report usb_storage)

    - by Siddharth
    I have tried most of the advice on Ask Ubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.

    sudo lsusb reports:

        Bus ... (4 lines skipped)
        Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
        Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
        Bus 001 Device 005: ID 8564:1000

    sudo dmesg | tail reports:

        [ 69.567948] usb 1-4: USB disconnect, device number 4
        [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
        [ 74.240484] Initializing USB Mass Storage driver...
        [ 74.256033] scsi5 : usb-storage 1-6:1.0
        [ 74.256145] usbcore: registered new interface driver usb-storage
        [ 74.256147] USB Mass Storage support registered.
        [ 74.257290] usbcore: deregistering interface driver usb-storage

    fdisk -l reports:

        Device Boot      Start        End     Blocks  Id System
        /dev/sda1 *       2048  972656639  486327296  83 Linux
        /dev/sda2    972658686  976771071    2056193   5 Extended
        /dev/sda5    972658688  976771071    2056192  82 Linux swap / Solaris

    I think I need steps to install the usb_storage module and get it working.

    Edit: I tried sudo modprobe -v usb-storage, which reports:

        insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko

    Edit:

        jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev
        monitor will print the received events for:
        UDEV - the event which udev sends out after rule processing
        UDEV [4757.144372] add /module/usb_storage (module)
        UDEV [4757.146558] remove /module/usb_storage (module)
        UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb)
        UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb)
        UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)
        UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)

    Narrowing down more: it seems like I need usb_storage to load as a module.

        jsiddharth@siddharth-desktop:~$ lsmod | grep usb
        usbserial 37201 0
        usbhid 41937 0
        hid 77428 1 usbhid

    Still no USB storage driver is loaded, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful.

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure such as:

        public class TimeTable
        {
            public int Id { get; set; }
            public List<Run> Runs { get; set; }
        }

        public class Run
        {
            public List<Stop> Stops { get; set; }
            public int RunId { get; set; }
        }

        public class Stop
        {
            public int StationId { get; set; }
            public TimeSpan? StopTime { get; set; }
            public bool IsStop { get; set; }
        }

    We have a list of runs that operate against a particular line (the TimeTable class). Further, while we have a set collection of stations that are on a line, not all runs stop at all stations (that is, IsStop would be false and StopTime would be null). Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database; the data structure is used only to load the timetable from its source and to persist it to the database.

    We are now receiving an updated timetable. The updated timetable may or may not have any changes in it; we don't know, and are not told, whether any changes are present. What I would like to do is perform a compare for each run in an efficient manner. I don't want to simply replace each run. Instead, I want a background task that runs periodically, downloads the updated timetable dataset, and compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place.

    I was initially thinking of some sort of checksum process where I could, for example, load both runs (that is, the one from the newly received timetable and the one that has been persisted to the database) into the data structure, add up all the hour components of StopTime and all the minute components of StopTime, and compare the results (i.e. both the sum of hours and the sum of minutes would be the same, with differences introduced if a stop time is changed, a stop deleted or a new stop added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem in that, for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net zero change. Or am I over-thinking this, and would it just be simpler to brute-force check all stops to ensure that (a) the updated run stops at the same stations, and (b) each stop is at the same time?
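    The poster's own example shows why summed checksums are unsafe: offsetting changes cancel out. A hedged sketch of the brute-force alternative, assuming the classes above and that each run lists its stops in the same station order:

        // Illustrative sketch only: a structural stop-by-stop comparison,
        // which cannot be fooled by offsetting time changes.
        using System.Linq;

        static bool RunsMatch(Run current, Run updated)
        {
            if (current.Stops.Count != updated.Stops.Count)
                return false;                       // a stop was added or deleted

            return current.Stops.Zip(updated.Stops, (a, b) =>
                    a.StationId == b.StationId &&   // same station sequence
                    a.IsStop == b.IsStop &&         // same stopping pattern
                    a.StopTime == b.StopTime)       // same times (nullable-safe)
                .All(same => same);
        }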

    Read the article

  • Social Analytics and the Customer

    - by David Dorf
    Many successful retailers put the customer at the center of everything they do, so it's important that the customer is modeled correctly across all their systems. The path to omni-channel starts and ends with the customer, so at ARTS our next big project is focused on ensuring a consistent representation of customers across our transactional data model, data warehouse model, and XML schemas. Further, we've started a new whitepaper that describes how Big Data and social media analytics should be leveraged by retailers to add an additional level of customer insight. Let's start by taking a closer look at the meaning of social analytics. Here's my definition:

    Social analytics, in the retail context, describes the analysis of data obtained from social media sources in an effort to better comprehend and interact with the community of consumers. This discipline seeks to understand what's being said by the community about brands and products ("monitoring"), as well as to understand the behaviors of those in the community ("profiling"). The results are used to reinforce the brand image, improve product decisions, and better focus marketing, all of which lead to increased sales.

    To help illustrate the facets of social analytics, I drew the diagram below, which was originally published by Retail Touchpoints. (The diagram is not reproduced in this listing.)

    There are lots of tools on the market that allow retailers to monitor social media for brand and product mentions. These include analysis of sentiment, reach, share of voice, engagement, etc. When your brand is mentioned, good or bad, it's an opportunity to engage with the customer and possibly lead to a sale. Because products are not always unique, it's much more difficult to monitor product mentions, but detecting product trends early can help a retailer make better merchandising decisions, especially in fashion.

    Once a retailer understands what's being said, the next step is to learn more about who's saying it. That involves profiling customers beyond simple demographics to understand their motivations. Much can be learned from patterns, and even more when customers voluntarily share their data. Knowing that a customer is passionate about, for example, mountain biking allows the retailer to make relevant offers on helmets, ask for opinions on hydration, and help spread marketing messages.

    Social analytics has many facets that benefit retailers, some of which are easy but many of which are hard. It's important for the CMO and CIO to work closely together to plan for these capabilities and monitor the maturity of tools on the market. This is an area that will separate winners from losers.

    Read the article

  • JCP EC Nomination Materials for 2012

    - by heathervc
    The nomination period of the 2012 Annual JCP EC Elections will begin at the end of September 2012. The JCP will accept self-nominations for 2 seats on what will become the merged JCP EC, starting 28 September, with the nomination period ending on Thursday, 11 October. JCP Members (JSPA 2 primary contacts) will receive messages with instructions for nominating, along with their login credentials, via email. You will need this credential information to log in and complete the nomination.

    The JCP EC Special Election schedule is posted online in the JCP calendar; highlights are below:

    Nominations for elected seats: 28 September - 11 October
    Ballot (ratified and elected): 16 - 29 October
    New members take office: 13 November

    The ballot with nominees for ratified and nominated seats begins on 16 October, and the results will be available on jcp.org on 30 October. If you are attending JavaOne 2012 in San Francisco, there are several events happening that you may be interested in attending, in particular the following BOF session:

    Meet the JCP Executive Committee Candidates
    Session ID: BOF6307
    Location: Hilton San Francisco - Golden Gate 3/4/5
    Date and Time: 10/2/12, 4:30 PM - 5:15 PM

    We will also be hosting a call for all of the candidates following the nomination period. The following information is required for self-nomination:

    1) Contact information/biography. Each EC seat is represented by two people, a primary and an alternate representative. Provide the following information for each representative: name, title, email address, mailing address, phone number, fax, a brief biography (3-5 sentences/~100-200 words) for the primary contact, and a photograph (jpg format preferred, head-only shot) for the primary contact. Bios and photos for the current EC members are posted here:
    http://jcp.org/en/press/news/ec-feature_ME
    http://jcp.org/en/press/news/ec-feature_SE

    2) Qualification statement. A brief (2-3 paragraph) description of your qualifications for an EC seat; this is a qualification statement for the organization you represent. It should include the value and perspective you bring to the EC, your interests in the JCP program, as well as a summary of your current or planned participation in the JCP program (your entire organization): JSRs, participation on Expert Groups, meetings/events attended, etc. This statement will appear on the ballot and should convince community members to vote for you, so please include relevant information about your experience within the JCP program and your investments in Java technology. A few sample qualification statements are available here.

    3) Position paper. One of the pieces of information we make available to the JCP membership for voting purposes is a position paper. If you would like to provide this type of information for the ballot, please prepare it in PDF format for posting. This would be more detail on the areas you would focus on during your tenure on the JCP EC. You can read more about some of the topics under discussion in the EC here, including links to JCP.Next materials.

    If you have an interest in participating in the JCP EC, please start preparing these materials now. We look forward to a successful election process.

    Read the article

  • apt-get install is not able to access /etc

    - by HorusKol
    I put together an Ubuntu 12.04 server a couple of weeks ago and everything seemed fine until this morning. Suddenly, I'm having trouble installing new packages. At first I thought there was something wrong with tinyproxy, so I tried installing squid instead; however, I get similar results:

        Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
        ...
        /var/lib/dpkg/info/squid3.postinst: 1: /var/lib/dpkg/info/squid3.postinst: cannot open /etc/squid3/squid.conf: No such file

    It seems that apt-get is not creating the configuration files needed for these programs. I haven't modified any configuration or user groups since the last successful update/install of packages. /etc is present and is populated with a nice healthy tree of configuration files. It is owned and grouped to root, and has the permissions drwxr-xr-x; all the files and folders inside seem to be fine too, as far as I can tell. I've even been able to edit and save a couple as sudo.

    Full output from installing tinyproxy:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          tinyproxy
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/61.6 kB of archives.
        After this operation, 201 kB of additional disk space will be used.
        Selecting previously unselected package tinyproxy.
        (Reading database ... 58916 files and directories currently installed.)
        Unpacking tinyproxy (from .../tinyproxy_1.8.3-1_amd64.deb) ...
        Processing triggers for ureadahead ...
        Processing triggers for man-db ...
        Setting up tinyproxy (1.8.3-1) ...
        Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
        invoke-rc.d: initscript tinyproxy, action "start" failed.
        dpkg: error processing tinyproxy (--configure):
          subprocess installed post-installation script returned error exit status 70
        Errors were encountered while processing:
          tinyproxy
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Result of strace after installation:

        18467 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
        18467 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
        18467 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
        18467 open("/etc/tinyproxy.conf", O_RDONLY) = -1 ENOENT (No such file or directory)

    Read the article

  • Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation

    - by Juan Camilo Ruiz
    This new post in the series on ADF integration with Oracle E-Business Suite was written by Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team.

    Based on a previous post in the series, a reader asked what to do if you have an existing responsibility assigned to lots of users, instead of the UMX role that the Oracle E-Business Suite SDK for Java JAAS Implementation requires. It would be tedious to assign a new role directly to hundreds or thousands of users, so naturally we'd like to avoid that if possible.

    Most people don't know this, but it's possible to assign a UMX role to a responsibility in Oracle User Management. Once you do that, users with your responsibility will all inherit your UMX role automatically. You can then proceed with using your UMX role with JAAS for ADF. Here is how to assign a UMX role to a responsibility in Oracle E-Business Suite:

    1. In the User Management responsibility, go to the Roles & Role Inheritance page.
    2. Search for the responsibility you want.
    3. In the search results table, click the "View In Hierarchy" icon for your responsibility. Note that the codes for responsibilities start with FND_RESP, while the codes for roles start with UMX.
    4. In the Role Inheritance Hierarchy, click the Add Node icon (green plus +) for your responsibility.
    5. Now you will see what appears to be the same page again, but it is a little different (note the text at the top telling you the role you select will be inherited). This time, either search or expand nodes until you find your custom UMX role. Use the Quick Select to choose that role.
    6. You will be sent back to the first screen, where you should see a confirmation message at the top.
    7. On the same page you can verify that the custom UMX role is underneath the responsibility. You may need to expand one or more nodes to see the UMX role under the responsibility. You might see some other roles that have been inherited as well.

    Now that your users have the UMX role, you can test that the UMX role is being passed through to your ADF application through the Oracle E-Business Suite SDK for Java JAAS feature. Happy coding!

    Read the article

  • Oracle Joins XBRL US To Help Drive Adoption

    - by Theresa Hickman
    Recently, Oracle joined XBRL US, the national consortium for XML business reporting standards, to stay ahead of the technology and help increase XBRL adoption by U.S. companies by 2011. Large accelerated filers were mandated to use XBRL starting in 2009; other large filers started in 2010, and all other public companies must comply in June 2011. Here is a list of organizations that recently joined XBRL US: Oracle, Citi, Federal Filings LLC, Edgar Agents LLC, and XSP.

    For those of you who have been living under a rock, XBRL stands for eXtensible Business Reporting Language. Simply put, it's reporting electronically. Just as PDFs or spreadsheets are one type of output, XBRL is another output option in electronic form. Right now, the transition to XBRL means extra work for publicly traded companies because they need to file their financial statements in both EDGAR and XBRL formats. Once the SEC phases out the EDGAR system, XBRL will be the primary way to deliver financial information with footnotes and supporting schedules to multiple audiences, without having to re-key or reformat the information. A single XBRL document can be converted to printed output, published via the Web, fed into an SEC database (e.g. EDGAR) or forwarded to a creditor for analysis.

    Question: How does Oracle support XBRL reporting?
    Answer: The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which is part of Oracle's Hyperion Financial Close Suite along with Hyperion Financial Management, Hyperion Financial Data Quality Management and Hyperion Financial Close Management. Hyperion Disclosure Management supports the authoring of financial filings in Microsoft Office, with "hot links" to reports and data stored in Hyperion Financial Management or Oracle Essbase. It supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10-K and 10-Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results.

    Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger?
    Answer: No problem; all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps:

    1. Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management.
    2. Publish your financial statements out of general ledger to Excel.
    3. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.

    For more information and some interesting background on XBRL, I recommend reading What You Need To Know About XBRL, written by our EPM expert, John O'Rourke.

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals.

    Key findings include:

    - Corporate budgets increase, but trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year's spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.
    - Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.
    - Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.
    - Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.

    IOUG recommendations: The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following:

    - Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.
    - Get business buy-in and support. Data security only works if it is backed by executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.
    - Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.

    Read the IOUG Data Security Survey now.

    Read the article

  • How to sort a ListView control by a column in Visual C#

    - by bconlon
    Microsoft provides an article of the same name (previously published as Q319401), which presents a nice class, ListViewColumnSorter, for sorting a standard ListView when the user clicks a column header. This is very useful for string values; however, for numeric or DateTime data it gives odd results. E.g. 100 would come before 99 in an ascending sort, because the string compare sees 1 < 9. So my challenge was to allow other types to be sorted.

    This turned out to be fairly simple, as I just needed to create an inner class in ListViewColumnSorter which extends the .NET CaseInsensitiveComparer class, and then use this as the ObjectCompare member's type. Note: ideally we would be able to use IComparer as the member's type, but the Compare method is not virtual in CaseInsensitiveComparer, so we have to use the exact type:

        public class ListViewColumnSorter : IComparer
        {
            // was: private CaseInsensitiveComparer ObjectCompare;
            private MyComparer ObjectCompare;
            // ... rest of Microsoft's class implementation ...
        }

    Here is my private inner comparer class. Note the 'new int Compare', as Compare is not virtual, and also note we pass the values to the base Compare as the correct type (e.g. Decimal, DateTime) so they compare correctly:

        private class MyComparer : CaseInsensitiveComparer
        {
            public new int Compare(object x, object y)
            {
                try
                {
                    string s1 = x.ToString();
                    string s2 = y.ToString();

                    // check for a numeric column
                    decimal n1, n2;
                    if (Decimal.TryParse(s1, out n1) && Decimal.TryParse(s2, out n2))
                        return base.Compare(n1, n2);
                    else
                    {
                        // check for a date column
                        DateTime d1, d2;
                        if (DateTime.TryParse(s1, out d1) && DateTime.TryParse(s2, out d2))
                            return base.Compare(d1, d2);
                    }
                }
                catch (ArgumentException) { }

                // just use the base string compare
                return base.Compare(x, y);
            }
        }

    You could extend this for other types, even custom classes, as long as they support IComparer. Microsoft also has another article, How to: Sort a GridView Column When a Header Is Clicked, which shows this for WPF and looks conceptually very similar. I need to test it out to see if it handles non-string types.
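    As a usage sketch: the SortColumn and Order property names below follow the class from the Microsoft Q319401 article; the event wiring itself is illustrative, not from the post:

        // Wiring the sorter to a WinForms ListView, assuming the
        // ListViewColumnSorter class from the Microsoft article above.
        ListViewColumnSorter sorter = new ListViewColumnSorter();
        listView1.ListViewItemSorter = sorter;
        listView1.ColumnClick += (s, e) =>
        {
            if (e.Column == sorter.SortColumn)
            {
                // Clicking the same column toggles the direction.
                sorter.Order = (sorter.Order == SortOrder.Ascending)
                    ? SortOrder.Descending
                    : SortOrder.Ascending;
            }
            else
            {
                sorter.SortColumn = e.Column;
                sorter.Order = SortOrder.Ascending;
            }
            listView1.Sort();
        };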

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. It usually comes up when data is being modified, as during retrieval of the data this is not an issue. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming:

    1) Lost update: This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the update made by the earlier transaction.

    2) Dirty reads: This problem occurs when a transaction selects data that isn't committed by another transaction, leading to reading data which may not exist when the transactions are over. Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the change. Transaction 2 has selected a row which never existed.

    3) Nonrepeatable reads: This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECTs. Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the row again, it gets a different value.

    4) Phantom reads: This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which are marked for deletion; during the same time Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting rows, there will still be rows marked to be deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with the isolation level or locking hints (e.g. NOLOCK), which can very well compromise your data integrity, leading to much larger issues in the future. Here is a quick mapping of the isolation levels to the concurrency problems (an example of setting the level for a session follows below):

        Isolation         | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads
        ------------------+-------------+-------------+---------------------+--------------
        Read Uncommitted  | Yes         | Yes         | Yes                 | Yes
        Read Committed    | No          | Yes         | Yes                 | Yes
        Repeatable Read   | No          | No          | No                  | Yes
        Snapshot          | No          | No          | No                  | No
        Serializable      | No          | No          | No                  | No

    I hope this small 400-word article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
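    As a hedged illustration of the mapping above (the table and values here are assumptions, not from the article), the isolation level is set per session in T-SQL, trading concurrency for consistency:

        -- Sketch only: under REPEATABLE READ, rows read inside the
        -- transaction cannot be changed by others until we commit, so the
        -- second SELECT returns the same value as the first...
        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
        BEGIN TRANSACTION;
            SELECT Quantity FROM dbo.Inventory WHERE ProductId = 42;
            SELECT Quantity FROM dbo.Inventory WHERE ProductId = 42;
            -- ...but, per the table above, phantom rows can still appear
            -- in range queries; only SERIALIZABLE/SNAPSHOT prevent that.
        COMMIT TRANSACTION;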

    Read the article

  • Webcast series for Java developers and those interested in WebLogic starts on 10 February: WebLogic Developer - Get the latest on Oracle WebLogic Server and Java EE 6

    - by Thomas Leopold
    Accelerate Your Development with Oracle WebLogic Suite

    Many organisations are reducing travel, conference, and training budgets for their developers without any change to the results expected of those developers. So how can you keep up with the latest developments? By receiving training, delivered free of charge, at your desk! Join us during February and March for a series of online events designed and run by the development team at Oracle. Learn how Oracle WebLogic Suite enables a whole new level of productivity for enterprise developers.

    Virtual Developer Day - 10th February
    Starting with our Virtual Developer Day on 10th February, join us for a blend of hands-on labs, live chat and presentations covering the latest on WebLogic, Java EE 6 and the programming tenets that have made it a true platform breakthrough.

    Weekly WebLogic Webcasts from 17th February to 17th March
    Afterwards, join us every week from 17th February to 17th March for our weekly one-hour webcasts, where we will show you how to build an application from the ground up using Java and JEE technologies. Presented by the engineering team for WebLogic, these webcasts will be of great value to developers and architects, not just those already using WebLogic.

    For registration, full session abstracts and the schedule, please click here. Don't miss out! Register now to join our virtual events and keep up with all the latest developments.

    Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved.

    Read the article

< Previous Page | 381 382 383 384 385 386 387 388 389 390 391 392  | Next Page >