Search Results


  • BizTalk 2009 - Custom Functoid Wizard

    - by StuartBrierley
    When creating BizTalk maps you may find that there are times when you need to perform tasks that the standard functoids do not cover. At other times you may find yourself repeating a pattern of standard functoids over and over again, adding visual complexity to an otherwise simple process. In these cases you may find it preferable to create your own custom functoids. In the past I have created a number of custom functoids from scratch, but recently I decided to try out the Custom Functoid Wizard for BizTalk 2009. After downloading and installing the wizard, start Visual Studio and select to create a new BizTalk Server Functoid Project. Following the splash screen you will be presented with the General Properties screen, where you can set the class name, namespace, assembly name and strong name key file. The next screen covers the first set of properties for the functoid. First of all is the functoid ID; this must be a value above 6000. You should also then set the name, tooltip and description of the functoid. The name will appear in the Visual Studio toolbox and the tooltip on hover in the toolbox. The description will be shown when you configure the functoid inputs when using it in a map; as such it should provide a decent level of information to allow the functoid to be used. Next you must set the category, exception message, icon and implementation language. The category will affect the positioning of the functoid within the toolbox and also some of the behaviours of the functoid. We must then define the parameters and connections for our new functoid. Here you can define the names and types of your input parameters along with the minimum and maximum number of input connections. You will also need to define the types of connections accepted and the output type of the functoid. Finally you can click Finish and your custom functoid project will be created. The results of this process can be seen in Solution Explorer, where you will see that a project, a functoid class file and a resource file have been created for you. If you open the class file you will see that the following code has been created for you. The constructor (via "base" and the calls that follow) sets all the properties that you previously detailed in the Custom Functoid Wizard:

        public TestFunctoids() : base()
        {
            int functoidID;
            // This has to be a number greater than 6000
            functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
            this.ID = functoidID;

            // Set resource strings, bitmaps
            SetupResourceAssembly(ResourceName, Assembly.GetExecutingAssembly());
            SetName("FunctoidName");
            SetTooltip("FunctoidToolTip");
            SetDescription("FunctoidDescription");
            SetBitmap("FunctoidBitmap");

            // Minimum and maximum parameters that the functoid accepts
            this.SetMinParams(2);
            this.SetMaxParams(2);

            /// Function name that needs to be called when this Functoid is invoked.
            /// Put this in GAC.
            SetExternalFunctionName(GetType().Assembly.FullName,
                "MyCompany.BizTalk.Functoids.TestFuntoids.TestFunctoids", "Execute");

            // Category for this functoid.
            this.Category = FunctoidCategory.String;

            // Input and output connection type
            this.OutputConnectionType = ConnectionType.AllExceptRecord;
            AddInputConnectionType(ConnectionType.AllExceptRecord);
        }

    The "Execute" function provides a skeleton containing the code to be executed by your new functoid. The inputs and outputs should match those you defined in the Custom Functoid Wizard.
        public System.Int32 Execute(System.Int32 Cool)
        {
            ResourceManager resmgr = new ResourceManager(ResourceName, Assembly.GetExecutingAssembly());
            try
            {
                // TODO: Implement Functoid Logic
            }
            catch (Exception e)
            {
                throw new Exception(resmgr.GetString("FunctoidException"), e);
            }
        }

    Opening the resource file you will see some of the string values that you defined in the Custom Functoid Wizard - Name, Tooltip, Description and Exception. You can also select to look at the image resources; this will display the embedded icon image for the functoid. To change it, right-click the icon and select "Import from File". Once you have completed the skeleton code you can try out your functoid. To do this you will need to build the project, copy the compiled DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then refresh the toolbox in Visual Studio.
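    To make the skeleton concrete, here is a purely hypothetical illustration of a completed Execute method for a functoid that concatenates two string inputs. The parameter names and concatenation logic are mine, not wizard output, and assume the functoid was defined in the wizard with two string inputs and a string output:

        public System.String Execute(System.String first, System.String second)
        {
            ResourceManager resmgr = new ResourceManager(ResourceName, Assembly.GetExecutingAssembly());
            try
            {
                // Example logic only: join the two inputs with a single space.
                return first + " " + second;
            }
            catch (Exception e)
            {
                // Mirror the wizard's skeleton: wrap any failure in the
                // functoid's configured exception message.
                throw new Exception(resmgr.GetString("FunctoidException"), e);
            }
        }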

    Read the article

  • Common reasons for the 'Sys is undefined' error in ASP.NET Ajax applications

      In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax-enabled web site or application (there are almost one million results on Google for that phrase). Where does it come from? In every Ajax web page's source you will see code like this:

        <script type="text/javascript">
        //<![CDATA[
        Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1'));
        Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90);
        //]]>
        </script>

    This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when the code executes, you get the 'Sys is undefined' error. Here are the most common reasons and solutions for that problem:

    1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site and the 'Sys is undefined' message pops up. To fix it you need to follow the steps described in the Configuring ASP.NET Ajax article (check the topic called Adding ASP.NET AJAX Configuration Elements to an Existing Web Site) or simply create the Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?

    2. Authentication - as the website denies access to all pages to unauthorized users, access to the Telerik.Web.UI.WebResource.axd handler is unauthorized (this is the default handler of RadScriptManager). This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it, add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd for all users, like:

        <configuration>
          ...
          <location path="Telerik.Web.UI.WebResource.axd">
            <system.web>
              <authorization>
                <allow users="*"/>
              </authorization>
            </system.web>
          </location>
          ...
        </configuration>

    Note that access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager you will not face this problem. The authentication problem does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManager's combined script.

    3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well-known bug in older versions of IE6 which lose the first 2,048 bytes of data sent back from a Web server that uses HTTP compression. The latest versions of RadScriptManager do not compress the output at all if the client is IE6, but in previous versions you need to manually disable the output compression to prevent the error. So, if you get the 'Sys is undefined' error in IE6, update to the latest version of RadControls or simply disable the output compression.

    4. Requests to the *.axd files return Error Code 404 - Not Found. This can be fixed easily: check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed, and also check that the "Verify that file exists" checkbox is unchecked (click the Edit button for the handler mapping to check).
    More information can be found in our troubleshooting article and in the ASP.NET QA team blog post.

    5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.

    6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still have it in mind. Scott Guthrie describes it in more detail.

    In the above points I mentioned several times the terms web resources, javascript output and compressed script. If you want to find out more about these, please see the Web Resources Demystified series by my friend and colleague Atanas Korchev. I hope that one of the above solutions will help you get rid of the 'Sys is undefined' error.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards | KVS Size (records) | Clients | Threads   | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
        2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                            | 0.76/1/3                     | 3.08/8/35
        4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                            | 0.79/1/4                     | 3.82/16/45
        8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
        10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                          | 0.88/2/6                     | 4.47/23/53

    Read the article

  • Inside BackgroundWorker

    - by João Angelo
    The BackgroundWorker is a reusable component that can be used in different contexts, but sometimes with unexpected results. If you are like me, you have mostly used background workers while doing Windows Forms development due to the flexibility they offer for running a background task. They support cancellation and raise events that signal progress updates and task completion. When used in Windows Forms, these events (ProgressChanged and RunWorkerCompleted) get executed back on the UI thread, where you can freely access your form controls. However, having the progress changed and worker completed events invoked on the thread that started the background worker is not something you get directly from the BackgroundWorker, but instead from the fact that you are running in the context of Windows Forms. Take the following example that illustrates the use of a worker in three different scenarios:
    - Console Application or Windows Service;
    - Windows Forms;
    - WPF.

        using System;
        using System.ComponentModel;
        using System.Threading;
        using System.Windows.Forms;
        using System.Windows.Threading;

        class Program
        {
            static AutoResetEvent Synch = new AutoResetEvent(false);

            static void Main()
            {
                var bw1 = new BackgroundWorker();
                var bw2 = new BackgroundWorker();
                var bw3 = new BackgroundWorker();

                Console.WriteLine("DEFAULT");
                var unspecializedThread = new Thread(() =>
                {
                    OutputCaller(1);
                    SynchronizationContext.SetSynchronizationContext(
                        new SynchronizationContext());
                    bw1.DoWork += (sender, e) => OutputWork(1);
                    bw1.RunWorkerCompleted += (sender, e) => OutputCompleted(1);
                    // Uses default SynchronizationContext
                    bw1.RunWorkerAsync();
                });
                unspecializedThread.IsBackground = true;
                unspecializedThread.Start();
                Synch.WaitOne();
                Console.WriteLine();

                Console.WriteLine("WINDOWS FORMS");
                var windowsFormsThread = new Thread(() =>
                {
                    OutputCaller(2);
                    SynchronizationContext.SetSynchronizationContext(
                        new WindowsFormsSynchronizationContext());
                    bw2.DoWork += (sender, e) => OutputWork(2);
                    bw2.RunWorkerCompleted += (sender, e) => OutputCompleted(2);
                    // Uses WindowsFormsSynchronizationContext
                    bw2.RunWorkerAsync();
                    Application.Run();
                });
                windowsFormsThread.IsBackground = true;
                windowsFormsThread.SetApartmentState(ApartmentState.STA);
                windowsFormsThread.Start();
                Synch.WaitOne();
                Console.WriteLine();

                Console.WriteLine("WPF");
                var wpfThread = new Thread(() =>
                {
                    OutputCaller(3);
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext());
                    bw3.DoWork += (sender, e) => OutputWork(3);
                    bw3.RunWorkerCompleted += (sender, e) => OutputCompleted(3);
                    // Uses DispatcherSynchronizationContext
                    bw3.RunWorkerAsync();
                    Dispatcher.Run();
                });
                wpfThread.IsBackground = true;
                wpfThread.SetApartmentState(ApartmentState.STA);
                wpfThread.Start();
                Synch.WaitOne();
            }

            static void OutputCaller(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId, "RunWorkerAsync".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
            }

            static void OutputWork(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId, "DoWork".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
            }

            static void OutputCompleted(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId, "RunWorkerCompleted".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
                Synch.Set();
            }
        }

    Output:

        //DEFAULT
        //bw1.RunWorkerAsync     | Thread: 3 | IsThreadPool: False
        //bw1.DoWork             | Thread: 4 | IsThreadPool: True
        //bw1.RunWorkerCompleted | Thread: 5 | IsThreadPool: True

        //WINDOWS FORMS
        //bw2.RunWorkerAsync     | Thread: 6 | IsThreadPool: False
        //bw2.DoWork             | Thread: 5 | IsThreadPool: True
        //bw2.RunWorkerCompleted | Thread: 6 | IsThreadPool: False

        //WPF
        //bw3.RunWorkerAsync     | Thread: 7 | IsThreadPool: False
        //bw3.DoWork             | Thread: 5 | IsThreadPool: True
        //bw3.RunWorkerCompleted | Thread: 7 | IsThreadPool: False

    As you can see, the output between the first and the remaining scenarios is somewhat different. While in Windows Forms and WPF the worker completed event runs on the thread that called RunWorkerAsync, in the first scenario the same event runs on any thread available in the thread pool. Another scenario where you can get the first behavior, even in Windows Forms or WPF, is if you chain the creation of background workers, that is, you create a second worker in the DoWork event handler of an already running worker. Since DoWork executes on a thread from the pool, the second worker will use the default synchronization context and its completed event will not run on the UI thread.
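    To illustrate that last point, here is a minimal sketch of the chained-worker pitfall. It is my own example, not from the original post, and assumes a Windows Forms project where the outer worker is started from the UI thread:

        // bwInner is created inside bwOuter's DoWork, which runs on a
        // thread-pool thread with the default SynchronizationContext, so
        // bwInner's RunWorkerCompleted is NOT marshaled back to the UI thread.
        var bwOuter = new BackgroundWorker();
        bwOuter.DoWork += (s, e) =>
        {
            var bwInner = new BackgroundWorker();
            bwInner.DoWork += (s2, e2) => { /* background work */ };
            bwInner.RunWorkerCompleted += (s2, e2) =>
            {
                // Runs on a thread-pool thread, not the UI thread:
                // touching form controls here would throw.
            };
            bwInner.RunWorkerAsync();
        };
        bwOuter.RunWorkerAsync(); // started from the UI thread, so bwOuter's own
                                  // RunWorkerCompleted would still reach the UI thread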

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control what WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open - anyone can do anything. As soon as you add one policy for a permission, then all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.

    There are three use cases:

        API                          | Use database credentials | User for permission checking
        getConnection()              | true or false            | Current WLS user
        getConnection(user,password) | false                    | User/password from API
        getConnection(user,password) | true                     | Current WLS user

    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured.
    - Database users scott; jdbcqa; jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic Data Source objects are configured.
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    The results for each combination of use-database-credentials and identity-based-connection-pooling-enabled follow; each entry shows the database user/identity, Client ID, and proxy user established, or the failure.

    use-database-credentials true, identity-based true:
    - getConnection(scott,***): Identity scott, Client weblogic, Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials false, identity-based true:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott, Client scott, Proxy null

    use-database-credentials true, identity-based false:
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials false, identity-based false:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa, Client scott, Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this were not the Oracle thin driver, the one case with the non-null Proxy above would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver. When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (a property on the connection or a real user), performance, coordination of the various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • How to get distinct values from the List<T> with LINQ

    - by Vincent Maverick Durano
    Recently I was working with data from a generic List<T> and one of my objectives was to get the distinct values found in the List. Consider this simple class that holds the following properties:

        public class Product
        {
            public string Make { get; set; }
            public string Model { get; set; }
        }

    Now in the page code-behind we will create a list of products by doing the following:

        private List<Product> GetProducts()
        {
            List<Product> products = new List<Product>();

            Product p = new Product();
            p.Make = "Samsung"; p.Model = "Galaxy S 1"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 2"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy Note"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4s"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Sensation"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Desire"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);

            return products;
        }

    And then let's bind the products to the GridView:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts();
                Gridview1.DataBind();
            }
        }

    Running the code will display the product list, including the duplicate rows, in the page. Now what I want is to get the distinct row values from the list. So what I did was to use the LINQ Distinct operator, and unfortunately it didn't work. To make it work you must use the overload of the Distinct operator that takes an equality comparer. So I added this IEqualityComparer<T> class to compare values:

        class ProductComparer : IEqualityComparer<Product>
        {
            public bool Equals(Product x, Product y)
            {
                if (Object.ReferenceEquals(x, y)) return true;
                if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
                return x.Make == y.Make && x.Model == y.Model;
            }

            public int GetHashCode(Product product)
            {
                if (Object.ReferenceEquals(product, null)) return 0;
                int hashProductName = product.Make == null ? 0 : product.Make.GetHashCode();
                int hashProductCode = product.Model.GetHashCode();
                return hashProductName ^ hashProductCode;
            }
        }

    After that you can bind the GridView like this:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts().Distinct(new ProductComparer());
                Gridview1.DataBind();
            }
        }

    Running the page now gives the desired output: the duplicate rows are eliminated from the GridView. Now what if we only want the distinct values for a certain field? For example, I want to get the distinct "Make" values such as Samsung, Apple, HTC, Nokia and Sony Ericsson and populate them in a DropDownList control for filtering purposes. I was hoping that the Distinct operator had an overload that could compare values based on a property, like GetProducts().Distinct(o => o.PropertyToCompare), but unfortunately it doesn't provide that overload, so what I did as a workaround was to use the GroupBy, Select and First LINQ query operators to achieve what I want. Here's the code to get the distinct values of a certain field:
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DropDownList1.DataSource = GetProducts().GroupBy(o => o.Make).Select(o => o.First());
                DropDownList1.DataTextField = "Make";
                DropDownList1.DataValueField = "Model";
                DropDownList1.DataBind();
            }
        }

    Running the code will populate the DropDownList with the distinct Make values. That's it! I hope someone finds this post useful!
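    As a side note, that GroupBy/Select/First workaround can be wrapped in a small reusable extension method. The sketch below is my own helper, not part of the original post (a DistinctBy operator is not among the framework's built-in LINQ operators here):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class EnumerableExtensions
        {
            // Returns the first element of each group keyed by keySelector,
            // i.e. the distinct elements by that property.
            public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
                this IEnumerable<TSource> source,
                Func<TSource, TKey> keySelector)
            {
                return source.GroupBy(keySelector).Select(g => g.First());
            }
        }

    With this in place, the binding above could read GetProducts().DistinctBy(p => p.Make).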

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
    Storage is inexpensive, and that second copy can be used not only for HA but for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still retain the responsibility to ensure you have it. Since each SaaS offering is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Social Engagement: One Size Doesn't Fit Anyone

    - by Mike Stiles
    The key to achieving meaningful social engagement is to know who you’re talking to, know what they like, and consistently deliver that kind of material to them. Every magazine for women knows this. When you read the article titles promoted on their covers, there’s no mistaking for whom that magazine is intended. And yet, confusion still reigns at many brands as to exactly whom they want to talk to, what those people want to hear, and what kind of content they should be creating for them. In most instances, the root problem is brands want to be all things to all people. Their target audience…the world! Good luck with that. It’s 2012, the age of aggregation and custom content delivery. To cope with the modern day barrage of information, people have constructed technological filters so that content they regard as being “for them” is mostly what gets through. Even if your brand is for men and women, young and old, you may want to consider social properties that divide men from women, and young from old. Yes, a man might find something in a women’s magazine that interests him. But that doesn’t mean he’s going to subscribe to it, or buy even one issue. In fact he’ll probably never see the article he’d otherwise be interested in, because in his mind, “This isn’t for me.” It wasn’t packaged for him. News flash: men and women are different. So it’s a tall order to craft your Facebook Page or Twitter handle to simultaneously exude the motivators for both. The Harris Interactive study “2012 Connecting and Communicating Online: State of Social Media” sheds light on the differing social behaviors and drivers:
    - 65% of women (vs. 59% of men) stay glued to social because they don’t want to miss anything.
    - 25% of women check social when they wake up, before they check email. Only 18% of men check social before e-mail.
    - 95% of women surveyed belong to Facebook vs. 86% of men.
    - 67% of women log in to Facebook once a day or more vs. 54% of men.
    - Conventional wisdom is Pinterest is mostly a woman-thing, right? That may be true for viewing, but not true for sharing. Men are actually more likely to share on Pinterest than women, 23% to 10%.
    - The sharing divide extends to YouTube. 68% of women use it mainly for consumption, as opposed to 52% of men.
    - Women are as likely to have a Twitter account as men, but they’re much less likely to check it often. 54% of women check it once a week compared to 2/3 of men.
    Obviously, there are some takeaways from this depending on your target. Women don’t want to miss out on anything, so serialized content might be a good idea, right? Promotional posts that lead to a big payoff could keep them hooked. Posts for women might be better served first thing in the morning. If sharing is your goal, maybe male-targeted content is more likely to get those desired shares. And maybe Twitter is a better place to aim your male-targeted content than Facebook. Some grocery stores started experimenting with male-only aisles. The results have been impressive. Why? Because while it’s true men were finding those same items in the store just fine before, now something has been created just for them. They have a place in the store where they belong. Each brand’s strategy and targets are going to differ. The point is…know who you’re talking to, know how they behave, know what they like, and deliver content using any number of social relationship management targeting tools that meets their expectations.
    If, however, you’re committed to a one-size-fits-all, “our content is for everybody” strategy (or even worse, a “this is what we want to put out and we expect everybody to love it” strategy), your content will miss the mark far more often than it hits. @mikestiles
    Photo via stock.xchng

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one at a time or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, the newly added rows to the index are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car and it is full, you continue to load some on the back seat; in the same way some storage buffer is created for your index, but once that runs out the data is then stored in other storage space and the data in your index is no longer stored in contiguous physical pages. To access the index the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending of course on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you reorganize a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: note that the DBCC DBREINDEX command will not automatically rebuild all of the indexes on a given table in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
        DBCC DBREINDEX(@tableName, ' ', 90)    -- a fill factor of 90%
        FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While the DBCC DBREINDEX command runs and rebuilds the indexes, the table becomes temporarily unavailable for use by your users until the rebuild has completed, so don't do this during production hours as it will create a shared lock on the tables, although it will allow for read-only uncommitted data reads; i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index.
    To make it a lot easier, requiring you to specify only the table name and/or index, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density; ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%; there is no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.

    Read the article

  • What is Happening vs. What is Interesting

    - by Geertjan
    Devoxx 2011 was yet another confirmation that all development everywhere is either on the web or on mobile phones. Whether you looked at the conference schedule or attended sessions or talked to speakers at any point at all, it was very clear that no development whatsoever is done anymore on the desktop. In fact, that's something Tim Bray himself told me to my face at the speakers dinner. No new developments of any kind are happening on the desktop. Everyone who is currently on the desktop is working overtime to move all of their applications to the web. They're probably also creating a small subset of their application on an Android tablet, with an even smaller subset on their Android phone. Then you scratch that monolithic surface and find some interesting results. Without naming any names, I asked one of these prominent "ah, forget about the desktop" people at the Devoxx speakers dinner (and I have a witness): "Yes, the desktop is dead, but what about air traffic control, stock trading, oil analysis, risk management applications? In fact, what about any back office application that needs to be usable across all operating systems? Here there is no concern whatsoever with 100% accessibility, which is, after all, the only thing that the web has over the desktop (except when there's a network failure, of course, or when you find yourself in the 3/4 of the world where there are bandwidth problems). There are thousands of hidden applications out there that have processing requirements, security requirements, and the requirement that they'll be available even when the network is down or even completely unavailable. Isn't that a valid use case, and aren't there thousands of applications that fall into this so-called niche category? Are you not, in fact, confusing consumer applications, which are increasingly web-based and mobile-based, with high-end corporate applications, which typically need to do massive processing, of one kind or another, for which the web and mobile worlds are completely unsuited?" And you will not believe what the reply to the above question was. (Again, I have a witness to this discussion.) But here it is: "Yes. But those applications are not interesting. I do not want to spend any of my time or work in any way on those applications. They are boring." I'm sad to say that the leaders of the software development community, including those in the Java world, either share the above opinion or are led by it. Because they find something that is not new to be boring, they move on to what is interesting and start talking like the supposedly-boring developments don't even exist. (Kind of like a rapper pretending classical music doesn't exist.) Time and time again I find myself giving Java desktop development courses (at companies, i.e., not hobbyists or students, but companies, i.e., the places where dollars are earned), where developers say to me: "The course you're giving about creating cross-platform, loosely coupled, and highly cohesive applications is really useful to us. Why do we never find information about this topic at conferences? Why can we never attend a session at a conference where the story about pluggable cross-platform Java is told? Why do we get the impression that we are uncool because we're not on the web and because we're not on a mobile phone, while the reason for that is because we're creating $1,000,000 simulation software which has nothing to gain from being on the web or on the mobile phone?" And then I say: "Because nobody knows you exist.
Because you're not submitting abstracts to conferences about your very interesting use cases. And because conferences tend to focus on what is new, which tends to be web related (especially HTML 5) or mobile related (especially Android). Because you're not taking the responsibility on yourself to tell the real stories about the real applications being developed all the time and every day. Because you yourself think your work is boring, while in fact it is fascinating. Because desktop developers are working from 9 to 5 on the desktop, in secure environments, such as banks and defense, where you can't spend time, nor have the interest in, blogging your latest tip or trick, as opposed to web developers, who tend to spend a lot of time on the web anyway and are therefore much more inclined to create buzz about the kind of work they're doing." So, next time you look at a conference program and wonder why there's no stories about large desktop development projects in the program, here's the short answer: "No one is going to put those items on the program until you start submitting those kinds of sessions. And until you start blogging. Until you start creating the buzz that the web developers have been creating around their work for the past 10 years or so. And, yes, indeed, programmers get the conference they deserve." And what about Tim Bray? Ask yourself, as Google's lead web technology evangelist, how many desktop developers do you think he talks to and, more generally, what his frame of reference is and what, clearly, he considers to be most interesting.

    Read the article

  • Why Executives Need Enterprise Project Portfolio Management: 3 Key Considerations to Drive Value Across the Organization

    - by Melissa Centurio Lopes
    By: Guy Barlow, Oracle Primavera Industry Strategy Director

    Over the last few years there has been a tremendous shift - some would say tectonic in nature - that has brought project management to the forefront of executive attention. Many factors have been driving this growing awareness, most notably the global financial crisis, heightened regulatory environments and a need to more effectively operationalize corporate strategy. Executives in India are no exception. In fact, given the phenomenal rate of progress of the country, top of mind for all executives (whether in finance, operations, IT, etc.) is the need to build capacity, ramp up production and ensure that the right resources are in place to capture growth opportunities. This applies across all industries, from asset-intensive - like oil & gas, utilities and mining - to traditional manufacturing and the public sector; services-based sectors such as the financial, telecom and life sciences segments are also part of the mix.

    However, compounding matters is a complex interplay between projects - big and small, complex and simple - as companies expand and grow both domestically and internationally. So, having a standardized, enterprise-wide solution for project portfolio management is natural. Failing to do so is akin to having two ERP systems, one to manage "large" invoices and one to manage "small" invoices. It makes no sense and provides no enterprise-wide visibility. Therefore, it is imperative for executives to understand the full range of their business commitments, the benefit to the company, current performance and associated course corrections if needed. Irrespective of industry and regardless of the use case (e.g., building a power plant, launching a new financial service or developing a new automobile), company leaders need to approach the value of enterprise project portfolio management via 3 critical areas:

    1. Greater Financial Discipline - Improving financial rigor and results through better governance and control is an imperative given today's financial uncertainty and greater investment scrutiny. For example, as India plans a US$1 trillion investment in the country's infrastructure, how do companies ensure costs are managed? How do you control cash flow? Can you easily report this to stakeholders?

    2. Improved Operational Excellence - Increase efficiency and reduce costs through robust collaboration and integration. Upwards of 66% of cost variances are driven by poor supplier collaboration. As you execute initiatives, do you have visibility into the performance of your supply base? How are they integrated into the broader program plan?

    3. Enhanced Risk Mitigation - Manage and react to uncertainty through improved transparency and contingency planning. What happens if you're faced with a skills shortage? How do you plan and account for geo-political or weather-related events?

    In summary, projects are not just the delivery of a product or service to a customer inside a predetermined schedule; they often form a contractual and even moral obligation to shareholders and stakeholders alike. Hence the intimate connection between executives and projects, with the latter providing executives with the platform to demonstrate that their organization has the capabilities and competencies needed to meet and, whenever possible, exceed their customer commitments. Effectively developing and operationalizing corporate strategy is the hallmark of successful executives, and enterprise project and portfolio management allows them to achieve this goal. This article was first published in Manage India, an e-newsletter of PMI India.

    Read the article

  • XNA - Use Mouse To Rotate & Arrow Keys To Scroll A Linearly Wrapped Texture

    - by The Thing
    Using XNA I'm working on my first, relatively simple, videogame for the PC. At the moment my game window is 1024 x 768 and I have a 'Starfield' linearly wrapped background texture, 1280 x 1280 in size, whose origin has been set to its center point (width / 2, height / 2). This texture is drawn onscreen using (graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2) to place the origin in the center of the window. I want to be able to use the horizontal movement of the mouse to rotate my texture left or right, and use the arrow keys to scroll the texture in four directions. From my own related coding experiments I have found that once I rotate the texture it no longer scrolls in the direction I want; it's as if somehow the XNA framework's 'sense of direction' has been 'rotated' along with the texture. As an example of what I've described above, let's say I rotate the texture 45 degrees to the right; then pressing the up arrow key results in the texture scrolling diagonally from top-right to bottom-left. This is not what I want: regardless of the degree or direction of rotation, I want my texture to scroll straight up, straight down, or to the left or right depending on which arrow key was pressed. How do I go about accomplishing this? Any help or guidance is appreciated. To finish up, there are two points I'd like to clarify: [1] The reason I'm using linear wrapping on my starfield texture is that it gives a nice impression of an endless starfield. [2] Using a texture at least 1280 x 1280 in conjunction with a game window of 1024 x 768 means that at no point in its rotation will the edges of the texture become visible. Thanks for reading.

Update #1 - as requested by RCIX: The code below is what I was referring to earlier when I mentioned 'related coding experiments'. As you can see, I am scrolling a linearly wrapped texture in the direction I've moved the mouse relative to the center of the screen. This works perfectly if I don't rotate the texture, but once I do rotate it, the direction of the scrolling gets messed up for some reason.

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    int x;
    int y;
    float z = 250f;
    Texture2D Overlay;
    Texture2D RotatingBackground;
    Rectangle? sourceRectangle;
    Color color;
    float rotation;
    Vector2 ScreenCenter;
    Vector2 Origin;
    Vector2 scale;
    Vector2 Direction;
    SpriteEffects effects;
    float layerDepth;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize()
    {
        graphics.PreferredBackBufferWidth = 1024;
        graphics.PreferredBackBufferHeight = 768;
        graphics.ApplyChanges();
        Direction = Vector2.Zero;
        IsMouseVisible = true;
        ScreenCenter = new Vector2(graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2);
        Mouse.SetPosition((int)graphics.PreferredBackBufferWidth / 2, (int)graphics.PreferredBackBufferHeight / 2);
        sourceRectangle = null;
        color = Color.White;
        rotation = 0.0f;
        scale = new Vector2(1.0f, 1.0f);
        effects = SpriteEffects.None;
        layerDepth = 1.0f;
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        Overlay = Content.Load<Texture2D>("Overlay");
        RotatingBackground = Content.Load<Texture2D>("Background");
        Origin = new Vector2((int)RotatingBackground.Width / 2, (int)RotatingBackground.Height / 2);
    }

    protected override void UnloadContent() { }

    protected override void Update(GameTime gameTime)
    {
        float timePassed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        MouseState ms = Mouse.GetState();
        Vector2 MousePosition = new Vector2(ms.X, ms.Y);
        Direction = ScreenCenter - MousePosition;
        if (Direction != Vector2.Zero)
        {
            Direction.Normalize();
        }
        x += (int)(Direction.X * z * timePassed);
        y += (int)(Direction.Y * z * timePassed);
        // No rotation = texture scrolls as intended. With rotation = texture no longer
        // scrolls in the direction of the mouse. My update method needs to somehow
        // compensate for this.
        //rotation += 0.01f;
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.LinearWrap, null, null);
        spriteBatch.Draw(RotatingBackground, ScreenCenter, new Rectangle(x, y, RotatingBackground.Width, RotatingBackground.Height), color, rotation, Origin, scale, effects, layerDepth);
        spriteBatch.Draw(Overlay, Vector2.Zero, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
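One possible fix (a sketch of my own, not from the original question): the source-rectangle offset (x, y) lives in texture space, which is rotated by 'rotation' relative to the screen, so rotating the screen-space scroll direction back through the current rotation keeps scrolling axis-aligned. Assuming the Game1 fields above, Update() could look like this:

protected override void Update(GameTime gameTime)
{
    float timePassed = (float)gameTime.ElapsedGameTime.TotalSeconds;
    MouseState ms = Mouse.GetState();
    Vector2 mousePosition = new Vector2(ms.X, ms.Y);
    Vector2 screenDirection = ScreenCenter - mousePosition;
    if (screenDirection != Vector2.Zero)
    {
        screenDirection.Normalize();
    }

    // Undo the texture's rotation: map the screen-space direction back into
    // texture space before accumulating the source-rectangle offset.
    Vector2 textureDirection = Vector2.Transform(screenDirection, Matrix.CreateRotationZ(-rotation));

    x += (int)(textureDirection.X * z * timePassed);
    y += (int)(textureDirection.Y * z * timePassed);

    rotation += 0.01f; // rotating no longer skews the scroll direction

    base.Update(gameTime);
}

The same counter-rotation should work for the arrow-key case: build the screen-space direction from the pressed keys, then transform it by Matrix.CreateRotationZ(-rotation) before adding it to x and y.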

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements of this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious, but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe a bit of resentment from my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays little or no role in their decisions. 5.5 out of 6. My friend wants me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other stack exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • University teaches DOS-style C++, how to deal with it

    - by gaidal
    Half a year ago I had a look at the available programming programs. I chose this one because, unlike most of the choices: the majority of the courses seemed to be about something concrete and useful; the languages used are C++ and Java, which are platform-independent; later courses include developing for mobile devices and a course on Android development, which seemed modern and relevant. Now, after two introductory courses, we're just starting with C++, and my programming professor seems a bit weird. He's tested us on things like "why should you use constants" and "why are globals bad" in a kind of mechanical way, without much context, before teaching actual programming. His handouts use system("pause"), system("cls"), and getch() from some conio.h that seems ancient according to what I've read. I just did a task that was about printing the "ASCII letters from 32 to 255" (huh?), with an example picture showing a table with Windows' extended ASCII - of course I got other results for 128-255 on my Arch Linux that uses Unicode, and this isn't mentioned at all. I don't know, it just doesn't seem right... As if he is teaching programming because he has to, perhaps? Should I bring such things up? Hmm. I was looking forward to learning from someone who really knows stuff, and in an academic, rigorous way, like SICP or something. Aren't professors in programming supposed to be like that? I studied math for a while and every teacher and assistant there was really precise about what they said, but this is my second programming teacher that is sort of disappointing. Oh well. Now, question: Is this what to expect from universities, or Not OK, and how do I deal with it? I have never touched the language C++ (or C) until now, and am not the right person to jump up and say "This is So Wrong!", so if I google something and find 10 people who say "xxx is blasphemy", how do I skillfully communicate this? I do think it would be better for those classmates who are total beginners not to learn bad habits (such as these vibes of total ignorance of other platforms!) during the upcoming courses, but I don't want to disrespect the teacher. I don't know if it's reasonable or just cocky to bring up things like "what about other platforms?" or "but what about this article or Stack Overflow answer that I read that said..." for every assignment. Or, if he keeps ignoring non-Windows programming, should I give up and focus on my own projects, or somehow argue that this really isn't OK nowadays? Are there any programming teachers out there, and what do you think? By the way, these are web-based courses; all interaction between teachers and students takes place in a forum.

EDIT: A few answers seem to be making some incorrect assumptions, so maybe I should add a few things. I have been programming for fun on and off for 10 years, am pretty comfortable in 3 languages and read programming blogs etc. regularly. Also, I feel kind of done being a student, having a degree in another field. I just need another, relevant diploma to work as a programmer, so I'm going back for that. Studying computer science for 5 years is not for me anymore, even though I enjoy learning and solving problems in my free time. Second, let me highlight that I don't expect it to be like the industry at all, quite the contrary. I expect it to be academic, dry and unnecessarily correct. No, it's not just math. Every professor I have had in math, or Japanese (major) or Chinese (minor), has been very, very academic, discussing subtle points for hours with passion. But the courses I'm taking now and a previous one in programming don't seem serious. They resemble neither industry NOR academia. That is the problem. And it's not because I can't learn programming anyway. Third, I don't necessarily want to learn C++ or Android development, and I know I could teach myself those and anything else if I wanted to. But I am going back to school anyway, and those platform-independent languages and mobile stuff made me think that maybe they're serious about teaching something relevant here. Seems like I got this wrong, but we'll see.
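As an illustration of the portability point raised above, a minimal standard-C++ alternative to system("pause") / getch() might look like this (my sketch, not from the course material):

#include <iostream>
#include <limits>

int main()
{
    std::cout << "Hello, portable world!\n";

    // Portable stand-in for system("pause") / getch(): wait for Enter.
    // Works on Windows, Linux and macOS, unlike conio.h.
    std::cout << "Press Enter to continue...";
    std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

    return 0;
}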

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There's a growing gap between 20th-century marketing and a modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:

1. Targeting: Really Know Your Buyer
2. Engagement: Create a 1:1 Relationship
3. Conversion: Visualize Guided Thinking
4. Analysis: Learn What's Working
5. Marketing Technology: Enable and Extend the Cloud

Product News from Eloqua Experience 2013: We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts, easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.
Biggest and Best Eloqua Experience. There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards: The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

LINQ (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL databases, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e., you don't need to describe in detail how a task is to be solved; you describe what is to be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features this way. Then this construct is conceivable:

var largeFeatures =
    from feature in features
    where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
    select feature;

or its equivalent as a lambda expression:

var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) - only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, even though the query was created at a completely different place. Especially with multiple iterations through the entities, during your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

Realization

A very concise logic for constructing an IEnumerable<IFeature> can be achieved by running through an IFeatureCursor, returning each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
{
    IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
    IFeature feature;
    while (null != (feature = featureCursor.NextFeature()))
    {
        yield return feature;
    }
    // this is skipped in unit tests with cursor-mock
    if (Marshal.IsComObject(featureCursor))
    {
        Marshal.ReleaseComObject(featureCursor);
    }
}

So you can now easily generate the IEnumerable<IFeature>:

IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

You have to be careful with the recycling cursor. After a delayed execution in the same context, it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore, this expression would be critical:

largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

because ToList() iterates once through the list, so the cursor has already moved through all the features. The extension method ForEach() therefore always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above.

Perspective

Now you can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. This requires that only the method and property to access the enumerator have to be programmed. In the enumerator itself, the Reset() method re-executes the search.
This could be achieved with an appropriate delegate in the constructor:

new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

which is called in Reset():

public void Reset()
{
    _featureCursor = _resetCursor(_t);
}

In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single matter, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality.

Conclusion

Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance trap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience, the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, you have to balance maintainability against performance - and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is anyway not dominated by code execution but by waiting for user input.

Demo source code

You can download the source code for this prototype, with several unit tests, here: https://github.com/esride-apf/Linq2ArcObjects.
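For illustration, a minimal sketch of what such a FeatureEnumerator could look like (my assumption of its shape, based on the constructor and Reset() shown above, not the author's actual implementation):

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using ESRI.ArcGIS.Geodatabase;

// Hypothetical sketch of a resettable feature enumerator. The delegate
// re-opens the cursor on Reset(), so repeated iterations re-run the search.
public class FeatureEnumerator<T> : IEnumerator<IFeature>
{
    private readonly T _t;
    private readonly Func<T, IFeatureCursor> _resetCursor;
    private IFeatureCursor _featureCursor;

    public FeatureEnumerator(T t, Func<T, IFeatureCursor> resetCursor)
    {
        _t = t;
        _resetCursor = resetCursor;
        _featureCursor = _resetCursor(_t);
    }

    public IFeature Current { get; private set; }

    object System.Collections.IEnumerator.Current
    {
        get { return Current; }
    }

    public bool MoveNext()
    {
        Current = _featureCursor.NextFeature();
        return Current != null;
    }

    public void Reset()
    {
        // Re-execute the search so the next iteration starts from scratch.
        _featureCursor = _resetCursor(_t);
    }

    public void Dispose()
    {
        // Release the COM cursor; skipped when a mock is injected in tests.
        if (Marshal.IsComObject(_featureCursor))
        {
            Marshal.ReleaseComObject(_featureCursor);
        }
    }
}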

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is 'optional', i.e. is nullable. So, something like this: a lookup where all the columns are equal, even when NULL. NULLs are a tricky thing to initially wrap your mind around. Statements like "NULL is not equal to NULL and neither is it not not equal to NULL, it's NULL" can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy. Before we plod on, time to set up some data to demo against:

create table #SourceTable
(
    Id integer not null,
    SubId integer null,
    AnotherCol char(255) not null
)
go
create unique clustered index idxSourceTable on #SourceTable(id, subID)
go
with cteNums as
(
    select top(1000) number
    from master..spt_values
    where type = 'P'
)
insert into #SourceTable
select Num1.number, nullif(Num2.number, 0), 'SomeJunk'
from cteNums num1
cross join cteNums num2
go
create table #LookupTable
(
    Id integer not null,
    SubID integer null
)
go
insert into #LookupTable
select top(100) id, subid
from #SourceTable
where subid is not null
order by newid()
go
insert into #LookupTable
select top(3) id, subid
from #SourceTable
where subid is null
order by newid()

If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other.

First attempt – Let's just join

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and #LookupTable.SubID = #SourceTable.SubID

OK, that's a fail. We had 100 rows back; we didn't correctly account for the 3 rows that have null values. Remember, NULL <> NULL, and the join clause specifies SUBID = SUBID, which for those rows is not true.

Second attempt – Let's deal with those pesky NULLs

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and isnull(#LookupTable.SubID, 0) = isnull(#SourceTable.SubID, 0)

OK, that's the right result, well done, and 99.9% of the time that is where it's left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But, although that's true, this is a relational database we are using here, not a procedural language. SQL is a declarative language; we are making a request to the engine to get the results we want. How we ask for them can make a ton of difference. Let's look at the plan for our second attempt, specifically the clustered index seek on #SourceTable. There are two predicates: the 'seek predicate' and the 'predicate'. The 'seek predicate' describes how SQL Server has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the 'predicate' (aka residual probe)? This is a row-by-row operation. For each row found in the index matching the seek predicate, the leaf-level nodes have been scanned and tested using this logical condition. In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; if we can express our statement slightly differently, we can take full advantage of the index and make the test part of the 'seek predicate'.
Third attempt – X is null and Y is null

So, let's state the query in a slightly different manner:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and ( #LookupTable.SubID = #SourceTable.SubID
                       or (#LookupTable.SubID is null and #SourceTable.SubId is null) )

It's slightly wordier and may not be as clear in its intent to the human reader - that is what comments are for - but the key point is that it is now clearer to the query optimizer what our intention is. Let's look at the plan for that query, again specifically the index seek operation on #SourceTable. No 'predicate', just a 'seek predicate' against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked. But has it made a difference to the performance? Well, yes - a perhaps surprisingly large one. Clever query optimizer, well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability, your system will be in a much better position if you can.
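As a side note (a variation not used in the post above): T-SQL can also express a NULL-safe comparison with EXISTS and INTERSECT, since INTERSECT treats NULLs as equal. The optimizer generally handles this form well too:

-- NULL-safe join via EXISTS/INTERSECT: the single-row INTERSECT
-- returns a row exactly when the two values are equal, NULLs included.
select *
from #SourceTable s
join #LookupTable l
    on l.id = s.id
    and exists (select s.SubId intersect select l.SubID)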

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry - a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
(I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if it is critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
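To make the optimistic transaction pattern concrete, here is a minimal sketch in plain Java against java.util.concurrent (this illustrates the pattern only; it is not Coherence's own API, and the names are mine):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OptimisticReadDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();
        cache.put("counter", 41);

        // Optimistic pattern: state the assumption about the old value
        // explicitly. If another thread changed it in the meantime (the read
        // was stale), replace() fails and we retry with a fresh read.
        boolean committed = false;
        while (!committed) {
            Integer assumed = cache.get("counter");   // possibly-stale read
            Integer updated = assumed + 1;            // compute the new state
            committed = cache.replace("counter", assumed, updated);
        }

        System.out.println(cache.get("counter"));     // prints 42
    }
}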

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror site of www.example.com. This means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL, in which I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain is showing the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
RewriteRule ^robots\.txt /robots_static.txt [NC,L]

but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

EDIT: This is my .htaccess file

##
# @package Joomla
# @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
# @license GNU General Public License version 2 or later; see LICENSE.txt
##

##
# READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
#
# The line just below this section: 'Options +FollowSymLinks' may cause problems
# with some server configurations. It is required for use of mod_rewrite, but may already
# be set by your server administrator in a way that dissallows changing it in
# your .htaccess file. If using it causes your server to error out, comment it out (add # to
# beginning of line), reload your site in your browser and test your sef url's. If they work,
# it has been set by your server administrator and you do not need it set here.
##

## Can be commented out if causes errors, see notes above.
Options +FollowSymLinks

## Mod_rewrite in use.
RewriteEngine On

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
RewriteRule ^robots\.txt /robots_static.txt [NC,L]

## Begin - Rewrite rules to block out some common exploits.
# If you experience problems on your site block out the operations listed below
# This attempts to block the most common type of exploit `attempts` to Joomla!
#
# Block out any script trying to base64_encode data within the URL.
RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
# Block out any script that includes a <script> tag in URL.
RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
# Block out any script trying to set a PHP GLOBALS variable via URL.
RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
# Block out any script trying to modify a _REQUEST variable via URL.
RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
# Return 403 Forbidden header and show the content of the root homepage
RewriteRule .* index.php [F]
#
## End - Rewrite rules to block out some common exploits.

## Begin - Custom redirects
#
# If you need to redirect some pages, or set a canonical non-www to
# www redirect (or vice versa), place that code here. Ensure those
# redirects use the correct RewriteRule syntax and the [R=301,L] flags.
#
## End - Custom redirects

##
# Uncomment following line if your webserver's URL
# is not directly related to physical file paths.
# Update Your Joomla! Directory (just / for root).
##
# RewriteBase /

RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
RewriteCond %{THE_REQUEST} !/system/.*
RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
RewriteCond %{THE_REQUEST} ^GET

## Begin - Joomla! core SEF Section.
#
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
#
# If the requested path and file is not /index.php and the request
# has not already been internally rewritten to the index.php script
RewriteCond %{REQUEST_URI} !^/index\.php
# and the request is for something within the component folder,
# or for the site root, or for an extensionless URL, or the
# requested URL ends with one of the listed extensions
RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
# and the requested path and file doesn't directly match a physical file
RewriteCond %{REQUEST_FILENAME} !-f
# and the requested path and file doesn't directly match a physical folder
RewriteCond %{REQUEST_FILENAME} !-d
# internally rewrite the request to the index.php script
RewriteRule .* index.php [L]
#
## End - Joomla! core SEF Section.

<FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
Header set Cache-Control "public"
</FilesMatch>

<ifModule mod_headers.c>
Header set Connection keep-alive
</ifModule>

########## Begin - Remove Etags
# FileETag none
#
########## End - Remove Etags
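For reference, a minimal, isolated sketch of host-based robots.txt switching (my illustration, untested against the full Joomla ruleset above) places the rewrite before any redirect rules, so it cannot be short-circuited by an earlier [L] rule:

RewriteEngine On

# Internally serve robots_static.txt when the request arrives on the
# static mirror host; [L] stops later rules from re-processing it.
RewriteCond %{HTTP_HOST} ^www\.static\. [NC]
RewriteRule ^robots\.txt$ /robots_static.txt [NC,L]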

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't.) Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate? Closely related to the previous point: if a class is such that its main operation happens in a state that is arrived at by the program after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically?
I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

Summary: So my question is: What are some resources for learning about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it doesn't.
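To make the naming and assert-granularity questions concrete, a minimal MSTest sketch might look like this (the StackCalculator class and the naming convention are my assumptions, not a prescription):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test.
public class StackCalculator
{
    public int Add(int a, int b)
    {
        if (a < 0 || b < 0)
            throw new System.ArgumentOutOfRangeException(a < 0 ? "a" : "b");
        return a + b;
    }
}

[TestClass]
public class StackCalculatorTests
{
    // Convention: MethodUnderTest_Scenario_ExpectedOutcome. A failing test's
    // name then tells you what broke without opening the test body.
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var calculator = new StackCalculator();

        int result = calculator.Add(2, 3);

        // One logical assertion per test keeps failure reports unambiguous.
        Assert.AreEqual(5, result, "Add should return the arithmetic sum.");
    }

    [TestMethod]
    [ExpectedException(typeof(System.ArgumentOutOfRangeException))]
    public void Add_NegativeNumber_ThrowsArgumentOutOfRange()
    {
        var calculator = new StackCalculator();

        calculator.Add(-1, 3); // should throw; the attribute asserts it
    }
}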

    Read the article

  • Are there Negative Impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow, but I wasn't sure if it was good for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this) and the feedback I'm getting from my management team is that either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points, because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipe openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. And I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take apart code and put it back together. I'm interested in hearing people's thoughts (it doesn't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make), because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally, or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well). Update: Long story short, although code is involved, this is not so much about code as it is about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work and millions of dollars into it; our funds are pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free as well, so people can at least see the results (as long as we have the available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it, then as people learn from what we did, we can learn from what they do as well, and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today's posting we'll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let's first define a stream simulator as our data source:

var generator = myApp.DefineObservable(
    (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload()));

This 'generator' produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let's also define a sink for our example—an IObserver of double values that writes to the console:

var console = myApp.DefineObserver(
    (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))
    .Deploy("ConsoleSink");

The observer takes a string as parameter, which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well:

var avg = myApp
    .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
        from win in s.TumblingWindow(w)
        select win.Avg(e => e.Value))
    .Deploy("AverageQuery");

Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side:

var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>());

Subjects are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn't know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs:

var stream = subject.ToPointStreamable(
    e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
    AdvanceTimeSettings.StrictlyIncreasingStartTime);

In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above:

var longAverages = avg(stream, TimeSpan.FromSeconds(5));

In order to run the query, we need to bind it to a sink, and bind the subject to the source:

var standardQuery = longAverages
    .Bind(console("5sec average"))
    .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject));

Lastly, we start the process:

standardQuery.Run("StandardProcess");

Now we have a simple query running end-to-end, producing results.
What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the "AverageQuery") but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

// simulate the addition of a 'fast' query from a separate server connection,
// by retrieving the aggregation query fragment
// (instead of simply using the 'avg' object)
var averageQuery = myApp
    .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

// retrieve the input sequence as a subject
var inputSequence = myApp
    .GetSubject<SourcePayload, SourcePayload>("Subject");

// retrieve the registered sink
var sink = myApp.GetObserver<string, double>("ConsoleSink");

// turn the sequence into a temporal stream
var stream2 = inputSequence.ToPointStreamable(
    e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
    AdvanceTimeSettings.StrictlyIncreasingStartTime);

// apply the query, now with a different window length
var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

// bind new sink to query and run it
var fastQuery = shortAverages
    .Bind(sink("1sec average"))
    .Run("FastProcess");

The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

typedef int EntityID;
typedef int ModelID;
typedef Vector3 Position;
typedef Vector3 Velocity;

This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps.

EntityID eID;
ModelID mID;
if ( eID == mID ) // <- Compiler sees nothing wrong
{ /*bug*/ }

Position p;
Velocity v;
Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

Unfortunately, suggestions I've found for strongly typed typedefs include using Boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

template < typename T >
class IDType
{
    unsigned int m_id;
public:
    IDType( unsigned int const& i_id ): m_id {i_id} {};
    friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
};

Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

class EntityT;
typedef IDType<EntityT> EntityID;

class ModelT;
typedef IDType<ModelT> ModelID;

The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Before, multiplying a vector by a scalar would give another vector, so I could do:

typedef float Time;
typedef Vector3 Position;
typedef Vector3 Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t;

With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

class TimeT;
typedef Float<TimeT> Time;

class PositionT;
typedef Vector3<PositionT> Position;

class VelocityT;
typedef Vector3<VelocityT> Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t; // Compiler error

To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
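One way to spell out such a conversion explicitly (a sketch of my own, simplified to scalar payloads rather than a full Vector3 template) is a free operator* that maps Velocity and Time to Position:

#include <iostream>

// Minimal strongly typed wrappers, analogous to the IDType idea above,
// but carrying a float payload. The unused tag parameter T keeps the
// types distinct at compile time.
template <typename T>
struct Quantity
{
    float value;
    explicit Quantity(float v) : value{v} {}
};

struct TimeT {};
struct PositionT {};
struct VelocityT {};

typedef Quantity<TimeT>     Time;
typedef Quantity<PositionT> Position;
typedef Quantity<VelocityT> Velocity;

// The one conversion we allow: Velocity * Time -> Position.
Position operator*(Velocity const& v, Time const& t)
{
    return Position{v.value * t.value};
}

Position operator+(Position const& a, Position const& b)
{
    return Position{a.value + b.value};
}

int main()
{
    Time t{1.0f};
    Position p{0.0f};
    Velocity v{1.0f};

    Position newP = p + v * t;   // compiles: the conversion is explicit
    // Position bad = p + v;     // would not compile: no operator+(Position, Velocity)

    std::cout << newP.value << '\n';  // prints 1
}

Each additional conversion does indeed need its own operator, which is the bother mentioned above, but it also documents exactly which unit combinations the domain considers meaningful.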

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
The mobile internet is 12 times bigger than the internet was 12 years ago: explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems with them. Almost 60% want mobile app startup in under 2 seconds. An app with a poor mobile experience results in users not buying, going to a competitor, and disliking your company; battery life is part of that. A bad mobile app is worse than no app at all, because it turns people away from the brand. Apps didn't exist 10 years ago; they were a 72-billion-dollar-a-year business in 2013, projected to reach 151 billion in 2017.

Testing performance. Mobile is different from a regular app, and you need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different because of the radio resource control (RRC) state machine: the radio goes from idle to continuous reception, which drains the battery while data is sent and packets come through. After the packets come through, the radio is still on -- the "tail time" -- and only after about 10 seconds with no data does the radio turn off. YouTube, for example, holds the radio 10 to 15 seconds after every connection, which can be a huge drain on the battery; app traffic triggers the RRC state. The goal is to balance fast network connectivity against battery usage. ARO is free and open source, can test any platform, and has won awards.

How do I test my app? Capture the network with pcap or tcpdump, or use the native collector for Android and iOS (a rooted device is needed on Android). Test the app on the phone, including background data and idle time, to catch ads and analytics. The trace is graded against 25 best practices: you can see all the processes, all network traffic mapped to processes, and stats about the trace, and you can look at just your app and exclude Facebook, etc. Many tests were conducted, e.g., file download and HTML (wrapped applications, e.g., Cordova).

Best practices. Make stuff smaller: GZIP makes files smaller so they download faster (best for files larger than 800 bytes), and minification removes the tabs and comments the browser doesn't need -- give the processor just what it needs and separate the wheat from the chaff. Make images smaller too: a 1024x1024 image for a checkmark can be compressed to a third of its size -- ARO records the screen, and that checkmark could probably be 9 times smaller.

Download less stuff. 17% of HTTP content on mobile is duplicate data because of poor caching; reloading from cache is 75% to 99% faster than downloading again, a saving that also means the app starts up faster -- and everyone wants the app starting up in 2 seconds.

Make fewer HTTP requests. Inline and combine CSS and JS when possible to reduce the number of requests, and sprite images that are used often.

Use fewer connections, which is faster and uses less battery. For example, instead of downloading an image every 60 seconds, downloading an ad every 60 seconds, and sending analytics every 60 seconds, use a transaction manager to download everything at once, reducing the time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished: if you download a picture, the radio turns off 10 seconds later anyway, but if you do not explicitly close the connection, the server eventually closes it for you, adding 38% more tail time; closing the connection right away uses 40% less energy. Background data traffic is 27% of data and 55% of network time -- this kills the battery.

Look at redirection. Redirects add 200 to 600 ms per connection -- e.g., xyz.com redirects to www.xyz.com, which redirects to xyz.mobi, which redirects back to www.xyz.com. Use the waterfall visualization of the packets to see all the requests, and minimize redirects (a few are fine).

HTML best practices. Order matters, and so does hidden code: downloading JS blocks rendering, so always put CSS before JS or load JS asynchronously, and note that CSS "display:none" hides images from the user while the browser still downloads them, adding latency to the application. Some apps turn on the GPS for no reason. Tell the network when you are done, though another app may be using the radio at the same time.

It's all about knowing the best practices; everyone wins with ARO -- carriers (e.g., AT&T), developers, and customers: faster apps, better battery usage, less network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free and open source and can test all platforms.

    Read the article
