Search Results

Search found 4999 results on 200 pages for 'derived instances'.


  • Why are data structures so important in interviews?

    - by Vamsi Emani
    I am a newbie in the corporate world, recently graduated in computers. I am a Java/Groovy developer. I am a quick learner and I can learn new frameworks, APIs or even programming languages within a considerably short amount of time. Despite that, I must confess that I was not so strong in data structures when I graduated from college. Throughout the campus placements during my graduation, I witnessed that most of the big tech companies like Amazon, Microsoft etc. focused mainly on data structures. It appears as if data structures are the only thing they expect from a graduate. Adding to this, I see there is a general perception that a good programmer is necessarily one with good knowledge of data structures. To be honest, I felt bad about that. I write good code. I follow standard design patterns, and I do use data structures, but at a superficial level, through Java's exposed APIs like ArrayLists, LinkedLists etc. But the companies usually focused on the intricate aspects of data structures, like pointer-based memory manipulation and time complexities. Probably because of my Java-ish background, back then I understood code efficiency and logic only in terms of object-oriented programming (objects, instances, etc.) and never drilled down to the level of bits and bytes. I did not want people to look down on me for this knowledge deficit of mine in data structures. So really, why all this emphasis on data structures? Does not having knowledge of data structures really affect one's career in programming? Or is knowledge of this subject really a sufficient basis to differentiate a good programmer from a bad one?

    Read the article

  • Can static methods be called using object/instance in .NET

    The answer is yes and no: yes in C++, Java and VB.NET; no in C#. This is purely a compiler restriction in C#. You might see on some websites that we can break this restriction using reflection and delegates, but according to my little research we can't. I shall try to explain.

    The following code sample appears to break the rule using reflection; it seems that it is possible to call a static method using an object, p1:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    var p1 = new Person() { Name = "Smith" };
                    typeof(Person).GetMethod("TestStatMethod").Invoke(p1, new object[] { });
                }

                class Person
                {
                    public string Name { get; set; }

                    public static void TestStatMethod()
                    {
                        Console.WriteLine("Hello");
                    }
                }
            }
        }

    But I do not think this method is really being called using p1; rather, it is called using the type name, Person. I shall try to prove this with another example, where Test2 inherits from Test1. Let's look at various scenarios.

    Scenario 1:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    Test1 t = new Test1();
                    typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
                }
            }

            class Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method1");
                }
            }

            class Test2 : Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method2");
                }
            }
        }

    Output: At test1::Method2

    Scenario 2:

        static void Main()
        {
            Test2 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At test1::Method2

    Scenario 3:

        static void Main()
        {
            Test1 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At test1::Method2

    In all the scenarios above the output is the same. That means reflection does not consider the object you pass to the Invoke method in the case of static methods; it always uses the type you specify in typeof(). So what is the use of passing an instance to Invoke? Let's see the sample below:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    typeof(Test2).GetMethod("Method1").Invoke(null, new object[] { });
                }
            }

            class Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method1");
                }
            }

            class Test2 : Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method2");
                }
            }
        }

    Output: At test1::Method2

    I was able to invoke Method1 of Test2 without any object. No wonder here, as Method1 is static. So we may conclude that static methods cannot be called using instances (only in C#).

    Why has Microsoft restricted this in C#? Answer: there is really no use in calling static methods through objects, because static methods are stateless. Still, the latest Java and C++ compilers do allow calling static methods using instances.

    Java sample:

        class Test {
            public static void main(String str[]) {
                Person p = new Person();
                System.out.println(p.GetCount());
            }
        }

        class Person {
            public static int GetCount() {
                return 100;
            }
        }

    Output: 100

    Read the article

  • Fixing my SQL Directory NTFS ACLS

    - by Shawn Cicoria
    I run my development server by boot-to-VHD (Windows Server 2008 R2 x64). In that instance I also have an attached VHD (I attach it via script at boot time using Task Scheduler), and that VHD is where my SQL instances are installed. The other day, acting hastily, I clobbered my ACLs - wow, what a day after that. So, in order to fix it, I created this set of BAT commands that resets them back to an operational state. It's not 100% of what you originally get, and I didn't want to run a "repair", but everything is operational again.

        setlocal

        SET Inst100Path=H:\Program Files\Microsoft SQL Server\100

        REM GOTO SQLE
        SET InstanceName=MSSQLSERVER
        SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
        SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
        SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%

        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
        ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX

        REM THIS IS THE SQL EXPRESS INSTANCE
        :SQLE
        SET InstanceName=SQLEXPRESS
        SET InstIdPath=H:\Program Files\Microsoft SQL Server\MSSQL10.%InstanceName%
        SET Group=SQLServerMSSQLUser$SCICORIA-HV1$%InstanceName%
        SET AgentGroup=SQLServerSQLAgentUser$SCICORIA-HV1$%InstanceName%

        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%Group%":(OI)(CI)FX
        ICACLS "%InstIdPath%\MSSQL\backup" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\data" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\FTdata" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Jobs" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%Group%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%Group%":(OI)(CI)RX
        ICACLS "%Inst100Path%\shared\Errordumps" /T /Q /grant "%Group%":(OI)(CI)RXW
        ICACLS "%InstIdPath%\MSSQL" /T /Q /grant "%AgentGroup%":(OI)(CI)RX
        ICACLS "%InstIdPath%\MSSQL\binn" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%InstIdPath%\MSSQL\Log" /T /Q /grant "%AgentGroup%":(OI)(CI)F
        ICACLS "%Inst100Path%" /T /Q /grant "%AgentGroup%":(OI)(CI)RX

        endlocal

    Read the article

  • General Overview of Design Pattern Types

    Typically, most software engineering design patterns fall into one of three categories:

    - Creational patterns
    - Structural patterns
    - Behavioral patterns

    The creational pattern type is geared toward defining the preferred methods for creating new instances of objects. An example of this type is the Singleton pattern. The Singleton pattern can be used if an application needs only one instance of a class, and that singular instance also needs to be accessible across the application. The benefit of the Singleton pattern is that you control both instantiation and access.

    The structural pattern type describes how objects and classes can be composed into larger structures. An example of this type is the Façade pattern. The Façade pattern places a single, unified interface in front of a set of interfaces in a subsystem, and can be used to simplify a number of similar object interactions into one standard interface.

    The behavioral pattern type deals with communication between objects. An example of this type is the State design pattern. The State design pattern enables an object to alter its functionality and processing based on the internal state of the object at a given time.
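
    To make the Singleton example concrete, here is a minimal TypeScript sketch (the AppSettings class and its contents are hypothetical, purely for illustration):

        // Singleton sketch: one lazily created instance, reachable application-wide.
        class AppSettings {
            private static instance: AppSettings | null = null;

            // A private constructor prevents `new AppSettings()` from outside the class.
            private constructor(public readonly theme: string = "dark") {}

            // The single access point controls both instantiation and access.
            static getInstance(): AppSettings {
                if (AppSettings.instance === null) {
                    AppSettings.instance = new AppSettings();
                }
                return AppSettings.instance;
            }
        }

        const a = AppSettings.getInstance();
        const b = AppSettings.getInstance();
        console.log(a === b); // true: both variables reference the same instance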

    Read the article

  • Oracle VM Templates Available for E-Business Suite 12.1.3

    - by Steven Chan (Oracle Development)
    Oracle VM has matured into a formidable virtualization product over the years. Oracle E-Business Suite is certified to run production instances on both Oracle VM 2 and 3. This applies to EBS Releases 11i and 12. It also applies to future Oracle VM 3 updates, including subsequent Oracle VM 3.x releases.

    E-Business Suite 12.1.3 Oracle VM templates available now

    The latest EBS 12.1.3 templates for Oracle VM can be downloaded here: Oracle VM Templates: E-Business Suite. Templates are available for:

    - E-Business Suite 12.1.3 Vision (64-bit)
    - E-Business Suite 12.1.3 Production (32-bit)
    - E-Business Suite 12.x Sparse Middle Tiers (32-bit and 64-bit)

    Should EBS 11i users care? Yes. You can use these templates to get an EBS 12 testbed environment running in minutes. This is a great way of giving your end-users a chance to work with EBS 12 without the overhead of building an environment from scratch.

    References

    Oracle VM 3 supports a number of guest operating systems including various flavors and versions of Linux, Solaris and Windows. For information regarding certified platforms, installation and upgrade guidance, and prerequisite requirements, please refer to the Certifications tab on My Oracle Support as well as the following documentation:

    - Oracle VM Installation and Upgrade Guide
    - Introduction to Oracle VM, Oracle VM Manager and EBS template deployment (Note 1355641.1)

    Related Articles

    - Oracle VM 3 Certified with Oracle E-Business Suite
    - Support Policies for Virtualization Technologies and Oracle E-Business Suite
    - The Scoop: Oracle E-Business Suite Support on 64-bit Linux

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I've finally noticed the problem with making changes directly on the live server I've been working on: now that I have a couple of users on it, I bring it down far too often. I created an EC2 image of my live server and set up a separate instance, so now I have two EC2 instances, Stage and Production. I set up GitHub and push changes to Stage, test my code there, and when it's all done and working, I push to the production branch, and everything is good. There is one slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read the ENV flags and load the appropriate config. Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to production? Right now I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
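
    As an aside, here is a minimal TypeScript sketch of the ENV-flag config approach the poster describes (the file name, keys, and URLs are hypothetical; real per-server values would stay out of version control, as the poster already does with .gitignore):

        // config.ts: picks settings based on an environment variable.
        interface Config {
            redisUrl: string;
            mongoUrl: string;
        }

        const configs: Record<string, Config> = {
            stage: { redisUrl: "redis://127.0.0.1:6379", mongoUrl: "mongodb://127.0.0.1/app_stage" },
            production: { redisUrl: "redis://127.0.0.1:6379", mongoUrl: "mongodb://127.0.0.1/app" },
        };

        // Default to "stage" so a missing flag never points code at production data.
        const env = process.env.NODE_ENV ?? "stage";
        export const config: Config = configs[env] ?? configs["stage"];

    With this shape, the same code runs unchanged on both servers and only the NODE_ENV flag differs between them.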

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun, and I hope you enjoyed it too :-) The slides, as always, can be downloaded from http://apex.oracle.com/folien using the Schluesselwort/Keyword: upgrade112. And thanks to all those of you who sent feedback regarding "traget/destination" (I will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010, and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html - thanks for this hint!

    Regarding the open questions:

    Will there be PSUs available for Intel Solaris?
    PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location.

    Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4?
    I tried to verify this and could find neither a bug entry nor any documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences.

    Service Request escalation by telephone?
    Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 (updated on 24-FEB-2010). See the telephone number for requesting an escalation from Oracle Support here: http://www.oracle.com/support/contact.html

    Read the article

  • Slide-decks from recent Adelaide SQL Server UG meetings

    - by Rob Farley
    The UK has been well represented this summer at the Adelaide SQL Server User Group, with presentations from Chris Testa-O’Neill (isn’t that the right link? Maybe try this one) and Martin Cairney. The slides are available here and here. I thought I’d particularly mention Martin’s, and how it’s relevant to this month’s T-SQL Tuesday. Martin spoke about Policy-Based Management and the Enterprise Policy Management Framework, something which is remarkably under-used and yet can really impact your ability to look after environments. If you have policies set up, you can easily test each of your SQL instances to see if they still satisfy the set of policies you have defined. Automation (the topic of this month’s T-SQL Tuesday) should mean that your life is made easier, enabling you to do more. It shouldn’t remove the human element, but it should remove (most of) the human errors. People still need to manage the situation and work out what needs to be done, etc. We haven’t reached a point where computers can replace people, but they are very good at replacing the mundaneness and monotony of our jobs. They’ve made our lives more interesting (although many would rightly argue that they have also made them more complex) by letting us focus on the stuff that changes. Martin named his talk Put Your Feet Up, which nicely expresses the fact that managing systems shouldn’t be about running around checking things all the time. It must be about having systems in place which tell you when things aren’t going well. It’s never quite as simple as being able to actually put your feet up, but certainly no system should require constant attention. It’s definitely a policy we at LobsterPot adhere to, whether it’s an alert to let us know that an ETL package has run successfully, or a script that generates some code for a report. If things can be automated, it reduces the chance of error, reduces the repetitive nature of work and, in general, keeps both consultants and clients much happier.

    Read the article

  • Keep basic game physics separate from basic game object? [on hold]

    - by metamorphosis
    If anybody has dealt with a similar situation, I'd be interested in your experience/wisdom. I'm developing a 2D game library in C++. I have game objects with very basic physics, and they also have movement classes attached to differing states: for example, a different movement type based on whether the character is jumping, on ice, whatever. In terms of storing velocity and acceleration impulses, are they best held by the object, or by the associated movement class? The reason I ask is that I can see advantages to both approaches. If you store the physics data in the movement class, you have to pass physics information (impulses, gravity etc.) between class instances when a state change occurs, but the class has total control over whether those physics are updated or not. An obvious example of how this would be useful is an object affected by something which causes it to ignore gravity. On the other hand, if you store the physics data in the object class, it feels more logical, and you don't have to go around passing physics impulses and gravity, but the control that the movement class has over the object's physics becomes more convoluted. Basically the difference is between:

        object -> physics stacks (acceleration impulses etc.) -> physics functions -> movement type
        (the movement type makes physics function calls through the object)

    and

        object -> movement type -> physics stacks -> physics functions
        (the object forwards external physics calls onto the movement type, and transfers physics stacks between movement types when a state change occurs)

    Are there best practices here?
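
    To make the first option concrete, here is a minimal sketch (in TypeScript for brevity, though the question is about C++; all names are hypothetical) where the object owns the physics state and the active movement class only decides how to apply it:

        interface Vec2 { x: number; y: number; }

        // The object owns its physics data...
        class PhysicsState {
            velocity: Vec2 = { x: 0, y: 0 };
            impulses: Vec2[] = [];   // acceleration impulses queued for this frame
            ignoreGravity = false;   // a movement type may toggle this
        }

        // ...while movement types only define how that data evolves per state.
        interface MovementType {
            update(physics: PhysicsState, dt: number): void;
        }

        class JumpMovement implements MovementType {
            update(physics: PhysicsState, dt: number): void {
                if (!physics.ignoreGravity) {
                    physics.velocity.y += 9.8 * dt; // apply gravity
                }
                for (const i of physics.impulses) { // consume queued impulses
                    physics.velocity.x += i.x;
                    physics.velocity.y += i.y;
                }
                physics.impulses.length = 0;
            }
        }

        class GameObject {
            physics = new PhysicsState();
            movement: MovementType = new JumpMovement();

            // A state change needs no physics hand-off: the data stays on the object.
            setMovement(m: MovementType): void { this.movement = m; }

            update(dt: number): void { this.movement.update(this.physics, dt); }
        }

    In this shape the movement class still gets per-frame control, because it receives the state to mutate, while switching states avoids the transfer step of the second option.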

    Read the article

  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links “down on paper”. I last did one of these on Azure in Feb 2010.

    Cloud Links:

    - Article on Debugging in the Cloud
    - http://code.msdn.microsoft.com/azurescale - a sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed.
    - Azure In Action book is imminent :)
    - Running Memcached in Windows Azure, from the MS UK team
    - Using Microsoft Codename Dallas as a data source for Drupal, also from the MS UK team
    - I often mention them, but this post is the biz! Metodi on fault and upgrade domains
    - Detailed blog post comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux
    - AppFabric LABS allows you to test out and play with experimental AppFabric technologies
    - Details of the upcoming VM support in Windows Azure
    - Nice series of posts from J D Meier in the Patterns and Practices team:
        - How To Use ASP.NET Forms Auth with Azure Tables
        - How To Use ASP.NET Forms Auth with Roles in Azure Tables
        - How To Use ASP.NET Forms Auth with SQL Server on Windows Azure

    And sessions from MIX10, held March 15th to 17th:

    - Lap around the Windows Azure Platform - Steve Marx
    - Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 - Jim Nakashima
    - Building PHP Applications using the Windows Azure Platform - Craig Kitterman, Sumit Chawla
    - Using Ruby on Rails to Build Windows Azure Applications - Sriram Krishnan
    - Microsoft Project Code Name “Dallas": Data for your apps - Moe Khosravy
    - Using Storage in the Windows Azure Platform - Chris Auld
    - Building Web Applications with Windows Azure Storage - Brad Calder
    - Building Web Application with Microsoft SQL Azure - David Robinson
    - Connecting Your Applications in the Cloud with Windows Azure AppFabric - Clemens Vasters
    - Microsoft Silverlight and Windows Azure: A Match Made for the Web - Matt Kerner

    Something for everyone :)

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed a good idea to dedicate some time to basic cluster concepts and theory. This post omits many details that would explode the article's size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you overall complete definitions of cluster, cluster agent, quorum, voting, fencing, and split-brain condition, so the following is more of an explanation. Here we go.

    -------- Cluster, HA, failover, switchover, scalability --------

    An attempted definition of a cluster: a cluster is a set of two or more server nodes dedicated to keeping application services alive. The nodes communicate with each other through the cluster software/framework, test and probe the health status of server nodes and services, and use quorum-based decisions and switchover/failover techniques to keep the application services running and available. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other nodes take over and run the services, so that availability is guaranteed. Providing availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.

    At this point we have to add that this defines an HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case, instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

    -------- Cluster architecture, interconnect, topologies --------

    Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The simplest one is a clustered-pair topology, involving only two cluster nodes. There are several more topologies; the relevant documentation describes them in detail. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster.

    Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to survive only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case you should of course place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a geocluster, which is a different kind of animal. What is a geocluster? A geographic cluster, in Solaris Cluster terms, is actually a metacluster between two separate (locally HA) clusters.

    -------- Cluster resource types, agents, resources, resource groups --------

    So how does the cluster manage my applications? The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop it on node A, start it on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc.

    We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above: to move an Oracle DB service from one node to another, you switch the group between nodes, and the agents of the cluster resources in the group do the following.

    On node A:
    - shut down the DB
    - unconfigure the LogicalHost IP the DB listener listens on
    - unmount the filesystem

    Then, on node B:
    - mount the filesystem
    - configure the IP
    - start up the DB

    -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

    How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, in a production system every action has to be agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is gone, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, so it must be down. I'd better take control and run the services!" The problem is, as I have already mentioned, that cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances: now that is trouble. Also, in a two-node cluster each node has only 50% of the votes, that is, neither of them alone is allowed to run a cluster.

    This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster. For this requirement, a two-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split-brain condition), both nodes start to race and try to reserve the quorum device for themselves. They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The node that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build/run a cluster; the other one, realizing it was late, panics/reboots to ensure the cluster configuration stays consistent.

    Losing the interconnect doesn't only endanger the availability of services; it also endangers the consistency of the cluster configuration. Just imagine node A being down, and during that time the cluster configuration changes. Now node B goes down, and node A comes up. It isn't up to date about the cluster configuration's changes, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster would run with an older cluster configuration repository state, as if it had forgotten about the changes.

    Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn't part of the cluster, or can't currently join it, cannot access the devices. This procedure is called fencing. It usually happens to storage LUNs via SCSI reservation.

    Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south. You run a stretched cluster in the clustered-pair topology. Where do you place the quorum disk or server? If you put it into the north datacenter and that gets hit by a meteor, you lose one cluster node, which isn't a problem, but you also lose your quorum, and the south cluster node can't keep the cluster running, lacking the votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at, or to host a third cluster node. Otherwise, lacking a majority, if you lose the site that had your quorum, you lose the cluster.

    Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools or upgrade procedures; should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation. Oh, and as an additional source of information, I recommend the documentation (http://docs.oracle.com/cd/E23623_01/index.html) and the OTN Oracle Solaris Cluster site (http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html).
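
    As a footnote to the voting discussion above, here is a tiny TypeScript sketch (purely illustrative, not Solaris Cluster code) of the majority rule that decides whether a partition may form a cluster:

        // Total votes = one per node, plus one for the quorum device (if configured).
        // A partition may form or continue the cluster only with a strict majority.
        function canFormCluster(votesHeld: number, totalNodes: number, hasQuorumDevice: boolean): boolean {
            const totalVotes = totalNodes + (hasQuorumDevice ? 1 : 0);
            return votesHeld > totalVotes / 2; // strict majority: at least floor(total/2) + 1
        }

        // Two-node cluster with a quorum device, split brain: each node alone holds 1 of 3 votes.
        console.log(canFormCluster(1, 2, true)); // false: must win the quorum device first
        console.log(canFormCluster(2, 2, true)); // true: node vote + quorum device = 2 of 3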

    Read the article

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I’ve included the complete code for the IndexedDbDataSource at the very bottom of this blog entry.

    What is IndexedDB?

    IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript, then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And IndexedDB provides you with transactions, indices, and cursors - the elements of any modern database.

    An IndexedDB database might be different from the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores: you store JavaScript objects in an IndexedDB object store.

    You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here’s how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist:

        var reqOpen = window.indexedDB.open("TasksDB", 2);

        reqOpen.onupgradeneeded = function (evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        };

        reqOpen.onsuccess = function () {
            var db = reqOpen.result;
            // Do something with db
        };

    When you call window.indexedDB.open() and the database does not already exist, the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with it.

    You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store:

        var transaction = db.transaction("tasks", "readwrite");
        var reqAdd = transaction.objectStore("tasks").add({ name: "Feed the dog" });
        reqAdd.onsuccess = function () {
            // Task added successfully
        };

    The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised.

    Creating a WinJS IndexedDbDataSource

    The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object.
    The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice: you can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source.

    Implementing the IListDataSource Interface

    You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called, and a data adapter is passed to it.

    The code above also creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. It also overrides a method named remove(): our derived remove() method accepts any type of key and removes the matching item from the object store.

    Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface, which contains the following methods:

    · change()
    · getCount()
    · insertAfter()
    · insertAtEnd()
    · insertAtStart()
    · insertBefore()
    · itemsFromDescription()
    · itemsFromEnd()
    · itemsFromIndex()
    · itemsFromKey()
    · itemsFromStart()
    · itemSignature()
    · moveAfter()
    · moveBefore()
    · moveToEnd()
    · moveToStart()
    · remove()
    · setNotificationHandler()
    · compareByIdentity

    Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods.

    Implementing the getCount() Method

    The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000.
    Here’s how I implemented the getCount() method:

        getCount: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore().then(function (store) {
                    var reqCount = store.count();
                    reqCount.onerror = that._error;
                    reqCount.onsuccess = function (evt) {
                        complete(evt.target.result);
                    };
                });
            });
        }

    The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous, which is a good thing, because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete().

    Implementing the itemsFromIndex() Method

    When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints: these are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints, and you can return as many items before and after the request index as you please. Here’s how I implemented the itemsFromIndex() method:

        itemsFromIndex: function (requestIndex, countBefore, countAfter) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that.getCount().then(function (count) {
                    if (requestIndex >= count) {
                        return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                    }
                    var startIndex = Math.max(0, requestIndex - countBefore);
                    var endIndex = Math.min(count, requestIndex + countAfter + 1);
                    that._getObjectStore().then(function (store) {
                        var index = 0;
                        var items = [];
                        var req = store.openCursor();
                        req.onerror = that._error;
                        req.onsuccess = function (evt) {
                            var cursor = evt.target.result;
                            if (index < startIndex) {
                                index = startIndex;
                                cursor.advance(startIndex);
                                return;
                            }
                            if (cursor && index < endIndex) {
                                index++;
                                items.push({
                                    key: cursor.value[store.keyPath].toString(),
                                    data: cursor.value
                                });
                                cursor.continue();
                                return;
                            }
                            var results = {
                                items: items,
                                offset: requestIndex - startIndex,
                                totalCount: count
                            };
                            complete(results);
                        };
                    });
                });
            });
        }

    In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object, and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results.

    Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface; in particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface: each item should have a key and a data property.
    Implementing the insertAtEnd() Method

    When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here’s how I implemented it for the IndexedDbDataSource:

        insertAtEnd: function (unused, data) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqAdd = store.add(data);
                    reqAdd.onerror = that._error;
                    reqAdd.onsuccess = function (evt) {
                        var reqGet = store.get(evt.target.result);
                        reqGet.onerror = that._error;
                        reqGet.onsuccess = function (evt) {
                            var newItem = {
                                key: evt.target.result[store.keyPath].toString(),
                                data: evt.target.result
                            };
                            complete(newItem);
                        };
                    };
                });
            });
        }

    When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string which uniquely represents the new item added to the data source. The value of the data property represents the new item itself.

    Implementing the remove() Method

    Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key, while the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal(). Here’s what the removeInternal() method (together with the setNotificationHandler() method it relies on) looks like:

        setNotificationHandler: function (notificationHandler) {
            this._notificationHandler = notificationHandler;
        },

        removeInternal: function (key) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqDelete = store.delete(key);
                    reqDelete.onerror = that._error;
                    reqDelete.onsuccess = function (evt) {
                        that._notificationHandler.removed(key.toString());
                        complete();
                    };
                });
            });
        }

    The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.removed() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed, and the way that you notify the data source is by calling the _notificationHandler.removed() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method.
    You can raise the following types of notifications using the _notificationHandler:

    · beginNotifications()
    · changed()
    · endNotifications()
    · inserted()
    · invalidateAll()
    · moved()
    · removed()
    · reload()

    These methods are all part of the IListDataNotificationHandler interface in the WinJS library.

    Implementing the nuke() Method

    I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method:

        nuke: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqClear = store.clear();
                    reqClear.onerror = that._error;
                    reqClear.onsuccess = function (evt) {
                        that._notificationHandler.reload();
                        complete();
                    };
                });
            });
        }

    Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update.

    Using the IndexedDbDataSource

    To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting it (swipe or right-click) and clicking the Delete button. Here’s the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>DataSources</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" />
            <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script>
            <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script>

            <!-- DataSources references -->
            <link href="indexedDb.css" rel="stylesheet" />
            <script type="text/javascript" src="indexedDbDataSource.js"></script>
            <script src="indexedDb.js"></script>
        </head>
        <body>
            <div id="tmplTask" data-win-control="WinJS.Binding.Template">
                <div class="taskItem">
                    Id: <span data-win-bind="innerText:id"></span>
                    <br /><br />
                    Name: <span data-win-bind="innerText:name"></span>
                </div>
            </div>

            <div id="lvTasks" data-win-control="WinJS.UI.ListView"
                data-win-options="{
                    itemTemplate: select('#tmplTask'),
                    selectionMode: 'single'
                }"></div>

            <form id="frmAdd">
                <fieldset>
                    <legend>Add Task</legend>
                    <label>New Task</label>
                    <input id="inputTaskName" required />
                    <button>Add</button>
                </fieldset>
            </form>

            <button id="btnNuke">Nuke</button>
            <button id="btnDelete">Delete</button>
        </body>
        </html>

    And here is the JavaScript code for the TaskList app:

        /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" />
        /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" />

        function init() {
            WinJS.UI.processAll().done(function () {
                var lvTasks = document.getElementById("lvTasks").winControl;

                // Bind the ListView to its data source
                var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
                lvTasks.itemDataSource = tasksDataSource;

                // Wire-up Add, Delete, Nuke buttons
                document.getElementById("frmAdd").addEventListener("submit", function (evt) {
                    evt.preventDefault();
                    tasksDataSource.beginEdits();
                    tasksDataSource.insertAtEnd(null, {
                        name: document.getElementById("inputTaskName").value
                    }).done(function (newItem) {
                        tasksDataSource.endEdits();
                        document.getElementById("frmAdd").reset();
                        lvTasks.ensureVisible(newItem.index);
                    });
                });

                document.getElementById("btnDelete").addEventListener("click", function () {
                    if (lvTasks.selection.count() == 1) {
                        lvTasks.selection.getItems().done(function (items) {
                            tasksDataSource.remove(items[0].data.id);
                        });
                    }
                });

                document.getElementById("btnNuke").addEventListener("click", function () {
                    tasksDataSource.nuke();
                });

                // This method is called to initialize the IndexedDb database
                function upgrade(evt) {
                    var newDB = evt.target.result;
                    newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
                }
            });
        }

        document.addEventListener("DOMContentLoaded", init);

    The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code:

        var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
        lvTasks.itemDataSource = tasksDataSource;

    The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id:

        function upgrade(evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        }

    The Complete Code for the IndexedDbDataSource

    Here’s the complete code for the IndexedDbDataSource:

        (function () {

            /************************************************
             * The IndexedDbDataAdapter enables you to work
             * with an HTML5 IndexedDB database.
             *************************************************/
            var IndexedDbDataAdapter = WinJS.Class.define(
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._dbName = dbName;                   // database name
                    this._dbVersion = dbVersion;             // database version
                    this._objectStoreName = objectStoreName; // object store name
                    this._upgrade = upgrade;                 // database upgrade script
                    this._error = error || function (evt) { console.log(evt.message); };
                },
                {
                    /*******************************************
                     * IListDataAdapter Interface Methods
                     ********************************************/

                    getCount: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().then(function (store) {
                                var reqCount = store.count();
                                reqCount.onerror = that._error;
                                reqCount.onsuccess = function (evt) {
                                    complete(evt.target.result);
                                };
                            });
                        });
                    },

                    itemsFromIndex: function (requestIndex, countBefore, countAfter) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that.getCount().then(function (count) {
                                if (requestIndex >= count) {
                                    return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                                }
                                var startIndex = Math.max(0, requestIndex - countBefore);
                                var endIndex = Math.min(count, requestIndex + countAfter + 1);
                                that._getObjectStore().then(function (store) {
                                    var index = 0;
                                    var items = [];
                                    var req = store.openCursor();
                                    req.onerror = that._error;
                                    req.onsuccess = function (evt) {
                                        var cursor = evt.target.result;
                                        if (index < startIndex) {
                                            index = startIndex;
                                            cursor.advance(startIndex);
                                            return;
                                        }
                                        if (cursor && index < endIndex) {
                                            index++;
                                            items.push({
                                                key: cursor.value[store.keyPath].toString(),
                                                data: cursor.value
                                            });
                                            cursor.continue();
                                            return;
                                        }
                                        var results = {
                                            items: items,
                                            offset: requestIndex - startIndex,
                                            totalCount: count
                                        };
                                        complete(results);
                                    };
                                });
                            });
                        });
                    },

                    insertAtEnd: function (unused, data) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqAdd = store.add(data);
                                reqAdd.onerror = that._error;
                                reqAdd.onsuccess = function (evt) {
                                    var reqGet = store.get(evt.target.result);
                                    reqGet.onerror = that._error;
                                    reqGet.onsuccess = function (evt) {
                                        var newItem = {
                                            key: evt.target.result[store.keyPath].toString(),
                                            data: evt.target.result
                                        };
                                        complete(newItem);
                                    };
                                };
                            });
                        });
                    },

                    setNotificationHandler: function (notificationHandler) {
                        this._notificationHandler = notificationHandler;
                    },

                    /*****************************************
                     * IndexedDbDataSource Methods
                     ******************************************/

                    removeInternal: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqDelete = store.delete(key);
                                reqDelete.onerror = that._error;
                                reqDelete.onsuccess = function (evt) {
                                    that._notificationHandler.removed(key.toString());
                                    complete();
                                };
                            });
                        });
                    },

                    nuke: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqClear = store.clear();
                                reqClear.onerror = that._error;
                                reqClear.onsuccess = function (evt) {
                                    that._notificationHandler.reload();
                                    complete();
                                };
                            });
                        });
                    },

                    /*******************************************
                     * Private Methods
                     ********************************************/

                    _ensureDbOpen: function () {
                        var that = this;
                        // Try to get the cached db
                        if (that._cachedDb) {
                            return WinJS.Promise.wrap(that._cachedDb);
                        }
                        // Otherwise, open the database
                        return new WinJS.Promise(function (complete, error, progress) {
                            var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion);
                            reqOpen.onerror = function (evt) {
                                error();
                            };
                            reqOpen.onupgradeneeded = function (evt) {
                                that._upgrade(evt);
                                that._notificationHandler.invalidateAll();
                            };
                            reqOpen.onsuccess = function () {
                                that._cachedDb = reqOpen.result;
                                complete(that._cachedDb);
                            };
                        });
                    },

                    _getObjectStore: function (type) {
                        type = type || "readonly";
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._ensureDbOpen().then(function (db) {
                                var transaction = db.transaction(that._objectStoreName, type);
                                complete(transaction.objectStore(that._objectStoreName));
                            });
                        });
                    },

                    _get: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().done(function (store) {
                                var reqGet = store.get(key);
                                reqGet.onerror = that._error;
                                reqGet.onsuccess = function (item) {
                                    complete(item);
                                };
                            });
                        });
                    }
                }
            );

            var IndexedDbDataSource = WinJS.Class.derive(
                WinJS.UI.VirtualizedDataSource,
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                    this._baseDataSourceConstructor(this._adapter);
                },
                {
                    nuke: function () {
                        this._adapter.nuke();
                    },
                    remove: function (key) {
                        this._adapter.removeInternal(key);
                    }
                }
            );

            WinJS.Namespace.define("DataSources", {
                IndexedDbDataSource: IndexedDbDataSource
            });
        })();

    Summary

    In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications, such as a removed or invalidateAll notification, by taking advantage of the methods of the IListDataNotificationHandler interface.

    Read the article

  • Entity Framework & Transactions

    - by Sudheer Kumar
    There are many instances where we might have to use transactions to maintain data consistency. With Entity Framework, it is a little different conceptually.

    Case 1 - A transaction spanning multiple SaveChanges() calls: if you just use a transaction scope here, Entity Framework (EF) will use a distributed transaction instead of a local one. The reason is that EF opens and closes the connection only when required, which means it uses two different connections for the different SaveChanges() calls. To resolve this, open the connection explicitly so the transaction does not span multiple connections:

        try
        {
            using (TransactionScope ts = new TransactionScope())
            {
                context.Connection.Open();
                // Operation 1: context.SaveChanges();
                // Operation 2: context.SaveChanges();
                // At the end, commit the transaction
                ts.Complete();
            }
        }
        catch (Exception ex)
        {
            // Handle exception
        }
        finally
        {
            if (context.Connection.State == ConnectionState.Open)
            {
                context.Connection.Close();
            }
        }

    Case 2 - A transaction between DB and non-DB operations: for example, assume you have a table that keeps track of emails to be sent. Here you want to update certain details, like DateSent, only if the mail was successfully sent by the e-mail client.

        Email eml = GetEmailToSend();
        eml.DateSent = DateTime.Now;

        using (TransactionScope ts = new TransactionScope())
        {
            // Update the DB
            context.SaveChanges();

            // If the update was successful, send the email using the SMTP client
            smtpClient.Send();

            // If the send was successful, then commit
            ts.Complete();
        }

    Here, since you are dealing with a single context.SaveChanges(), you just need the TransactionScope wrapped around the save and the send.

    Hope this was helpful!

    Read the article

  • ANTS Memory Profiler 8 released!

    - by Ben Emmett
    I’m excited to say that we’ve just released ANTS Memory Profiler 8! The big news is support for profiling .NET’s usage of unmanaged memory. There are two main parts to this. Firstly, you can see a breakdown of unmanaged memory usage by module. This lets you see at a high level where unmanaged memory is being used – for example, in the image below it’s being used by a PDF generation library. Secondly, when looking at a list of .NET classes, you can see how much unmanaged memory those classes are responsible for holding on to. You can also see that information for individual instances of those classes. Some clues that you might need this: You’re using system objects or third-party components which deal with unmanaged memory under the hood (this includes things like the GDI+ functions used for working with bitmaps). Your application still relies on some legacy Delphi / C++ / etc. code left over from the days before your company moved over to .NET. You’ve used a previous version of ANTS Memory Profiler, and have ever seen a pie chart that looks something like this: You’ll also notice that the startup process has been entirely redesigned, bringing it in line with ANTS Performance Profiler 8, which was released earlier in the year. This makes it faster to start profiling and to run repeat profiling sessions, lets you profile using any browser instead of just Internet Explorer, and also provides a host of stability improvements, particularly when launching websites in IIS. Download the new version (there’s a free trial), and as always I’d love to know what you think – just email [email protected]. Cheers! Ben

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement: a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on low-bandwidth networks, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single-page web apps built with frameworks like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single-page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and I am missing something here with my mental model?

    Read the article

  • Rackspace Cloud Servers in Europe?

    - by mit
    We have set up a cloud virtual server at Rackspace in the US, but we use it from Europe, and I've found that I am not quite happy with the response time. Of course I knew there would be some latency, but I am not sure whether it is just the overseas latency (ping is 120 ms) or also the minimal resources. It is the smallest machine: 256 MB RAM, 10 GB disk, running a MediaWiki on Ubuntu 10.04 64-bit. The instance lives in the Rackspace ORD1 datacenter. As soon as they have opened their new facilities in the UK we plan to move the instance there, but we are already planning more machines. The pricing is quite attractive. I don't really want to do a lot of measuring and benchmarking, so I am just asking for your opinions; it would be nice to hear what you can tell from your experience. Maybe someone here uses such small instances in the US. And what can we really expect if we upgrade to more resources?

    Read the article

  • Android Design - Service vs Thread for Networking

    - by Nevyn
    I am writing an Android app, finally (yay me), and for this app I need persistent, but user-closeable, network sockets (yes, more than one). I decided to try my hand at writing my own version of an IRC client. My design issue, however, is that I'm not sure how to run the socket connectivity itself. If I put the sockets at the Activity level, they keep getting closed shortly after the Activity becomes non-visible (also a problem that needs solving... but I think I figured that one out). But if I run a "connectivity service", I need to find out whether I can have multiple instances of it running (the service, that is; one per server/socket). Either that, or I need a way to thread the sockets themselves and have multiple threads running that I can still communicate with directly (via an ID system of some sort). Thus the question: is it a 'better', or at least more "proper", design pattern to put the socket and networking in a service and have the Activities consume said service, or should I tie the sockets directly to some threaded process owned by the UI Activity and not bother with the service implementation at all? I do know better than to put the networking directly on the UI thread, but that's as far as I've managed to get.

    Read the article

  • General directions on developing a server side control system for JS/Canvas Action RPG

    - by Billy Ninja
    Well, yesterday I asked about anti-cheat in JS, and confirmed what I kind of already knew: it's just not possible. Now I want to measure, roughly, how hard it is to implement server-side checking that is agnostic to client input and does not mess with the game experience too much. I don't want to waste too many resources on this, since it's initially going to be a single-player game into which I may introduce some kind of ranking or trading system later on; I'd rather deliver more cool game features instead. I don't want to have to guarantee super-fast server responses to keep the game lag-free. I'd rather go with looser, discrete control of key variables and instances: for example, storing the user's actions in a FIFO buffer on the client and pushing those actions to the server gradually. I'd love to see an elegant, generic solution that I could plug into the root of my client game logic (not having to scatter checks everywhere in my client JS) and a few classes on the Node.js server that could handle it, without having to mirror/describe all of my game entities a second time on the server.

    Read the article

  • Apache mod_rewrite for multiple domains to SSL

    - by Aaron Vegh
    Hi there, I'm running a web service that will allow people to create their own "instances" of my application, running under their own domains. These people will create an A record to forward a subdomain of their main domain to my server. The problem is that my server runs everything under SSL. So in my configuration for port 80, I have the following:

<VirtualHost *:80>
    ServerName mydomain.com
    ServerAlias www.mydomain.com
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule /(.*) https://mydomain.com/$1 [R=301]
</VirtualHost>

This has worked well to forward all requests from the http domain to the https one. But as I said, I now need to let any domain automatically forward to the secure version of itself. Is there a rewrite rule that will let me take the incoming domain and rewrite it to the https version of the same? So that the following matches would occur: http://some.otherdomain.com -> https://some.otherdomain.com http://evenanotherdomain.com -> https://evenanotherdomain.com Thanks for your help! Apache mod_rewrite makes my brain hurt. Aaron.
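    One approach that should work here (a sketch using the standard mod_rewrite server variable %{HTTP_HOST}; untested against this exact setup, and it assumes the vhost catches the customer domains, e.g. by being the default vhost or using a wildcard ServerAlias): capture the incoming host instead of hard-coding the domain:

<VirtualHost *:80>
    # Catch-all for customer domains pointed at this server
    ServerName mydomain.com
    ServerAlias *
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    # %{HTTP_HOST} preserves whatever hostname the request arrived on
    RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</VirtualHost>

This redirects http://some.otherdomain.com/... to https://some.otherdomain.com/... without naming each domain, although each of those domains still needs a matching SSL certificate on the port 443 side.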

    Read the article

  • How to fix VMware Workstation 9 installation on ubuntu 12.10?

    - by Alessandro Belloni
    I have opened this thread because I upgraded to the Ubuntu 12.10 beta (kernel 3.5) and I have a problem with VMware Workstation 9: "Unable to change virtual machine power state: Cannot find a valid peer process to connect to". Does anyone have the same problem? This is a clean install of Ubuntu 12.10 (daily build). I installed VMware Workstation 9 and applied the patch, but it's not working; I can't get the modules to patch and build correctly. My laptop is a Lenovo T420 with Nvidia Optimus technology. This message is shown when I try to apply the patch:

Stopping VMware services: VMware Authentication Daemon done
At least one instance of VMware VMX is still running. Please stop all running instances of VMware VMX first.
VMware Authentication Daemon done
Unable to stop services

How can I stop the VMware services so that I can apply the patch? This message is shown when I try to patch again:

./patch-modules_3.5.0.sh
/usr/lib/vmware/modules/source/.patched found. You have already patched your sources. Exiting

But VMware is not working, and I can’t uninstall it.

    Read the article

  • Revamped Google Webmaster Tools

    I was pleasantly surprised to realize today that Google's Webmaster Tools have had a minor overhaul and now provide more details than before. Most obvious are the changes on the dashboard, where the Top Search Queries now provide information about impressions and clickthroughs instead of the rankings shown before. Only the links for the search expressions are missing. It seems that the Top Search Queries were the focus of this update. The section is now spiced with detailed graphs about what happened on your site during selectable periods. Well, it seems the Webmaster Tools mimic a stripped-down version of Google Analytics... I was very pleased by the details that are offered when you click on a single query term. It is really nice to see the search rankings and your responsible URLs at the same time. Before, you had to put two browser instances side by side to achieve this kind of overview. Personally, I like the approach to visualizing statistics that Google and other providers take. It gives you a quick and informative overview, and enables you to dig further into details about peaks and lows in your visits, page impressions or clickthroughs.

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk

It is a Dell Precision M4500 with a 2.8 GHz Intel Core i7 running 64-bit Windows 7, with a 15.6” widescreen, 8 GB RAM, and a 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward.  I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let’s just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean, instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week.  But that’s a different story for a different time.

The main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them:

- Virtual Clone Drive: Easily mount an ISO image as a DVD drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools.
- SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but is intended for development use.
- SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server.  There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade them, after which they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon, in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
- Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it.  I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual InterDev.
- Vault Professional Client: Several years ago we replaced Visual SourceSafe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional.
- Red Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red Gate bought it, and then Red Gate’s first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else’s code when their indenting was nonexistent, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer’s copy of the database in sync with the others.
- Fiddler: Helps you watch the whole traffic stream on web visits.  I haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  It has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
- Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  It has really helped with some refactoring work, to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  A fantastic tool, at a bargain price.
- SciTE: A Scintilla-based text editor, and a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible.
- Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.
- WinZip (with the Command-Line add-in): The classic compression program, with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned build packages are zip files.
- XML Notepad: Haven’t used this a lot myself, but one of my team really likes it for examining large XML files.
- LINQPad: Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
- SQL Sentry Plan Explorer: SQL Server Show Plan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for compressing the graphical plan into a more readable layout.
- Araxis Merge: A great diff and merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge.  For a while, we also produced diff reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important, hot fixes on a politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
- Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer, to keep an eye on the disk space consumed by database files.
- Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.
- DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.
- Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.
- FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
- Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard.  Synergy is the magic software that makes this work.
- TweetDeck: I’m not the most active tweeter in the world, but when I want to check in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

Whew!  That’s a lot.  No wonder it took me a couple of days to get everything set up the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • MSDN Subscriber Benefits

    - by kaleidoscope
    Windows Azure Platform offer. The four columns below are: the introductory MSDN Premium offer, then the ongoing MSDN subscription benefits for Visual Studio Ultimate with MSDN, Visual Studio Premium with MSDN, and Visual Studio Professional with MSDN.

- Windows Azure compute hours per month: 750 | 250 | 100 | 50
- Storage: 10 GB | 7.5 GB | 5 GB | 3 GB
- Transactions per month: 1,000,000 | 750,000 | 500,000 | 300,000
- AppFabric Service Bus messages per month: 1,000,000 | 1,000,000 | 500,000 | 300,000
- SQL Azure Web Edition (1 GB databases): 3 | 3 | 2 | 1
- Data transfers per month, Europe and North America: 7 GB in / 14 GB out | 5 GB in / 10 GB out | 3 GB in / 6 GB out | 2 GB in / 4 GB out
- Data transfers per month, Asia Pacific: 2.5 GB in / 5 GB out | 2 GB in / 4 GB out | 1 GB in / 2 GB out | .5 GB in / 1 GB out
- Available for sign-up: January 4, 2010* (introductory) | after completion of your 8-month introductory Windows Azure benefit (ongoing)
- Duration of benefit: 8 months (introductory) | while the MSDN subscription remains active (ongoing)
- Subscription levels receiving benefit**: MSDN Premium & BizSpark | Visual Studio Ultimate with MSDN & BizSpark | Visual Studio Premium with MSDN | Visual Studio Professional with MSDN
- Estimated retail value: $1038 (8 months) | $812/year | $436/year | $223/year

This introductory offer will last for 8 months from the time you sign up. After that, you'll cancel your introductory account and sign up for the ongoing MSDN benefit based on your subscription level. The easiest way to cancel your introductory account is to set it to not "auto-renew". Think of "compute" as an instance of your application running in the cloud. So with 750 hours per month, you can keep a single instance running non-stop all month long. Or run 2 compute instances for two weeks a month. Or 4 for a week apiece. Lokesh, M

    Read the article

  • Tip 16 : Open Multiple Documents within Single Application Instance Using C#

    - by StanleyGu
    1. Using Microsoft Word 2007 as an example, you can open test1.docx and test2.docx at the same time. The two documents are opened within a single instance of the Word application. The Word application supports a command-line argument for passing multiple documents.
2. Again, using Microsoft Word 2007 as an example, you can open test1.docx first and then test2.docx. The two documents are opened within a single instance of the Word application. Word supports the Multiple Document Interface (MDI).
3. Using Notepad as an example, you receive the error message “The filename, directory name, or volume label syntax is incorrect” if you try to open two documents at the same time. Notepad does not support a command-line argument for passing multiple documents.
4. Again, using Notepad as an example, you can open test1.txt first and then test2.txt. The two documents are opened in two different instances of the Notepad application. Notepad does not support the Multiple Document Interface (MDI).
5. In conclusion, you cannot rely on the System.Diagnostics.Process class alone to open multiple documents within a single instance of an application, because that behavior is controlled by the application itself. The best approach is to read the developer or user guide of the application and make sure that: 1. the application supports the Multiple Document Interface (MDI); and 2. the application provides a command-line argument for passing multiple documents. Then you can use the Process class and the command-line argument syntax to open multiple documents in the application.
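    For example, here is a minimal sketch of the Process-based approach for point 1 (assuming Word 2007 is installed so that winword.exe can be resolved, and that test1.docx and test2.docx exist in the working directory; the file names are illustrative, not taken from the tip itself):

using System.Diagnostics;

class OpenMultipleDocuments
{
    static void Main()
    {
        // Word accepts multiple documents on its command line,
        // so both files open within a single Word instance.
        Process.Start("winword.exe", "\"test1.docx\" \"test2.docx\"");
    }
}

The same call with notepad.exe would fail with the error quoted in point 3, because Notepad does not parse multiple file arguments on its command line.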

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >