Search Results

Search found 9236 results on 370 pages for 'discrete space'.


  • How does java.util.Collections.contains() perform faster than a linear search?

    - by The111
    I've been fooling around with a bunch of different ways of searching collections, collections of collections, etc., doing lots of stupid little tests to verify my understanding. Here is one which boggles me (source code further below). In short, I am generating N random integers and adding them to a list. The list is NOT sorted. I then use Collections.contains() to look for a value in the list. I intentionally look for a value that I know won't be there, because I want to ensure that the entire list space is probed. I time this search. I then do another linear search manually, iterating through each element of the list and checking if it matches my target. I also time this search. On average, the second search takes 33% longer than the first one.

    By my logic, the first search must also be linear, because the list is unsorted. The only possibility I could think of (which I immediately discard) is that Java is making a sorted copy of my list just for the search, but (1) I did not authorize that usage of memory space and (2) I would think that would result in MUCH more significant time savings with such a large N. So if both searches are linear, they should both take the same amount of time. Somehow the Collections class has optimized this search, but I can't figure out how. So... what am I missing?

        import java.util.*;

        public class ListSearch {
            public static void main(String[] args) {
                int N = 10000000;  // number of ints to add to the list
                int high = 100;    // upper limit for random int generation
                List<Integer> ints;
                int target = -1;   // target will not be found, forces search of entire list space
                long start;
                long end;

                ints = new ArrayList<Integer>();
                start = System.currentTimeMillis();
                System.out.print("Generating new list... ");
                for (int i = 0; i < N; i++) {
                    ints.add(((int) (Math.random() * high)) + 1);
                }
                end = System.currentTimeMillis();
                System.out.println("took " + (end-start) + "ms.");

                start = System.currentTimeMillis();
                System.out.print("Searching list for target (method 1)... ");
                if (ints.contains(target)) {
                    // nothing
                }
                end = System.currentTimeMillis();
                System.out.println(" Took " + (end-start) + "ms.");
                System.out.println();

                ints = new ArrayList<Integer>();
                start = System.currentTimeMillis();
                System.out.print("Generating new list... ");
                for (int i = 0; i < N; i++) {
                    ints.add(((int) (Math.random() * high)) + 1);
                }
                end = System.currentTimeMillis();
                System.out.println("took " + (end-start) + "ms.");

                start = System.currentTimeMillis();
                System.out.print("Searching list for target (method 2)... ");
                for (Integer i : ints) {
                    // nothing
                }
                end = System.currentTimeMillis();
                System.out.println(" Took " + (end-start) + "ms.");
            }
        }
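    One plausible explanation, offered here as a hedged note rather than a definitive answer: if the list is an ArrayList, contains() delegates to indexOf(), which scans the backing storage by index, while the enhanced for loop drives an Iterator, paying for hasNext()/next() calls and a modification check on every element. A minimal sketch of the two loop shapes (the class and method names below are illustrative, not from the question):

        import java.util.Iterator;
        import java.util.List;

        class LoopShapes {
            // Roughly the shape of ArrayList.indexOf(Object): a plain indexed scan.
            static int indexScan(List<Integer> ints, Object target) {
                for (int i = 0; i < ints.size(); i++) {
                    if (target.equals(ints.get(i))) {
                        return i;
                    }
                }
                return -1;
            }

            // Roughly what the enhanced for loop compiles to: an Iterator-driven scan.
            static boolean iteratorScan(List<Integer> ints, Object target) {
                for (Iterator<Integer> it = ints.iterator(); it.hasNext(); ) {
                    Integer candidate = it.next(); // per-element iterator overhead lives here
                    if (target.equals(candidate)) {
                        return true;
                    }
                }
                return false;
            }
        }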

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let's take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each element of the pixel data.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

    As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we're using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one, independent operation on each element based on those loop positions.  This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.
    In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

    If we parallelize both loops, we're potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code:

    Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this:

    Partition your problem in a way to place the most work possible into each task.

    This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop.  If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance.  If we're iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we'll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let's take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist.
    If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement; however, this will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword (a sketch follows at the end of this post).

    Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing "work" is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
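    To make the do/while point concrete, here is a minimal sketch (not from the original post) of one way to drive a while-style loop through Parallel.ForEach: wrap the loop in an iterator method that yields each item, then hand the resulting IEnumerable<T> to Parallel.ForEach.  The readNext and process delegates are placeholders for whatever your loop condition and per-item work actually are.

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        static class WhileLoopParallelism
        {
            // Wrap a while-style loop in an iterator; each yield produces one work item.
            static IEnumerable<string> ReadWorkItems(Func<string> readNext)
            {
                string item;
                while ((item = readNext()) != null)
                {
                    yield return item;
                }
            }

            public static void ProcessAll(Func<string> readNext, Action<string> process)
            {
                // Parallel.ForEach consumes the IEnumerable<string> and runs the body concurrently.
                Parallel.ForEach(ReadWorkItems(readNext), item => process(item));
            }
        }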

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework.

    The scenario is this: I will pass in the userid and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

    Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling).

    When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and its dependent artifacts into the project if you intend to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient).

    Note: The perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures.

    If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

    Once the Invoke action is connected to the Web Service an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this click on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the caller.

    You need to now add an Assign node to assign the input to the Web Service. To do this select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to the BPEL process to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

    Almost there. You now need to pass the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process.
    In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map.

    The mapping screen shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the mapping line. I now attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping (a sketch of the generated XSL appears at the end of this post).

    I can now save the whole project as my BPEL process is now complete. As you can see the following happens:

    - We accept input from the client (the userid for the call) in the receiveInput step.
    - We assign that value to the input parameters for the Web Service call in the AssignUser step.
    - We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    - We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    - We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principles.
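    The Transform step generates an XSL mapper file behind the scenes. As a rough illustration only, the concatenation configured above might be expressed along these lines; the element and namespace names here are invented placeholders, not taken from the actual AS-User WSDL:

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:ns1="http://example.com/AS-User">
          <xsl:template match="/">
            <ns1:TransformNameResponse>
              <!-- Concatenate first and last name with a space, as configured in the mapper -->
              <ns1:result>
                <xsl:value-of select="concat(/ns1:userDetails/ns1:firstName, ' ', /ns1:userDetails/ns1:lastName)"/>
              </ns1:result>
            </ns1:TransformNameResponse>
          </xsl:template>
        </xsl:stylesheet>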

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk

    In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

    What are my Options?

    There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include:

    1. Create new virtual disk images for each new VM.
    2. Clone the existing disk images and point the new VM at the cloned images.
    3. Point the new VM at the existing snapshots.

    #1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short!

    #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need.

    #3 is the most space efficient way of doing things.  It does mean however that I can only run the new "cloned" images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all three images together.  So this is the approach we will follow.

    Snapshot, What Snapshot?

    As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the "Media Manager" from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the "Attached to:" comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked.

    Network or NotWork?

    Because we want the two new machines to communicate with each other when hosted in different physical machines we can't use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world.

    To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don't get any clashes with other machines on your network.
    Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images and as long as I don't have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that.

    Creating the New VMs

    Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a "Boot Hard Disk" I used the "Choose a virtual hard disk file…" link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine.

    After the initial creation of the virtual machine go into the network settings section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux.

    Reconfiguring Linux

    Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings.

    Changing the Hostname

    I renamed both hosts by running the hostname command as root:

        hostname vboxfmw.oracle.com

    I also edited the /etc/sysconfig/network file and set the correct hostname in there:

        HOSTNAME=vboxfmw.oracle.com

    Changing the Network Settings

    I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went into the System->Preferences->Network Connections screen and explicitly set the "Device MAC address" for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the Bridged adapter to use fixed IP addressing rather than DHCP (a sample interface file is sketched below).  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment.

    Updating the Hosts File

    Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to:

    - refer to the new machine name
    - add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address)
    - add a new line for the fixed IP address of the other virtual machine

        10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
        127.0.0.1       localhost.localdomain   localhost
        ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

    To make sure everything takes effect I restarted the server.
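    For anyone who prefers editing the config files directly rather than using the GUI, a minimal sketch of what the bridged adapter's interface file could look like on OEL6 follows. The device name eth1, the MAC address and the netmask are assumptions for illustration; substitute the values from your own adapters.

        # /etc/sysconfig/network-scripts/ifcfg-eth1 - bridged adapter with a fixed IP (illustrative values)
        DEVICE=eth1
        HWADDR=08:00:27:00:00:02
        TYPE=Ethernet
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=10.0.3.101
        NETMASK=255.255.255.0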
    Reconfiguring the Database on the DB Machine

    Because we changed the hostname, the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname.

    I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

        (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

    After changing the listener.ora I was able to start the listener using:

        lsnrctl start

    I also had to reconfigure the EM database control.  I first deconfigured it using the command:

        emca -deconfig dbcontrol db -repos drop

    This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command:

        emca -config dbcontrol db -repos create

    This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot.

    Disabling the Database on the FMW Machine

    I set up the database to start automatically by creating a service called "dbora".  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command:

        chkconfig --del dbora

    Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don't run it, it won't cost me anything. I can now close the FMW machine down and take a snapshot.

    Creating a New Domain

    The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.

    Gotchas in Snapshotting

    VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration.

    The Sky's the Limit

    We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember what machine holds state and what are the consequences of taking a snapshot.

    Read the article

  • How many users can be in an AD LDS group?

    - by ixe013
    Microsoft published the recommended maximum limits for users in an Active Directory group. It basically says:

    Starting with Windows Server 2003, the ability to replicate discrete changes to linked multivalued properties was introduced as a technology called Linked Value Replication (LVR).

    and

    This allows the number of group memberships to exceed the former recommended limit of 5,000 for Windows 2000 or Windows Server 2003 at a forest functional level of Windows 2000.

    Given the replication metadata below, can anybody tell me what is the maximum number of users an AD LDS group can hold?

        Getting 'CN=Member,CN=Schema,CN=Configuration,CN={67B333FE-ADB4-430D-AAEE-D4CCE4B98A2E}' metadata... 23 entries.

        AttID  Ver  Loc.USN  Originating DSA                       Org.USN  Org.Time/Date
        =====  ===  =======  ===============                       =======  =============
        0      1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        3      1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20001  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20002  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        2001e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20020  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20021  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20032  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200a9  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200c2  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200da  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200e2  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        200e7  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        20119  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        2014e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        201cc  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90001  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90094  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90095  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        900aa  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        90177  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        9027f  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49
        9030e  1    95       8ba30efb-9aa4-4e55-8f7c-268e3dcc536b  95       2012-07-17 14:25:49

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. So how did this happen and what does it mean for Microsoft's efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a "Microsoft Is Dead" story, then don't read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading.

    What Led Microsoft To The $900 Million Write-Down

    Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should've been ready to post the specifications of both devices that night. Microsoft should've had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea what the interest was in each device. They could also have used this data to make a better estimate for the number of units to have available for the launch and beyond. They also lost out on taking advantage of the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard to anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached. Push for as many pre-orders as possible and build excitement for the launch.

    Actual Launch (Surface RT): By this time all excitement was gone from the initial announcement, except for the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device and now they are lowering it to what it should have been to start with. The $349 price is within the range I suggested it should be at before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted to (this applies more to the Surface Pro). There should have also been a place where users could order any additional add-ons that they wanted to buy (covers, extra power supplies, etc.). Marketing was better and the dancing "Click In" commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and built-in kickstand. They could have also compared it to some of the other mid-range tablets if they had not overpriced it to begin with.

    Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications the best of breed and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking in features that should have been there from day one. This did not help to enhance a new user's experience any.
    ** I will admit that the applications that were data driven were first class citizens, and that makes it even more perplexing why MS could knock it out of the park with the Weather, Travel, Finance, Bing, etc. apps and fail so miserably on the core applications users would use the most on a tablet.

    Desktop on Tablet: The desktop just seems so out of place on the tablet. I understand it was needed for Office, but think it would have been better to not have the desktop in Windows RT, and instead open up the Office applications in full screen mode, in a desktop shell (same goes for IE11). That way the user wouldn't realize they are leaving Metro and going to the desktop. The other option would have been to just not include Office on Windows RT devices. Instead they could have made awesome Windows Store Apps for Word, Excel, OneNote and PowerPoint. In addition, they could have made the stock Mail, People, and Calendar applications contain all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under "Change PC Settings" made Windows RT seem unfinished and rushed to market.

    What Can Be Done To Make Windows RT Based Tablets Better (At least in my opinion)

    - Either eliminate the desktop altogether from Windows RT or at least make the user experience better by hiding the fact the user is running Office/IE in the desktop. Personally I'd like them to totally get rid of it and just make awesome Windows Store Application versions of Word, Excel, PowerPoint & OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be Windows Store App versions of Office, but I still think it is a good idea.
    - Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VM on their microSD card or create symbolic links to get this to work properly. Most consumers would not be able to do this. Then users get frustrated when they run out of room on their main storage because nothing is automatically saved to their microSD card when saved to libraries. This is a major bug that needs to be fixed, otherwise Microsoft's selling point of having a microSD slot is worthless.
    - Allow users to uninstall and re-install any of the Office products that come with the Surface. That way people can free up storage space by uninstalling the Office applications they do not need. Everyone's needs are different, so make the options flexible. Don't take up storage space for applications the user will not use.
    - Make the core applications the "Cream of the Crop" Windows App Store applications. They should set the bar for all other Store applications.
    - Improve performance as much as possible; if it seems to be sluggish on a tablet, consumers will not buy it.
    - Price the next line of Surface products very aggressively to undercut not only the iPad but also Android low end tablets (Nook, Kindle Fire, Nexus, etc.).
    - Give developers incentives to write quality applications for the devices. Don't reward developers for cranking out cookie cutter, low quality applications. I'd even suggest Microsoft consider implementing some new store certification guidelines to stop these types of applications being published.
    - Allow users to easily move the recovery disk partition between their microSD card and main storage.
    My Predictions for the Surface RT and Windows RT

    I honestly think that even with all the missteps MS has made since the announcement of the Surface product line, they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fan boy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple's butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew and that is why whenever anyone asks me about my Surface, I love showing it off and recommend it.

    The 6 keys to gaining market share in the tablet market are:

    1. Aggressive pricing by both Microsoft and their OEMs.
    2. Good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough).
    3. Marketing, Marketing, Marketing from both Microsoft and their OEMs (need more ads showing why Windows based tablets are better than iPads and Android tablets).
    4. Getting Windows tablets in retail stores all over, and giving sales people incentives to sell them. Consumers like to try electronics out before they buy them, and most will listen to what the sales person suggests. Microsoft needs sales people in retail stores directing people to buy Windows based tablets over iPads and Android tablets. I think the Microsoft Stores within Best Buy are a good start, but they also need to get prominent displays in Walmart, Target, etc.
    5. Release a smaller form factor Surface. Hopefully the 8"-10" next generation Surface is not a rumor.
    6. Make "Surface" the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way that iPad is for Apple.

    Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You'll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you'll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up

    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser's cache won't actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you've installed CCleaner or a similar program and run it every day with the default settings, you're actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs

    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won't actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don't bother.

    Open the Disk Defragmenter and Manually Defragment

    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you're still opening the Disk Defragmenter every week and clicking the Defragment button, you don't need to do this — Windows is doing it for you unless you've told it not to run on a schedule. Modern computers with solid-state drives don't have to be defragmented at all.

    Disable Your Pagefile to Increase Performance

    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn't have much memory and it's running slow, it's probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can't be trusted to manage a pagefile and won't use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it's true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig

    Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has.
    (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It's just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used.

    Clean Your Prefetch To Increase Startup Speed

    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application's .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you'll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won't be preloaded, but Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there's no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth

    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster

    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn't smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won't really do anything. If you have little memory, changing this setting may force Windows to push programs you're using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory

    Windows does things, such as creating scheduled system restore points, when you step away from your computer.
    It waits until your computer is "idle" so it won't slow your computer and waste your time while you're using it. Running the "Rundll32.exe advapi32.dll,ProcessIdleTasks" command forces Windows to perform all of these tasks while you're using the computer. This is completely pointless and won't help free memory or anything like that — all you're doing is forcing Windows to slow your computer down while you're using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don't start running and interfere with the benchmark.

    Delay or Disable Windows Services

    There's no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those "Vista Capable" PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won't see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from "Automatic" to "Automatic (Delayed Start)". By default, the Delayed Start option just starts services two minutes after the last "Automatic" service starts. Setting services to Delayed Start won't really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The "Delayed Start" feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.

    Read the article

  • windows 8.1 does not boot after CHKDSK command

    - by sepehr
    I had a problem with Windows 8.1 high disk usage, so in order to solve it I searched forums. A voted answer suggested using the CHKDSK command as follows: run the command prompt as administrator and type the code snippet below:

        CHKDSK /f /v /b C: (I can not remember accurately)

    CHKDSK printed: "can not run CHKDSK right now, would you like to schedule C drive to be checked next time windows starts? (Y/N)". My response was "Y", please!

    Following this suggestion not only didn't solve my problem as expected but also added another one! The next time the system booted, after Windows authentication I could only see a black screen and a mouse pointer. So I forced the system to shut down and tried to start Windows again. This time, Windows got stuck in Scanning and Repairing at 22% for around 3 hours, so I got tired and forced a shutdown again.

    Is CHKDSK the source of the Scanning and Repairing problem, or are they discrete issues? Is there any hope of overcoming this problem without re-installing Windows? Can anyone else run CHKDSK on newer versions of Windows without problems? Is CHKDSK effective but inefficient? (Does it finally get to the end, but take a long time?) If yes, how much time does it take? I also have Linux Ubuntu 14.04 installed alongside Windows.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there's no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred.

    Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out – don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For sake of space-saving, that can happen using a single ledger entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question – "how do you enforce the non-negative balance rule" – then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue. Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry. No funky states. Again, the ledger write *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all members of the replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens that one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is immediately going to ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already waiting outside your door (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would both be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to various accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
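    A rough pymongo sketch of the ledger flow described in this post; the database, collection and field names and the simple compensation rule are assumptions, not something prescribed above:

        from datetime import datetime, timezone
        from pymongo import MongoClient

        client = MongoClient()                # assumes a reachable mongod / replica-set primary
        ledger = client["bank"]["ledger"]     # hypothetical database and collection names

        def transfer(from_acct, to_acct, amount):
            # One immutable ledger entry per transfer: {Timestamp, From, To, Amount}.
            entry = {"ts": datetime.now(timezone.utc), "from": from_acct,
                     "to": to_acct, "amount": amount, "validated": False}
            return ledger.insert_one(entry).inserted_id

        def balance(acct):
            # Sum credits minus debits for one account across the whole ledger.
            pipeline = [
                {"$match": {"$or": [{"from": acct}, {"to": acct}]}},
                {"$group": {"_id": None, "total": {"$sum": {
                    "$cond": [{"$eq": ["$to", acct]}, "$amount",
                              {"$multiply": ["$amount", -1]}]}}}},
            ]
            rows = list(ledger.aggregate(pipeline))
            return rows[0]["total"] if rows else 0

        def validate(entry_id, from_acct):
            # If the transfer overdrew the account, write a compensating entry with the
            # same timestamp so the pair nets to zero; either way, mark it validated.
            if balance(from_acct) < 0:
                original = ledger.find_one({"_id": entry_id})
                ledger.insert_one({"ts": original["ts"], "from": original["to"],
                                   "to": original["from"], "amount": original["amount"],
                                   "compensates": entry_id})
            ledger.update_one({"_id": entry_id}, {"$set": {"validated": True}})

    Because reads default to the primary, the validation step here queries the writable instance, which is exactly the property the post relies on.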

    Read the article

  • Consulting: Organizing site/environment documentation for customers?

    - by ewwhite
    Over time, I've taken on consulting and contract engineering work for various clients. More recently, customers are asking for certain types of documentation. These are small businesses and typically do not have dedicated technical staff. Within a single company, Wiki/Confluence/Sharepoint, etc. all make sense as a central repository for documentation and environment information. I struggle with finding a consistent method to deliver the following information to discrete customers. I'm shooting for a process that's more portable, secure and elegant than a simple spreadsheet or the dreaded binder full of outdated information. Important IP addresses, DHCP scope, etc. Network diagram (if needed). Administrative usernames and passwords and management URLs. Software license keys. Support contracts and warranty information. Vendor support contacts and instructions. I know there are other consultants here. Any suggestions or tips on maintaining documentation across multiple environments in a customer-friendly format? How do you do it?

    Read the article

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when they are both at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz. So it doesn't seem like a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only this kind of full-screen glitches. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance! Specs: ASUS P8Z77-M Intel Core i5-3570K (with Intel HD 4000 Graphics) 2x4 GB AMD Performance Edition memory Corsair Force 3 Series Rev. B 120GB SSD Maxtor 200GB HD Lite-On DVD-RW Antec 350 Watt PSU 64-bit Windows 7 Professional 2x Iiyama ProLite E2208HDS display

    Read the article

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for so guidance on that may be just as valuable as an answer. I'm looking for a way to visually compare activity of two applications (in this case a webserver with php communicating with the system or mysql or network devices, etc) such that I can compare the performance at a glance. I know there are tools to generate data dumps from benchmarks for apache and some available for php for tracing that you can dump and analyse but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long did it take, how much memory did it consume, how can that be represented visually in a call stack) and present it graphically as if it were a topology or layered visual with different elements of system calls occupying different layers. A typical visual may consist of (e.g. using swim diagrams as just one analogy): Network (details here relevant to network diagnostics) | ^ back out v | Linux (details here related to firewall/routing diagnostics) ^ back to network | | V ^ back to system Apache (details here related to web request) | | ^ response to V | apache PHP (etc) PHP---------->other accesses to php files/resources----- | ^ v | MySQL (total time) MySQL | ^ V | Each call listed + time + tables hit/record returned My aim would be to be able to 'inspect' a request/range of requests over a period of time to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any such work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity vs time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks! FROM COMMENTS: > XHProf in conjunction with other programs such as Perconna toolkit > (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for mySQL run apache > with httpd -X & (Single threaded debug mode and background) then > attach with strace -> kcache grind
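    Building on the strace suggestion from the comments, a minimal way to capture per-call timing from a running Apache/PHP worker looks roughly like this (the PID and output path are placeholders):

        # run apache single-threaded in the foreground so there is one clean target
        httpd -X &
        # follow forks, timestamp every syscall and record how long each one took
        strace -f -tt -T -p <apache_pid> -o /tmp/apache.trace
        # or just collect an aggregate table of time spent per syscall
        strace -c -f -p <apache_pid>

    That yields the raw "what was called and for how long" data; turning it into the layered visual described above still needs a separate rendering step (for example kcachegrind for profiler output it understands, or custom tooling for the swim-lane view).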

    Read the article

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked-out. The computers/servers on this network are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered-off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation. I currently have no solution in place, I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatch worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that. How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network? There is no human access. No matter how large the mismatch, the server(s) must attempt to correct itself. Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
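    For what it's worth, the requirements above map onto a handful of standard ntpd knobs; a rough ntp.conf sketch for the designated internal time servers (the server and peer names and the orphan stratum are placeholders to adapt):

        # upstream servers, used whenever the uplink happens to be up
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        # orphan mode: the isolated network keeps agreeing with itself during blackouts
        tos orphan 7
        peer internal-ntp-2.example.local
        # never exit on a huge offset after a long blackout, and never step the clock
        tinker panic 0
        tinker step 0

    One caveat on the "slewed, never stepped" requirement: ntpd caps slewing at 500 ppm, so working off a 600 s offset by slew alone takes on the order of two weeks, not hours; if that is too slow, the usual compromise is to allow a single step at daemon start (ntpd -g) and slew from there.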

    Read the article

  • AMD processors witn graphics card bundled [closed]

    - by shybovycha
    Sorry for posting this question here - I don't know which StackExchange site it belongs on. I've heard that AMD makes processors with the video card bundled on the chip. These processors are supposed to perform about as well as ordinary processors paired with a discrete video card, while using less power and giving off less heat. Some googling pointed me to the "AMD A-series" processors, which are said to be built with the technology I described above. On the other hand, AMD drivers are released infrequently and their quality is not great. I am a game developer and web developer, so I need a powerful processor, a capable graphics card and a lot of RAM on board (to build a sample Grails application, for example, or to create some 3D models in Maya/Cinema4D). Still, I want long battery life, so power usage is fairly critical for me. So, my questions are: is there really a processor technology like the one I've described, and which series uses it (if it exists)? Which processor would suit a laptop best for the purposes mentioned above: an AMD one, or an i5/i7 with an nVidia graphics card?

    Read the article

  • Hard Drive problem: is it the SATA controller or the HDD itself?

    - by Drooling_Sheep
    I have a Samsung 1.5TB hard drive hooked up to an ECS H55H-I mini-ITX motherboard. I have XBMC 10 (modified Ubuntu 10.04) installed for use as an HTPC. The hard drive encounters occasional errors during normal use which cause it to be remounted read-only. I have updated the BIOS on the motherboard, changed the SATA cable and moved it to different ports on the motherboard, installed and re-installed the OS (including different versions of XBMC and generic ubuntu), all to no avail. I recently ran tests both with badblocks -sv and smartctl -t long. Both reported no errors. This makes me think the motherboard or SATA controller is probably the issue. Does anyone know of any further tests I can do to help narrow this down? The processor is a Core i3. I forget the model number but it's one of the 32nm ones with on-package graphics. There's no discrete video card or optical drive. The power supply is a 150W Rosewill (pretty sure) that came with the case.
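    A couple of checks that can help split "the disk" from "the controller or cable" (standard smartmontools and kernel tools; /dev/sda is a placeholder for the Samsung drive):

        # kernel-side view: link resets, "exception Emask" and "failed command" lines
        # tend to implicate the controller, cable or power rather than the platters
        dmesg | grep -iE 'ata|sata|exception'
        # drive-side view: the SMART error log and attributes
        smartctl -l error /dev/sda
        smartctl -A /dev/sda    # a rising UDMA_CRC_Error_Count usually points at cable/controller

    If SMART stays clean while the kernel log keeps filling with ATA exceptions, the controller side of the link becomes the stronger suspect.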

    Read the article

  • 11.2 RAC: recovering when the ASM Diskgroup holding OCR and Votedisk is corrupted

    - by Liu Maclean
    ????????Oracle Allstarts??????????ocr?votedisk?ASM diskgroup??11gR2 RAC cluster?????????,????«?11gR2 RAC???ASM DISK Path????»??????,??????CRS??????11.2??ASM???????, ????????????”crsctl start crs -excl -nocrs “; ?????????,??ASM????ocr?????votedisk?????,??11.2????ocr?votedisk???ASM?,?ASM???????ocr?votedisk,?????ocr?votedisk????????cluter??????;???????????CRS????,?????diskgroup??????????,?????????????????? ??:?????????????????ASM LUN DISK,???OCR?????,????????4??????????,???????$GI_HOME,?????????;????votedisk?? ????: ??dd????ocr?votedisk??diskgroup header,??diskgroup corruption: 1. ??votedisk? ocr?? [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE a853d6204bbc4feabfd8c73d4c3b3001 (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE a5b37704c3574f0fbf21d1d9f58c4a6b (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 36e5c51ff0294fc3bf2a042266650331 (/dev/asm-diski) [SYSTEMDG] 4. ONLINE af337d1512824fe4bf6ad45283517aaa (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3c4a349e2e304ff6bf64b2b1c9d9cf5d (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s). su - grid [grid@vrh1 ~]$ ocrconfig -showbackup PROT-26: Oracle Cluster Registry backup locations were retrieved from a local copy vrh1 2012/08/09 01:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup00.ocr vrh1 2012/08/08 21:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup01.ocr vrh1 2012/08/08 17:59:55 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup02.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/day.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/week.ocr PROT-25: Manual backups for the Oracle Cluster Registry are not available 2. ??????????clusterware ,OHASD crsctl stop has -f 3. GetAsmDH.sh ==> GetAsmDH.sh?ASM disk header????? ????????,????????asm header [grid@vrh1 ~]$ ./GetAsmDH.sh ############################################ 1) Collecting Information About the Disks: ############################################ SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 03:28:13 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. SQL> Connected. SQL> SQL> SQL> SQL> SQL> SQL> SQL> 1 0 /dev/asm-diske 1 1 /dev/asm-diskd 2 0 /dev/asm-diskb 2 1 /dev/asm-diskc 2 2 /dev/asm-diskf 3 0 /dev/asm-diskh 3 1 /dev/asm-diskg 3 2 /dev/asm-diski 3 3 /dev/asm-diskj 3 4 /dev/asm-diskk SQL> SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options -rw-r--r-- 1 grid oinstall 1048 Aug 9 03:28 /tmp/HC/asmdisks.lst ############################################ 2) Generating asm_diskh.sh script. ############################################ -rwx------ 1 grid oinstall 666 Aug 9 03:28 /tmp/HC/asm_diskh.sh ############################################ 3) Executing asm_diskh.sh script to generate dd dumps. 
############################################ -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_3.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_4.dd ############################################ 4) Compressing dd dumps in the next format: (asm_dd_header_all_.tar) ############################################ /tmp/HC/dsk_1_0.dd /tmp/HC/dsk_1_1.dd /tmp/HC/dsk_2_0.dd /tmp/HC/dsk_2_1.dd /tmp/HC/dsk_2_2.dd /tmp/HC/dsk_3_0.dd /tmp/HC/dsk_3_1.dd /tmp/HC/dsk_3_2.dd /tmp/HC/dsk_3_3.dd /tmp/HC/dsk_3_4.dd ./GetAsmDH.sh: line 81: compress: command not found ls: /tmp/HC/*.Z: No such file or directory [grid@vrh1 ~]$ 4. ??dd ?? ??ocr?votedisk??diskgroup [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskh bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00423853 seconds, 247 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskg bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0045179 seconds, 232 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diski bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00469976 seconds, 223 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskj bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00344262 seconds, 305 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskk bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0053518 seconds, 196 MB/s 5. ????????????HAS [root@vrh1 ~]# crsctl start has CRS-4123: Oracle High Availability Services has been started. 
????ocr?votedisk??diskgroup??,??CSS???????,???????: alertvrh1.log [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:41.207 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:56.240 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:11.284 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:26.305 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:41.328 ocssd.log 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmReadDiscoveryProfile: voting file discovery string(/dev/asm*) 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmvDDiscThread: using discovery string /dev/asm* for initial discovery 2012-08-09 03:40:26.662: [ SKGFD][1078700352]Discovery with str:/dev/asm*: 2012-08-09 03:40:26.662: [ SKGFD][1078700352]UFS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskb: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskj: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskh: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskc: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskd: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diske: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskg: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diski: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskk: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]OSS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xdf22a0 from lib :UFS:: for disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xf412a0 from lib :UFS:: for disk :/dev/asm-diskb: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf3a680 from lib :UFS:: for disk :/dev/asm-diskj: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf93da0 from lib :UFS:: for disk :/dev/asm-diskh: 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvDiskVerify: Successful discovery of 0 disks 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvFindInitialConfigs: No voting files found 2012-08-09 03:40:26.667: [ CSSD][1078700352](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds ?????ocr?votedisk??diskgroup?????: 1. ?-excl -nocrs ????cluster,??????ASM?? ????CRS [root@vrh1 vrh1]# crsctl start crs -excl -nocrs CRS-4123: Oracle High Availability Services has been started. 
CRS-2672: Attempting to start 'ora.mdnsd' on 'vrh1' CRS-2676: Start of 'ora.mdnsd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'vrh1' CRS-2676: Start of 'ora.gpnpd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vrh1' CRS-2672: Attempting to start 'ora.gipcd' on 'vrh1' CRS-2676: Start of 'ora.cssdmonitor' on 'vrh1' succeeded CRS-2676: Start of 'ora.gipcd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'vrh1' CRS-2672: Attempting to start 'ora.diskmon' on 'vrh1' CRS-2676: Start of 'ora.diskmon' on 'vrh1' succeeded CRS-2676: Start of 'ora.cssd' on 'vrh1' succeeded CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2672: Attempting to start 'ora.ctssd' on 'vrh1' CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2676: Start of 'ora.ctssd' on 'vrh1' succeeded CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'vrh1' CRS-2676: Start of 'ora.asm' on 'vrh1' succeeded 2.???ocr?votedisk??diskgroup,??compatible.asm???11.2: [root@vrh1 vrh1]# su - grid [grid@vrh1 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 04:16:58 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options SQL> create diskgroup systemdg high redundancy disk '/dev/asm-diskh','/dev/asm-diskg','/dev/asm-diski','/dev/asm-diskj','/dev/asm-diskk' ATTRIBUTE 'compatible.rdbms' = '11.2', 'compatible.asm' = '11.2'; 3.?ocr backup???ocr??ocrcheck??: [root@vrh1 ~]# ocrconfig -restore /g01/11.2.0/grid/cdata/vrh-cluster/backup00.ocr [root@vrh1 ~]# ocrcheck Status of Oracle Cluster Registry is as follows : Version : 3 Total space (kbytes) : 262120 Used space (kbytes) : 3180 Available space (kbytes) : 258940 ID : 1238458014 Device/File Name : +systemdg Device/File integrity check succeeded Device/File not configured Device/File not configured Device/File not configured Device/File not configured Cluster registry integrity check succeeded Logical corruption check succeeded 4. ????votedisk ,??????????: [grid@vrh1 ~]$ crsctl replace votedisk +SYSTEMDG CRS-4602: Failed 27 to add voting file 2e4e0fe285924f86bf5473d00dcc0388. CRS-4602: Failed 27 to add voting file 4fa54bb0cc5c4fafbf1a9be5479bf389. CRS-4602: Failed 27 to add voting file a109ead9ea4e4f28bfe233188623616a. CRS-4602: Failed 27 to add voting file 042c9fbd71b54f5abfcd3ab3408f3cf3. CRS-4602: Failed 27 to add voting file 7b5a8cd24f954fafbf835ad78615763f. Failed to replace voting disk group with +SYSTEMDG. CRS-4000: Command Replace failed, or completed with errors. ????????ASM???,???ASM: SQL> alter system set asm_diskstring='/dev/asm*'; System altered. SQL> create spfile from memory; File created. 
SQL> startup force mount; ORA-32004: obsolete or deprecated parameter(s) specified for ASM instance ASM instance started Total System Global Area 283930624 bytes Fixed Size 2227664 bytes Variable Size 256537136 bytes ASM Cache 25165824 bytes ASM diskgroups mounted SQL> show parameter spfile NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string /g01/11.2.0/grid/dbs/spfile+AS M1.ora [grid@vrh1 trace]$ crsctl replace votedisk +SYSTEMDG CRS-4256: Updating the profile Successful addition of voting disk 85edc0e82d274f78bfc58cdc73b8c68a. Successful addition of voting disk 201ffffc8ba44faabfe2efec2aa75840. Successful addition of voting disk 6f2a25c589964faabf6980f7c5f621ce. Successful addition of voting disk 93eb315648454f25bf3717df1a2c73d5. Successful addition of voting disk 3737240678964f88bfbfbd31d8b3829f. Successfully replaced voting disk group with +SYSTEMDG. CRS-4256: Updating the profile CRS-4266: Voting file(s) successfully replaced 5. ??has??,??cluster????: [root@vrh1 ~]# crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE 85edc0e82d274f78bfc58cdc73b8c68a (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE 201ffffc8ba44faabfe2efec2aa75840 (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 6f2a25c589964faabf6980f7c5f621ce (/dev/asm-diski) [SYSTEMDG] 4. ONLINE 93eb315648454f25bf3717df1a2c73d5 (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3737240678964f88bfbfbd31d8b3829f (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s). [root@vrh1 ~]# crsctl stat res -t -------------------------------------------------------------------------------- NAME TARGET STATE SERVER STATE_DETAILS -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.BACKUPDG.dg ONLINE ONLINE vrh1 ora.DATA.dg ONLINE ONLINE vrh1 ora.LISTENER.lsnr ONLINE ONLINE vrh1 ora.LSN_MACLEAN.lsnr ONLINE ONLINE vrh1 ora.SYSTEMDG.dg ONLINE ONLINE vrh1 ora.asm ONLINE ONLINE vrh1 Started ora.gsd OFFLINE OFFLINE vrh1 ora.net1.network ONLINE ONLINE vrh1 ora.ons ONLINE ONLINE vrh1 -------------------------------------------------------------------------------- Cluster Resources -------------------------------------------------------------------------------- ora.LISTENER_SCAN1.lsnr http://www.askmaclean.com 1 ONLINE ONLINE vrh1 ora.cvu 1 OFFLINE OFFLINE ora.oc4j 1 OFFLINE OFFLINE ora.scan1.vip 1 ONLINE ONLINE vrh1 ora.vprod.db 1 ONLINE OFFLINE 2 ONLINE OFFLINE ora.vrh1.vip 1 ONLINE ONLINE vrh1 ora.vrh2.vip 1 ONLINE INTERMEDIATE vrh1 FAILED OVER

    Read the article

  • Failed to install ntp via apt-get in Debian

    - by Petah
    When trying to install ntp (because my server clock is wrong), it just pukes this massive error. Any idea how to fix this? root@pan-prodweb01:~# apt-get install ntp Reading package lists... Done Building dependency tree Reading state information... Done ntp is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 75 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y Setting up ntp (1:4.2.6.p2+dfsg-1+b1) ... insserv: warning: script 'S99obmaua' missing LSB tags and overrides insserv: warning: script 'S99obmscheduler' missing LSB tags and overrides insserv: warning: script 'obmscheduler' missing LSB tags and overrides insserv: warning: script 'obmaua' missing LSB tags and overrides insserv: There is a loop between service stop-bootlogd and mountnfs if started insserv: loop involving service mountnfs at depth 8 insserv: loop involving service nfs-common at depth 7 insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmaua depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting obmscheduler depends on stop-bootlogd and therefore on system facility `$all' which can not be true! 
[... the same pair of insserv warnings about obmaua and obmscheduler repeats many more times ...] insserv: Max recursions depth 99 reached insserv: loop involving service tomcat6 at depth 9 insserv: There is a loop between service stop-bootlogd and mountall if started insserv: loop involving service mountall at depth 4 insserv: loop involving service checkfs at depth 3 insserv: loop involving service mountnfs-bootclean at depth 10 insserv: loop involving service networking at depth 6 insserv: There is a loop between service stop-bootlogd and checkroot if started insserv: loop involving service checkroot at depth 5 insserv: loop involving service hostname at depth 4 insserv: loop involving service kbd at depth 12 insserv: loop involving service module-init-tools at depth 6 insserv: There is a loop between service stop-bootlogd and mountoverflowtmp if started insserv: loop involving service mountoverflowtmp at depth 9 insserv: loop involving service mountall-bootclean at depth 8 insserv: There is a loop at service obmaua if started insserv: There is a loop between service obmaua and ifupdown-clean if started insserv: loop involving service ifupdown-clean at depth 6 insserv: There is a loop at service stop-bootlogd if started insserv: loop involving service obmaua at depth 1 insserv: loop involving service mtab at depth 7 insserv: exiting now without changing boot order! update-rc.d: error: insserv rejected the script header dpkg: error processing ntp (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ntp localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1)
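    The underlying failure is insserv refusing to rebuild the boot ordering because the third-party init scripts obmscheduler and obmaua carry no LSB header, which in turn makes the ntp postinst fail. A sketch of the header block that would need to go near the top of /etc/init.d/obmscheduler (and likewise obmaua); the Required-Start/Stop and runlevel values here are assumptions to adjust for what those scripts actually depend on:

        ### BEGIN INIT INFO
        # Provides:          obmscheduler
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: obmscheduler service (placeholder description)
        ### END INIT INFO

    After adding the headers, dpkg --configure -a (or simply re-running apt-get install ntp) should let the interrupted configuration finish.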

    Read the article

  • IIS7 Failure after installing Advanced Logging

    - by Guy Harwood
I came across a nasty issue when I installed the Advanced Logging feature for IIS7 via the Web Platform Installer on my Windows 2008 Server. Basically, after installation and reboot none of my sites were working and they returned 503 – Internal Server Error. Snooping around in the Event Viewer I found the following error reported by the W3SVC… The Module DLL C:\Program Files\IIS\Advanced Logging\AdvancedLoggingModule.dll failed to load. The data is the error. Even though the DLLs are there, IIS is not picking them up. I managed to find a fix via Google that involves editing the applicationHost.config file in the C:\Windows\System32\inetsrv\config\ directory. 1. Copy AdvancedLoggingModule.dll and ClientLoggingHandler.dll to %windir%\system32 (C:\Windows\system32 on a default setup). 2. Locate the file C:\Windows\System32\inetsrv\config\applicationHost.config and make a backup, then open it in a text editor (I recommend Notepad++). 3. Search for the following two lines (mine are located on line 570): <add name="ClientLoggingHandler" image="%ProgramFiles%\IIS\Advanced Logging\ClientLoggingHandler.dll" /> <add name="AdvancedLoggingModule" image="%ProgramFiles%\IIS\Advanced Logging\AdvancedLoggingModule.dll" /> and alter them to: <add name="ClientLoggingHandler" image="%windir%\system32\ClientLoggingHandler.dll" /> <add name="AdvancedLoggingModule" image="%windir%\system32\AdvancedLoggingModule.dll" /> 4. Open a command prompt and run iisReset. 5. All sites should now be working.

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table, I have received another question on statistics from a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user, and it can be anywhere from a very low value up to 100%. Let us do a small experiment, with auto update statistics left on, to examine a very large table and check whether the statistics created by default are sampled or not. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Million Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 1000000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO Now let us observe the result of DBCC SHOW_STATISTICS. The resultset shows that, for a large dataset, the statistics are definitely sampled. The percentage of sampling is based on data distribution as well as the kind of data in the table. Before dropping the table, let us first check the size of the table. The size of the table is 35 MB. Now, let us run the above code with a smaller number of rows. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Hundred Thousand Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 100000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO You can see that Rows Sampled is now the same as the number of rows in the table. In this case, the sample rate is 100%. Before dropping the table, let us also check the size of the table. The size of the table is less than 4 MB. Let us compare the result sets just for reference. Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB The reason behind the sampling in Test 1 is that the data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, sampling does not happen.
Sampling helps reduce excessive data scans; however, it can also reduce the accuracy of the statistics. Please note that this is just a sample test and it cannot be claimed as a benchmark; the results can differ on different machines. There is a lot of other information that could be included on this subject, and I will write a detailed post covering it very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • Inspire Geek Love with These Hilarious Geek Valentines

    - by Eric Z Goodnight
Want to send some Geek Love to that special someone? Why not do it with these elementary school throwback valentines, and win their heart this upcoming Valentine’s day—the geek way! Read on to see the simple method to make your own custom Valentines, as well as download a set of eleven ready-made ones any geek guy or gal should be delighted to get. It’s amore! How to Make Custom Valentines A size we’ve used for all of our Valentines is a 3” x 4” at 150 dpi. This is fairly low resolution for print, but makes a great graphic to email. With your new image open, navigate to Edit > Fill and fill your background layer with a rich, red color (or whatever appeals to you.) By setting “Use” to “Foreground Color” as shown above, you’ll paint whatever foreground color you have in your color picker. Press T to select the text tool. Set a few text objects, using whatever fonts appeal to you. Pixel fonts, like this one, are freely downloadable, and we’ve already shared a great list of Valentines fonts. Copy an image from the internet if you’re confident your sweetie won’t mind a bit of fair use of copyrighted imagery. If they do mind, find yourself some great Creative Commons images. Press Ctrl+T to do a free transform on your image, sizing it to whatever dimensions work best for your design. Right-click your newly added image layer in your panel and choose “Blending Effects” to pick a Layer Style. “Stroke” with this setting adds a black line around your image. Also turning on “Outer Glow” with this setting puts a dark black shadow around the top and bottom (and sides, although they are hidden). Add some more text. Double entendre is recommended. Click and hold down on the “Rectangle Tool” to get the “Custom Shape Tool.” The custom shape tool has useful vector shapes built into it. Find the “Shape” dropdown in the menu to find the heart image. Click and drag to create a vector heart shape in your image. Your layers panel is where you can change the color, if it happens to use the wrong one at first. Click the color swatch in your panel, highlighted in blue above. Ctrl+T will transform your vector heart. You can also use it to rotate, if you like. Add some details, like this Power or Standby symbol, which can be found in symbol fonts, taken from images online, or drawn by hand. Your Valentine is now ready to be saved as a JPG or PNG and sent to the object of your affection! Keep reading to see a list of 11 downloadable How-To Geek Valentines, including this one and the three from the header image. Download The HTG Set of Valentines Download the HTG Geek Valentines (ZIP) When he’s not wooing ladies with Valentines cards, you can email the author at [email protected] with your Photoshop and Graphics questions. Your questions may be featured in a future How-To Geek article!

    Read the article

  • How to Manage AutoArchive in Outlook 2010

    - by Mysticgeek
If you want to keep Outlook 2010 clean and running faster, one method is to set up the AutoArchive feature. Today we show you how to configure and manage the feature in Outlook 2010. Using AutoArchive allows you to manage space in your mailbox or on the email server by moving older items to another location on your hard drive. Enable and Configure AutoArchive in Outlook 2010 AutoArchive is not enabled by default. To turn it on, click on the File tab to access Backstage View, then click on Options. When the Outlook Options window opens, click on Advanced, then the AutoArchive Settings button. The AutoArchive window opens and you’ll notice everything is grayed out. Check the box next to Run AutoArchive every… Note: If you select the Permanently delete old items option, mails will not be archived. Now you can choose the settings for how you want to manage the AutoArchive feature. Select how often you want it to run, whether to prompt before the feature runs, where to move items, and other actions you want to happen during the process. After you’ve made your selections click OK. Manually Configure Individual Folders For more control over individual folders that are archived, right-click on the folder and click on Properties. Click on the AutoArchive tab and choose the settings you want to change for that folder. For instance you might not want to archive a certain folder, or you might want to move archived data to a specific folder. If you want to manually archive and back up an item, click on the File tab, Cleanup Tools, then Archive. Click the radio button next to Archive this folder and all subfolders. Select the folder you want to archive. In this example we want to archive this folder to a specific location of its own. The .pst files are saved in your Documents folder and if you need to access them at a later time you can. After you’ve set up AutoArchive you can find items in the archived files. In the Navigation Pane expand the Archives folder in the list. You can then view and access your messages. You can also access them by clicking the File tab \ Open then Open Outlook Data File. Then you can browse to the archived file you want to open. Archiving old emails is a good way to help keep a nice clean mailbox, help speed up your Outlook experience, and save space on the email server. The other nice thing is you can configure your email archives and specific folders to meet your email needs.

    Read the article

  • Can't install git on Ubuntu 12.10

    - by Lucas Windir
I'm following these instructions to install git on my laptop: http://git-scm.com/download/linux When I do: $ sudo apt-get install git-core This is what my terminal shows: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? E: Some packages could not be authenticated lucas@lucas-Inspiron-N5050:~$ sudo apt-get install git-core Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? 
y Err http://py.archive.ubuntu.com/ubuntu/ quantal/main liberror-perl all 0.17-1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git-man all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git amd64 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git-core all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberror-perl_0.17-1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.10.4-1ubuntu1_amd64.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-core_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? How could I install git on Ubuntu 12.10? I can't even do it from the Ubuntu Software Center. Thanks in advance!
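The repeated "Something wicked happened resolving 'py.archive.ubuntu.com'" errors above mean the configured Ubuntu mirror hostname is not resolving at all, and the earlier authentication warnings suggest the package lists are stale. A hedged sketch of one way around it, assuming the main archive is reachable from this machine (the sed substitution and mirror name are illustrative; check /etc/apt/sources.list before editing):

# Back up the sources list, then point it at the main archive instead of the dead mirror
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's|py.archive.ubuntu.com|archive.ubuntu.com|g' /etc/apt/sources.list

# Refresh the package lists (this also re-validates the signatures) and install git
sudo apt-get update
sudo apt-get install git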

    Read the article

  • Stuck with Apache2

    - by Gundars Meness
I can't finish the Apache2 install, and I also cannot remove it. It has blocked my dpkg, so now I can't get any installations in or out. I even tried a distro upgrade, but it still left dpkg broken. How do I fix this and get a normal Apache2 running? Just for the heck of it: gundars@SR528:~$ sudo apt-get remove apache2-common Reading package lists... Done Building dependency tree Reading state information... Done Package 'apache2-common' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up apache2.2-common (2.2.22-6ubuntu2) ... ERROR: Site default does not exist! dpkg: error processing apache2.2-common (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2-mpm-prefork: apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2-mpm-prefork (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: apache2.2-common apache2-mpm-prefork E: Sub-process /usr/bin/dpkg returned an error code (1) sudo apt-get -f install apache2 apache2.2-common apache2-mpm-prefork [sudo] password for gundars: Reading package lists... Done Building dependency tree Reading state information... Done apache2 is already the newest version. apache2-mpm-prefork is already the newest version. apache2.2-common is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 4 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up apache2.2-common (2.2.22-6ubuntu2) ... ERROR: Site default does not exist! dpkg: error processing apache2.2-common (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2-mpm-prefork: apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2-mpm-prefork (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2: apache2 depends on apache2-mpm-worker (= 2.2.22-6ubuntu2) | apache2-mpm-prefork (= 2.2.22-6ubuntu2) | apache2-mpm-event (= 2.2.22-6ubuntu2) | apache2-mpm-itk (= 2.2.22-6ubuntu2); however: Package apache2-mpm-worker is not installed. Package apache2-mpm-prefork is not configured yet. Package apache2-mpm-event is not installed. Package apache2-mpm-itk is not installed. apache2 depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libapache2-mod-php5: libapache2-mod-php5 depends on apache2-mpm-prefork (>> 2.0.52) | apache2-mpm-itk; however: Package apache2-mpm-prefork is not configured yet. Package apache2-mpm-itk is not installed. libapache2-mod-php5 depends on apache2.2-common; however: Package apache2.2-common is not configured yet. 
No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: error processing libapache2-mod-php5 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: apache2.2-common apache2-mpm-prefork apache2 libapache2-mod-php5 E: Sub-process /usr/bin/dpkg returned an error code (1)
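The recurring "ERROR: Site default does not exist!" is what actually aborts the apache2.2-common post-installation script; everything after that is dependency fallout. A hedged sketch of one way to unblock dpkg, assuming the missing piece is the default virtual host file that the postinst tries to enable (the file name and path follow the stock apache2.2 layout on Ubuntu 12.x and should be checked against this system first):

# Recreate a placeholder default site so the postinst's enable step can succeed
sudo touch /etc/apache2/sites-available/default
sudo a2ensite default

# Re-run the interrupted configuration and pull in anything still missing
sudo dpkg --configure -a
sudo apt-get -f install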

    Read the article

  • URGENT: Firefox circular-dependency hell - Linux Mint 13 (based on Ubuntu 12.04)

    - by Tyler J Fisher
Having difficulty re-installing Firefox after an install meant to resolve places.sqlite issues. It appears that I'm trapped in circular dependency hell. Need to resolve firefox dependency hell to attempt to resolve Tomcat6 project dependencies (don't ask), ASAP. Have been trying for hours. What I've done (brief) 1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support 2) sudo apt-get update 3) sudo apt-get install firefox firefox-globalmenu firefox-gnome-support 4) sudo apt-get -f install Potential error sources: Found in (sudo apt-get install firefox firefox-globalmenu firefox-gnome-support) dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack): trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11 So, /usr/lib/firefox/extensions doesn't even EXIST! Deleted /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701 as per recommendations. Errors were encountered while processing: /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Outputs: 1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support me@machine ~ $ sudo apt-get purge firefox-gnome-support firefox firefox-globalmenu Reading package lists... Done Building dependency tree Reading state information... Done Package firefox is not installed, so not removed The following packages will be REMOVED: firefox-globalmenu* firefox-gnome-support* 2 not fully installed or removed. 0 upgraded, 0 newly installed, 2 to remove and 38 not upgraded. After this operation, 460 kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed. (Reading database ... 192642 files and directories currently installed.) Removing firefox-globalmenu ... Removing firefox-gnome-support ... 3) me@machine ~ $ sudo apt-get install firefox firefox-globalmenu firefox-gnome-support Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: latex-xft-fonts The following NEW packages will be installed: firefox firefox-globalmenu firefox-gnome-support 0 upgraded, 3 newly installed, 0 to remove and 38 not upgraded. Need to get 0 B/24.8 MB of archives. After this operation, 54.3 MB of additional disk space will be used. (Reading database ... dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed. (Reading database ... 192619 files and directories currently installed.) Unpacking firefox (from .../firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack): trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11 Selecting previously unselected package firefox-globalmenu. Unpacking firefox-globalmenu (from .../firefox-globalmenu_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... Selecting previously unselected package firefox-gnome-support. Unpacking firefox-gnome-support (from .../firefox-gnome-support_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... 
Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for mintsystem ... Errors were encountered while processing: /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 4) sudo apt-get -f install 0 upgraded, 0 newly installed, 0 to remove, and 38 not upgraded Ideas? Tomcat6 only deploys my web application successfully in Firefox, not Chrome, so I'm really hoping to resolve this dependency issue.
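The dpkg error above is a plain file conflict: the Firefox 18 nightly package and Mint's mint-search-addon both claim /usr/lib/firefox/extensions. A hedged sketch of two ways to get past it (the package name and cached .deb path are taken from the output above; verify them locally before running):

# Option 1: remove the Mint add-on that owns the conflicting path, then reinstall Firefox
sudo apt-get remove mint-search-addon
sudo apt-get install firefox firefox-globalmenu firefox-gnome-support

# Option 2: let dpkg overwrite the conflicting file from the already-downloaded package
sudo dpkg -i --force-overwrite /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb
sudo apt-get -f install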

    Read the article

< Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >