Search Results

Search found 8501 results on 341 pages for 'status'.


  • JSR Updates and EC Meeting Tuesday @ 15:00 PST

    - by Heather VanCura
    JSR 310, Date and Time API, has moved to JCP 2.9 (the first JCP 2.9 JSR!). JSR 236, Concurrency Utilities for Java EE, has published an Early Draft Review; this review ends 15 December 2012. Tomorrow, Tuesday 20 November, is the last public EC meeting of 2012, and the first EC meeting with the merged EC. The second hour of this meeting will be open to the public at 3:00 PM PST. The agenda includes the JSR 355 (EC merge) implementation report, a JSR 358 (JCP.next.3) status report, a JCP 2.8 status update and the community audit program. Details are below. We hope you will join us, but if you cannot attend, not to worry--the recording and materials will also be public on the JCP.org multimedia page.
    Meeting details
    Date & Time: Tuesday November 20, 2012, 3:00 - 4:00 pm PST
    Location: Teleconference
    Dial-in: +1 (866) 682-4770 (US); Conference code: 627-9803; Security code: 52732 ("JCPEC" on your phone handset). For global access numbers see http://www.intercall.com/oracle/access_numbers.htm or +1 (408) 774-4073
    WebEx: Browse for the meeting from https://jcp.webex.com (no registration required; enter your name and email address). Password: JCPEC
    Agenda: JSR 355 (the EC merge) implementation report; JSR 358 (JCP.next.3) status report; JCP 2.8 status update and community audit program; Discussion/Q&A
    Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real time will still be able to participate.

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside the rest of our infrastructure, so that if there are issues in our data center we can post updates and our users will be able to access them. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity, so we're unable to have a redirect using nginx or apache. We'd like to do this with a short-TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I set up the CNAME and go to that URL I get a Google broken-robot 404 page. I figure I must need to let Google know it should associate traffic for www.xyz.com and app.xyz.com with the blog served up by status.xyz.com, but I can't see anywhere to do this in Blogger. Does anyone know if this is possible?
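    For illustration, the plan described above would look roughly like this in the xyz.com zone file (the hostnames are the ones from the question, the ghs.google.com target is the usual Blogger custom-domain CNAME, and the 300-second TTL is an assumed "short TTL"):
    ; illustrative zone fragment - values assumed, not taken from the question
    status  300  IN  CNAME  ghs.google.com.    ; the Blogger-hosted status blog
    www     300  IN  CNAME  status.xyz.com.    ; flipped to the status page during maintenance
    app     300  IN  CNAME  status.xyz.com.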

    Read the article

  • Named my RPi 512MB @jerpi_bilbo

    - by hinkmond
    To keep our multiple Raspberry Pi boards apart from each other, I've now named my RPi Model B w/512MB: "jerpi_bilbo", which stands for Java Embedded Raspberry Pi - Bilbo (named after the Hobbit from the J.R.R. Tolkien stories). I also set up a Twitter account for him. You can follow him at: @jerpi_bilbo. He's self-tweeting, manually prompted so far (using Java Embedded 7.0 and the twitter4j Java library). Works great! I'm setting him up to be automated self-tweeting soon, so watch for that... Here's a pointer to the open source twitter4j Java library: download here. Just unzip and extract out the twitter4j-core-2.2.6.jar and put it on your Java Embedded classpath. Here's how @jerpi_bilbo uses it to Tweet with his Java Embedded runtime:
    import twitter4j.*;
    import java.io.*;

    public final class Tweet {
        public static void main(String[] args) {
            String statusStr = null;
            if ((args.length > 0) && (args[0] != null)) {
                statusStr = args[0];
            } else {
                statusStr = new String("Hello World!");
            }
            // Create new instance of the Twitter class
            Twitter twitter = new TwitterFactory().getInstance();
            try {
                Status status = twitter.updateStatus(statusStr);
                System.out.println("Successfully updated the status to: " + status.getText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    That's all you need. Java Embedded rocks the RPi! And, @jerpi_bilbo is alive... Hinkmond
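    To try the Tweet class above on the Pi, something like the following should work, assuming the jar and Tweet.java sit in the current directory and OAuth credentials are configured in a twitter4j.properties file on the classpath (the paths and status text here are made up):
    javac -cp twitter4j-core-2.2.6.jar Tweet.java
    java -cp .:twitter4j-core-2.2.6.jar Tweet "Hello from jerpi_bilbo!"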

    Read the article

  • Building KPIs to monitor your business – It's not really about the Technology

When I have discussions with people about Business Intelligence, one of the questions that inevitably comes up is about building KPIs and how to accomplish that. At a technical level the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg hidden beneath the surface upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means.
From a technical perspective the KPI consists primarily of the following items:
Actual Value - the actual data point that is being measured. An example would be something like the amount of sales.
Target Value - the target goal for the KPI. This is a number that the Actual Value can be measured against. An example would be $10,000 in monthly sales.
Target Indicator Range - the definition of the ranges that determine what type of indicator the user will see when comparing the Actual Value to the Target Value. Most often this is a stoplight, but it can be any indicator that shows a status to the user at a glance. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = within 5% of target in either direction; Green Light = more than 5% above the Target Value.
Status/Trend Indicator - an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of the ranges for your status indicator, but also what value the number is compared against.
So now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to them, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different tools in the Microsoft BI stack that you can use to expose your KPIs to the business, including Performance Point, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology, and the right one is based largely on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI.
The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Often, when discussing KPIs with a business contact who has not been exposed to them, the discussion also tends to be a session educating the business user about what a KPI is and what goes into its definition. The majority of the time the business user has an idea of what their actual values are and has been tracking those numbers for some time, generally in Excel and all manually.
So they will know the amount of sales last month, along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the "What?" and "How much?" questions. When you are talking about the target value you are asking the question "Is this number good or bad?" Typically the user will know whether the value is good or bad, but most of the time they are not able to quantify what good or bad is. Their response is usually something like "I just know." Because they have been watching the sales quantity for years, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break business people of this habit. One of the fears, generally, is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month's sales and the target amount for this month would have any idea whether $10,000 in sales is good or not.
Here is where a key point about KPIs needs to be communicated both to the business user and to any user who might be viewing the results of that KPI: the KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question "Why?" Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potentially negative or neutral KPI.
So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the "Why?" question. The main question to ask here is, "Okay, you see the indicator and you need to discover why the number is what it is; where do you go?" The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a marketing report showing a decrease in the promotional discounts for the month, a pricing report showing the reduction of prices on older models, an inventory report showing the discontinuation of a particular product line, or a memo announcing the end of a large affiliate partnership. The answers to the question "Why?" are never as simple as a single indicator value. Being able to quickly get to this information is all about designing how a user accesses the KPIs and then how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs but also has links to some of the common reports that they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they will have a link to a status report, created by the user, that pulls together all the data about the KPI in a summary format the boss can review.
So, some of the key things to think about when building or evaluating KPIs for your organization:
Technology should not be the driving factor.
KPIs are of little value without some indicator of whether a value is good, bad or neutral.
KPIs only answer the "Is this number good/bad?" question.
Make sure the ability to drill into the "Why" of a KPI is close at hand and relevant to the user who is viewing the KPI.
The KPI, when defined properly, is a key business tool for monitoring business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can at times feel arduous, the payoff in the end can far outweigh the costs. Among the benefits of going through this process are a better understanding of the key metrics for an organization and how those metrics are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think these are benefits to any organization, regardless of the technology or the implementation.
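As a minimal illustration of the technical side described above, here is a C# sketch of the KPI data points and the 5% stoplight ranges from the example; the class and property names are invented for this sketch rather than taken from any particular BI tool:
public enum Indicator { Red, Yellow, Green }

public class Kpi
{
    public decimal ActualValue { get; set; }    // e.g. this month's sales
    public decimal TargetValue { get; set; }    // e.g. the $10,000 monthly sales target
    public decimal PreviousValue { get; set; }  // e.g. last month's sales, for the trend indicator

    // Stoplight: Red = more than 5% below target, Green = more than 5% above, otherwise Yellow.
    public Indicator Status
    {
        get
        {
            decimal variance = (ActualValue - TargetValue) / TargetValue;
            if (variance < -0.05m) return Indicator.Red;
            if (variance > 0.05m) return Indicator.Green;
            return Indicator.Yellow;
        }
    }

    // Trend against the previous period: positive means an improvement on last period.
    public decimal Trend
    {
        get { return (ActualValue - PreviousValue) / PreviousValue; }
    }
}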

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction
The way I previously created messages to send to the GovTalk service was to build the request with an XmlDocument. While this worked, it left a number of problems; not least that for every message a special function would need to be created. This is OK in the short term, but the biggest cost in any software project is maintenance, and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce.
The Sweet Chilli Sauce
The request to search for a company based on its number is as follows;
<GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>NumberSearch</Class>
      <Qualifier>request</Qualifier>
      <TransactionID>1</TransactionID>
    </MessageDetails>
    <SenderDetails>
      <IDAuthentication>
        <SenderID>????????????????????????????????</SenderID>
        <Authentication>
          <Method>CHMD5</Method>
          <Value>????????????????????????????????</Value>
        </Authentication>
      </IDAuthentication>
    </SenderDetails>
  </Header>
  <GovTalkDetails>
    <Keys/>
  </GovTalkDetails>
  <Body>
    <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
      <PartialCompanyNumber>99999999</PartialCompanyNumber>
      <DataSet>LIVE</DataSet>
      <SearchRows>1</SearchRows>
    </NumberSearchRequest>
  </Body>
</GovTalkMessage>
This is the XML that we send to the GovTalk Service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request and names the request, and the body, which gives the detail of the company we are looking for.
The Chilli
What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However, there are common properties in all the messages that we send to Companies House. These properties are as follows:
SenderId – the id of the person sending the message
SenderPassword – the password associated with the Id
TransactionId – unique identifier for the message
AuthenticationValue – authenticates the request
Because these properties are unique to the Companies House message, and because they are shared by all messages, they are perfect candidates for a base class.
The class is as follows;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace CompanyHub.Services
{
    public class GovTalkRequest
    {
        public GovTalkRequest()
        {
            try
            {
                SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId");
                SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword");
                TransactionId = DateTime.Now.Ticks.ToString();
                AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId));
            }
            catch (System.Exception ex)
            {
                throw ex;
            }
        }

        /// <summary>
        /// returns the Sender ID to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderID { get; set; }

        /// <summary>
        /// returns the password to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderPassword { get; set; } // end SenderPassword

        /// <summary>
        /// Transaction Id - uses the Time and Date converted to Ticks
        /// </summary>
        public String TransactionId { get; set; } // end TransactionId

        /// <summary>
        /// calculates the authentication value that will be used when communicating with the GovTalk Service
        /// </summary>
        public String AuthenticationValue { get; set; } // end AuthenticationValue property

        /// <summary>
        /// encodes password(s) using MD5
        /// </summary>
        /// <param name="clearPassword"></param>
        /// <returns></returns>
        public static String EncodePassword(String clearPassword)
        {
            MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider();
            byte[] hashedBytes;
            UTF32Encoding encoder = new UTF32Encoding();
            hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword));
            String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower();
            return result;
        }
    }
}
There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, Password and Transaction Id. Each message inherits from this object. So for the Company Number Search, in addition to the properties above, we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set, so this can be set in the constructor – and the SearchRows. Again all are set as properties, with the SearchRows and DataSet initialized in the constructor.
public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable
{
    /// <summary>
    ///
    /// </summary>
    public CompanyNumberSearchRequest() : base()
    {
        DataSet = "LIVE";
        SearchRows = 1;
    }

    /// <summary>
    /// Company Number to search against
    /// </summary>
    public String PartialCompanyNumber { get; set; }

    /// <summary>
    /// What DataSet should be searched for the company
    /// </summary>
    public String DataSet { get; set; }

    /// <summary>
    /// How many rows should be returned
    /// </summary>
    public int SearchRows { get; set; }

    public void Dispose()
    {
        PartialCompanyNumber = String.Empty;
        DataSet = "LIVE";
        SearchRows = 1;
    }
}
As well as inheriting from our base class, I have also implemented IDisposable – not just because it is plain good practice to dispose of objects when coding, but because it also gives us more versatility when using the object.
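Because the request object is disposable, it can be created and populated inside a using block; a quick sketch of the class above in use (the company number here is made up):
// Sketch only - shows the CompanyNumberSearchRequest class above in use.
using (CompanyNumberSearchRequest searchRequest = new CompanyNumberSearchRequest())
{
    searchRequest.PartialCompanyNumber = "01234567";   // hypothetical number
    // ... hand searchRequest to the request-building pipeline described below ...
}   // Dispose() resets the object here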
There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service: create a request; send the request; check the status; and, if OK, get the results of the request. I've implemented each of these stages within a static class called Toolbox – which also means I don't need to create an instance of the class to use it.
When making a request there are three stages: get the template for the message; serialize the object representing the message; and transform the serialized object using a predefined XSLT file. Each of my templates is defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. To make the code as reusable as possible I build the full 'path' within the GetRequest method.
requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);
So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource.
asm = Assembly.GetExecutingAssembly();
sr = asm.GetManifestResourceStream(requestFile);
Once retrieved, this can be returned to the calling function and we now have a stream of XSLT defining the message. Time now to serialize the request to create the other side of this message.
// Serialize object containing Request, load into XML Document
t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
First off we need the type of the object, so we make a call to the GetType method of the object containing the message properties. Next we need a MemoryStream, an XmlSerializer and an XmlTextWriter, so these are initialized. The object is serialized by calling the Serialize method of the serializer object. The result of that is then converted into a MemoryStream, and that MemoryStream is then converted into a string.
ConvertByteArrayToString
This is a fairly simple function which uses an ASCIIEncoding object, found within the System.Text namespace, to convert an array of bytes into a string.
public static String ConvertByteArrayToString(byte[] bytes)
{
    System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
    return enc.GetString(bytes);
}
I only put it into a function because I will be using it in various places.
The Sauce
When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message.
For the CompanyNumberSearch the XSLT file is as follows;
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <EnvelopeVersion>1.0</EnvelopeVersion>
      <Header>
        <MessageDetails>
          <Class>NumberSearch</Class>
          <Qualifier>request</Qualifier>
          <TransactionID>
            <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/>
          </TransactionID>
        </MessageDetails>
        <SenderDetails>
          <IDAuthentication>
            <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID>
            <Authentication>
              <Method>CHMD5</Method>
              <Value>
                <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/>
              </Value>
            </Authentication>
          </IDAuthentication>
        </SenderDetails>
      </Header>
      <GovTalkDetails>
        <Keys/>
      </GovTalkDetails>
      <Body>
        <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
          <PartialCompanyNumber>
            <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/>
          </PartialCompanyNumber>
          <DataSet>
            <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/>
          </DataSet>
          <SearchRows>
            <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/>
          </SearchRows>
        </NumberSearchRequest>
      </Body>
    </GovTalkMessage>
  </xsl:template>
</xsl:stylesheet>
The outer two tags define that this is an XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House.
xslt = new XslCompiledTransform();
resultStream = new MemoryStream();
writer = new XmlTextWriter(resultStream, Encoding.ASCII);
doc = new XmlDocument();
The transformation requires an XmlTextWriter to write the XML (writer) and a stream to place the transformed output into (resultStream). The XML will be loaded into an XmlDocument object (doc) prior to the transformation.
// create XSLT Template
xslTemplate = Toolbox.GetRequest(Template);
xslTemplate.Seek(0, SeekOrigin.Begin);
templateReader = XmlReader.Create(xslTemplate);
xslt.Load(templateReader);
I have stored all the templates as a series of embedded resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file.
/// <summary>
/// Gets the framework XML which makes the request
/// </summary>
/// <param name="RequestFile"></param>
/// <returns></returns>
public static Stream GetRequest(String RequestFile)
{
    String requestFile = String.Empty;
    Stream sr = null;
    Assembly asm = null;
    try
    {
        requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);
        asm = Assembly.GetExecutingAssembly();
        sr = asm.GetManifestResourceStream(requestFile);
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        asm = null;
    }
    return sr;
} // end public static Stream GetRequest
We first take the template name and expand it to include the full namespace to the embedded resource. I like to keep all my schemas in the same directory, and so the namespace reflects this. The rest is the default namespace for the project.
Then we get the currently executing assembly (which will contain the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream to load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object.
We convert the object containing the message properties into XML by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following;
t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing it the type of the object being serialized. The serializer writes to a memory stream, and that is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument.
serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
xmlRequest = new XmlTextReader(ms);
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
doc.LoadXml(GovTalkRequest);
Time to transform the XML to construct the full request.
xslt.Transform(doc, writer);
resultStream.Seek(0, SeekOrigin.Begin);
request = Toolbox.ConvertByteArrayToString(resultStream.ToArray());
So that creates the full request to be sent to Companies House.
Sending the request
So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House service.
Configuration within an Azure project
There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article, but the following is a summary. Configuration is defined in two files within the parent project: *.csdef, which contains the definition of each configuration setting, and the .cscfg file, which contains the values.
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="CompanyHub.Host">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  </WebRole>
  <WebRole name="CompanyHub.Services">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="8080" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="SenderId"/>
      <Setting name="SenderPassword" />
      <Setting name="GovTalkUrl"/>
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="CompanyHub.Worker">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>
Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
There are four configuration settings here, but at the moment we are interested in the second to fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceConfiguration.cscfg file;
<?xml version="1.0"?>
<ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="CompanyHub.Host">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Services">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="SenderId" value="UserID"/>
      <Setting name="SenderPassword" value="Password"/>
      <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/>
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Worker">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters, we can now transmit the request. This is done by 'POST'ing a stream of XML to the Companies House servers.
govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
request = WebRequest.Create(govTalkUrl);
request.Method = "POST";
request.ContentType = "text/xml";
writer = new StreamWriter(request.GetRequestStream());
writer.WriteLine(RequestMessage);
writer.Close();
We use the WebRequest object to send the request. Set the method of sending to 'POST' and the type of data to text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House.
Did the Request Work Part I – Getting the response
Having sent a request, we now need the result of that request.
response = request.GetResponse();
reader = response.GetResponseStream();
result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));
The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the result comes in the form of a stream which we convert into a string.
Did the Request Work Part II – Translating the Response
Much like XSLT and XML were used to create the original request, they can be used to extract the response; by deserializing the result we create an object that contains the response.
Did it work?
It would be really great if everything worked all the time.
Of course, if it did, then I don't suppose people would pay me and others the big bucks so that our programmes do not:
a) Collapse in a heap (this is an area of memory)
b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA gets a manned moon/mars mission set up, unlikely to happen)
d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers - note writers; Hollywood executives have no imagination and, judging by the recent output of that town, have turned plagiarism into an art form)
e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened)
So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document.
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xsl:template match="/">
    <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Status>
        <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/>
      </Status>
      <Text>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/>
      </Text>
      <Location>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/>
      </Location>
      <Number>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/>
      </Number>
      <Type>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/>
      </Type>
    </GovTalkStatus>
  </xsl:template>
</xsl:stylesheet>
The only thing different from the previous XSL files is the reference to two namespaces, ev and gt. These are defined at the top of the GovTalk response;
xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
If we do not put these references into the XSLT template then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity.
encoder = new ASCIIEncoding();
ms = new MemoryStream(encoder.GetBytes(statusXML));
serializer = new XmlSerializer(typeof(GovTalkStatus));
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
messageStatus = (GovTalkStatus)serializer.Deserialize(ms);
We set up a serialization object using the object type containing the error state and pass to it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message. All we need to do is check the status. If there is an error then we can flag it.
If not, then we extract the results and pass them as an object back to the calling function. We do this by, guess what, defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a Company Number Search is;
<?xml version="1.0" encoding="us-ascii"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev">
  <xsl:template match="/">
    <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CompanyNumber>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/>
      </CompanyNumber>
      <CompanyName>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/>
      </CompanyName>
    </CompanySearchResult>
  </xsl:template>
</xsl:stylesheet>
and the object definition is;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace CompanyHub.Services
{
    public class CompanySearchResult
    {
        public CompanySearchResult()
        {
            CompanyNumber = String.Empty;
            CompanyName = String.Empty;
        }

        public String CompanyNumber { get; set; }
        public String CompanyName { get; set; }
    }
}
Our entire code to make the call, send the request and interpret the results is;
String request = String.Empty;
String response = String.Empty;
GovTalkStatus status = null;
fault = null;
try
{
    using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
    {
        requestObj.PartialCompanyNumber = CompanyNumber;
        request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
        response = Toolbox.SendGovTalkRequest(request);
        status = Toolbox.GetMessageStatus(response);
        if (status.Status.ToLower() == "error")
        {
            fault = new HubFault() { Message = status.Text };
        }
        else
        {
            Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
        }
    }
}
catch (FaultException<ArgumentException> ex)
{
    fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message };
}
catch (System.Exception ex)
{
    fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message };
}
finally
{
}
Wrap up
So there we have it – a reusable set of functions to send requests to, and interpret XML results from, an internet-based service. The code is reusable, with a little change, with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create the various objects for the result and the message sent, plus the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
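Pulling the snippets above together, a Toolbox.CreateRequest helper along these lines would cover the create-a-request stage. This is a sketch assembled from the fragments shown in the article (it assumes the usual System.IO, System.Text, System.Xml, System.Xml.Serialization, System.Xml.Xsl and System.Reflection namespaces) rather than the author's exact implementation:
public static String CreateRequest(Object Obj, String Template)
{
    // Load the XSLT template from the embedded resources.
    XslCompiledTransform xslt = new XslCompiledTransform();
    Stream xslTemplate = Toolbox.GetRequest(Template);
    xslTemplate.Seek(0, SeekOrigin.Begin);
    using (XmlReader templateReader = XmlReader.Create(xslTemplate))
    {
        xslt.Load(templateReader);
    }

    // Serialize the request object to XML and load it into a document.
    XmlSerializer serializer = new XmlSerializer(Obj.GetType());
    MemoryStream ms = new MemoryStream();
    XmlTextWriter xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
    serializer.Serialize(xmlTextWriter, Obj);
    ms = (MemoryStream)xmlTextWriter.BaseStream;
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(Toolbox.ConvertByteArrayToString(ms.ToArray()));

    // Transform the serialized object with the template to produce the full GovTalk request.
    MemoryStream resultStream = new MemoryStream();
    XmlTextWriter writer = new XmlTextWriter(resultStream, Encoding.ASCII);
    xslt.Transform(doc, writer);
    resultStream.Seek(0, SeekOrigin.Begin);
    return Toolbox.ConvertByteArrayToString(resultStream.ToArray());
}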

    Read the article

  • How to handle business rules with a REST API?

    - by Ciprio
    I have a REST API to manage a booking system. I'm trying to work out how to manage this situation: a customer can book a time slot, so a TimeSlot resource is created and linked to a Person resource. In order to create the link between a time slot and a person, the REST client sends a POST request to the TimeSlot resource. But if too many people have booked the same slot (let's say the limit is 5 links), it must be impossible to create more associations. How can I handle this business restriction? Can I return a 404 status code with a JSON response detailing the error with a status code? Is that a RESTful approach?
    EDIT: As suggested below, I used status 409 Conflict in addition to a JSON response detailing the error.
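    For illustration, the approach from the edit could produce a response along these lines (the JSON error shape is an assumption, not something prescribed by REST or by the question):
    HTTP/1.1 409 Conflict
    Content-Type: application/json

    {
      "error": {
        "code": "TIMESLOT_FULL",
        "message": "This time slot already has the maximum of 5 bookings."
      }
    }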

    Read the article

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
    Here are some errors that I am getting: 1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close. root@p:/# firestarter ** (firestarter:5890): WARNING **: The connection is closed (firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. ^C 2) Also I cannot apt-get install sudo root@p:/# apt-get install sudo Reading package lists... Done Building dependency tree Reading state information... Done sudo is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded. 9 not fully installed or removed. Need to get 0 B/76.3 MB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? 
Y /bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found (Reading database ... 495741 files and directories currently installed.) Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ... dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory dpkg: warning: subprocess old pre-removal script returned error exit status 2 dpkg - trying script from the new package instead ... dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 2 dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Preparing to replace linux-image-3.2.0-25-generic 3.2.0-25.40 (using .../linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb) ... dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.prerm): No such file or directory dpkg: warning: subprocess old pre-removal script returned error exit status 2 dpkg - trying script from the new package instead ... dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 2 dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.postinst): No such file or directory dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Internet doesn't work by default

    - by Adam Martinez
    After upgrading to Precise, I am required to run 'sudo dhclient eth0' in a terminal in order to get the internet to work. Everything worked perfectly fine on Oneiric, so It's really puzzling me. I'm thinking it could possibly be something with the kernel, but who knows. Output of dmesg: [ 0.247891] system 00:01: [io 0x0290-0x030f] has been reserved [ 0.247896] system 00:01: [io 0x0290-0x0297] has been reserved [ 0.247901] system 00:01: [io 0x0880-0x088f] has been reserved [ 0.247908] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.247931] pnp 00:02: [dma 4] [ 0.247935] pnp 00:02: [io 0x0000-0x000f] [ 0.247939] pnp 00:02: [io 0x0080-0x0090] [ 0.247943] pnp 00:02: [io 0x0094-0x009f] [ 0.247947] pnp 00:02: [io 0x00c0-0x00df] [ 0.248033] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) [ 0.248125] pnp 00:03: [io 0x0070-0x0073] [ 0.248187] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.248205] pnp 00:04: [io 0x0061] [ 0.248260] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.248277] pnp 00:05: [io 0x00f0-0x00ff] [ 0.248292] pnp 00:05: [irq 13] [ 0.248348] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.248583] pnp 00:06: [io 0x03f0-0x03f5] [ 0.248588] pnp 00:06: [io 0x03f7] [ 0.248597] pnp 00:06: [irq 6] [ 0.248601] pnp 00:06: [dma 2] [ 0.248690] pnp 00:06: Plug and Play ACPI device, IDs PNP0700 (active) [ 0.248998] pnp 00:07: [io 0x03f8-0x03ff] [ 0.249008] pnp 00:07: [irq 4] [ 0.249122] pnp 00:07: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.249479] pnp 00:08: [io 0x0400-0x04bf] [ 0.249584] system 00:08: [io 0x0400-0x04bf] has been reserved [ 0.249591] system 00:08: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.249628] pnp 00:09: [mem 0xffb80000-0xffbfffff] [ 0.249690] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) [ 0.250049] pnp 00:0a: [mem 0xe0000000-0xefffffff] [ 0.250167] system 00:0a: [mem 0xe0000000-0xefffffff] has been reserved [ 0.250173] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.250302] pnp 00:0b: [mem 0x000f0000-0x000fffff] [ 0.250307] pnp 00:0b: [mem 0x7ff00000-0x7fffffff] [ 0.250311] pnp 00:0b: [mem 0xfed00000-0xfed000ff] [ 0.250316] pnp 00:0b: [mem 0x0000046e-0x0000056d] [ 0.250320] pnp 00:0b: [mem 0x7fee0000-0x7fefffff] [ 0.250324] pnp 00:0b: [mem 0x00000000-0x0009ffff] [ 0.250328] pnp 00:0b: [mem 0x00100000-0x7fedffff] [ 0.250332] pnp 00:0b: [mem 0xfec00000-0xfec00fff] [ 0.250336] pnp 00:0b: [mem 0xfed14000-0xfed1dfff] [ 0.250341] pnp 00:0b: [mem 0xfed20000-0xfed9ffff] [ 0.250345] pnp 00:0b: [mem 0xfee00000-0xfee00fff] [ 0.250349] pnp 00:0b: [mem 0xffb00000-0xffb7ffff] [ 0.250353] pnp 00:0b: [mem 0xfff00000-0xffffffff] [ 0.250357] pnp 00:0b: [mem 0x000e0000-0x000effff] [ 0.250409] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d] because it overlaps 0000:01:00.0 BAR 6 [mem 0x00000000-0x0007ffff pref] [ 0.250419] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d disabled] because it overlaps 0000:03:00.0 BAR 6 [mem 0x00000000-0x0000ffff pref] [ 0.250430] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d disabled] because it overlaps 0000:04:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref] [ 0.250524] system 00:0b: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.250530] system 00:0b: [mem 0x7ff00000-0x7fffffff] has been reserved [ 0.250536] system 00:0b: [mem 0xfed00000-0xfed000ff] has been reserved [ 0.250541] system 00:0b: [mem 0x7fee0000-0x7fefffff] could not be reserved [ 0.250547] system 00:0b: [mem 0x00000000-0x0009ffff] could not be reserved [ 
0.250552] system 00:0b: [mem 0x00100000-0x7fedffff] could not be reserved [ 0.250558] system 00:0b: [mem 0xfec00000-0xfec00fff] could not be reserved [ 0.250563] system 00:0b: [mem 0xfed14000-0xfed1dfff] has been reserved [ 0.250568] system 00:0b: [mem 0xfed20000-0xfed9ffff] has been reserved [ 0.250574] system 00:0b: [mem 0xfee00000-0xfee00fff] has been reserved [ 0.250579] system 00:0b: [mem 0xffb00000-0xffb7ffff] has been reserved [ 0.250585] system 00:0b: [mem 0xfff00000-0xffffffff] has been reserved [ 0.250590] system 00:0b: [mem 0x000e0000-0x000effff] has been reserved [ 0.250596] system 00:0b: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.250614] pnp: PnP ACPI: found 12 devices [ 0.250617] ACPI: ACPI bus type pnp unregistered [ 0.250624] PnPBIOS: Disabled by ACPI PNP [ 0.288725] PCI: max bus depth: 1 pci_try_num: 2 [ 0.288786] pci 0000:01:00.0: BAR 6: assigned [mem 0xfb000000-0xfb07ffff pref] [ 0.288792] pci 0000:00:01.0: PCI bridge to [bus 01-01] [ 0.288797] pci 0000:00:01.0: bridge window [io 0xa000-0xafff] [ 0.288804] pci 0000:00:01.0: bridge window [mem 0xf8000000-0xfbffffff] [ 0.288811] pci 0000:00:01.0: bridge window [mem 0xd0000000-0xdfffffff 64bit pref] [ 0.288820] pci 0000:00:1c.0: PCI bridge to [bus 02-02] [ 0.288825] pci 0000:00:1c.0: bridge window [io 0x9000-0x9fff] [ 0.288833] pci 0000:00:1c.0: bridge window [mem 0xfdb00000-0xfdbfffff] [ 0.288840] pci 0000:00:1c.0: bridge window [mem 0xfd800000-0xfd8fffff 64bit pref] [ 0.288851] pci 0000:03:00.0: BAR 6: assigned [mem 0xfde00000-0xfde0ffff pref] [ 0.288856] pci 0000:00:1c.4: PCI bridge to [bus 03-03] [ 0.288861] pci 0000:00:1c.4: bridge window [io 0xd000-0xdfff] [ 0.288869] pci 0000:00:1c.4: bridge window [mem 0xfd700000-0xfd7fffff] [ 0.288876] pci 0000:00:1c.4: bridge window [mem 0xfde00000-0xfdefffff 64bit pref] [ 0.288887] pci 0000:04:00.0: BAR 6: assigned [mem 0xfdc00000-0xfdc1ffff pref] [ 0.288891] pci 0000:00:1c.5: PCI bridge to [bus 04-04] [ 0.288897] pci 0000:00:1c.5: bridge window [io 0xb000-0xbfff] [ 0.288904] pci 0000:00:1c.5: bridge window [mem 0xfdd00000-0xfddfffff] [ 0.288911] pci 0000:00:1c.5: bridge window [mem 0xfdc00000-0xfdcfffff 64bit pref] [ 0.288920] pci 0000:00:1e.0: PCI bridge to [bus 05-05] [ 0.288926] pci 0000:00:1e.0: bridge window [io 0xc000-0xcfff] [ 0.288933] pci 0000:00:1e.0: bridge window [mem 0xfda00000-0xfdafffff] [ 0.288940] pci 0000:00:1e.0: bridge window [mem 0xfd900000-0xfd9fffff 64bit pref] [ 0.288971] pci 0000:00:01.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.288979] pci 0000:00:01.0: setting latency timer to 64 [ 0.288991] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.288998] pci 0000:00:1c.0: setting latency timer to 64 [ 0.289008] pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.289014] pci 0000:00:1c.4: setting latency timer to 64 [ 0.289030] pci 0000:00:1c.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.289037] pci 0000:00:1c.5: setting latency timer to 64 [ 0.289047] pci 0000:00:1e.0: setting latency timer to 64 [ 0.289054] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.289058] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.289063] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.289067] pci_bus 0000:00: resource 7 [mem 0x000c0000-0x000dffff] [ 0.289072] pci_bus 0000:00: resource 8 [mem 0x7ff00000-0xfebfffff] [ 0.289077] pci_bus 0000:01: resource 0 [io 0xa000-0xafff] [ 0.289081] pci_bus 0000:01: resource 1 [mem 0xf8000000-0xfbffffff] [ 0.289086] pci_bus 0000:01: resource 2 [mem 0xd0000000-0xdfffffff 64bit 
pref] [ 0.289092] pci_bus 0000:02: resource 0 [io 0x9000-0x9fff] [ 0.289096] pci_bus 0000:02: resource 1 [mem 0xfdb00000-0xfdbfffff] [ 0.289101] pci_bus 0000:02: resource 2 [mem 0xfd800000-0xfd8fffff 64bit pref] [ 0.289106] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.289110] pci_bus 0000:03: resource 1 [mem 0xfd700000-0xfd7fffff] [ 0.289115] pci_bus 0000:03: resource 2 [mem 0xfde00000-0xfdefffff 64bit pref] [ 0.289120] pci_bus 0000:04: resource 0 [io 0xb000-0xbfff] [ 0.289124] pci_bus 0000:04: resource 1 [mem 0xfdd00000-0xfddfffff] [ 0.289129] pci_bus 0000:04: resource 2 [mem 0xfdc00000-0xfdcfffff 64bit pref] [ 0.289134] pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] [ 0.289138] pci_bus 0000:05: resource 1 [mem 0xfda00000-0xfdafffff] [ 0.289143] pci_bus 0000:05: resource 2 [mem 0xfd900000-0xfd9fffff 64bit pref] [ 0.289148] pci_bus 0000:05: resource 4 [io 0x0000-0x0cf7] [ 0.289152] pci_bus 0000:05: resource 5 [io 0x0d00-0xffff] [ 0.289157] pci_bus 0000:05: resource 6 [mem 0x000a0000-0x000bffff] [ 0.289161] pci_bus 0000:05: resource 7 [mem 0x000c0000-0x000dffff] [ 0.289166] pci_bus 0000:05: resource 8 [mem 0x7ff00000-0xfebfffff] [ 0.289233] NET: Registered protocol family 2 [ 0.289360] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.289754] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.290351] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.290670] TCP: Hash tables configured (established 131072 bind 65536) [ 0.290674] TCP reno registered [ 0.290680] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.290703] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.290868] NET: Registered protocol family 1 [ 0.290911] pci 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.290932] pci 0000:00:1a.0: PCI INT A disabled [ 0.290956] pci 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.290975] pci 0000:00:1a.1: PCI INT B disabled [ 0.290992] pci 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 [ 0.291012] pci 0000:00:1a.2: PCI INT D disabled [ 0.291031] pci 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.291068] pci 0000:00:1a.7: PCI INT C disabled [ 0.291104] pci 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.291123] pci 0000:00:1d.0: PCI INT A disabled [ 0.291135] pci 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.291155] pci 0000:00:1d.1: PCI INT B disabled [ 0.291166] pci 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.291185] pci 0000:00:1d.2: PCI INT C disabled [ 0.291198] pci 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.291219] pci 0000:00:1d.7: PCI INT A disabled [ 0.291258] pci 0000:01:00.0: Boot video device [ 0.291273] PCI: CLS 4 bytes, default 64 [ 0.291857] audit: initializing netlink socket (disabled) [ 0.291876] type=2000 audit(1336753420.284:1): initialized [ 0.337724] highmem bounce pool size: 64 pages [ 0.337734] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.349241] VFS: Disk quotas dquot_6.5.2 [ 0.349365] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 0.350418] fuse init (API version 7.17) [ 0.350611] msgmni has been set to 1685 [ 0.351179] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.351229] io scheduler noop registered [ 0.351233] io scheduler deadline registered [ 0.351247] io scheduler cfq registered (default) [ 0.351450] pcieport 0000:00:01.0: setting latency timer to 64 [ 0.351502] pcieport 0000:00:01.0: irq 40 for MSI/MSI-X [ 0.351585] 
pcieport 0000:00:1c.0: setting latency timer to 64 [ 0.351639] pcieport 0000:00:1c.0: irq 41 for MSI/MSI-X [ 0.351728] pcieport 0000:00:1c.4: setting latency timer to 64 [ 0.351779] pcieport 0000:00:1c.4: irq 42 for MSI/MSI-X [ 0.351875] pcieport 0000:00:1c.5: setting latency timer to 64 [ 0.351927] pcieport 0000:00:1c.5: irq 43 for MSI/MSI-X [ 0.352094] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.352143] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.352311] intel_idle: MWAIT substates: 0x22220 [ 0.352315] intel_idle: does not run on family 6 model 23 [ 0.352446] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 [ 0.352455] ACPI: Power Button [PWRB] [ 0.352556] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 [ 0.352562] ACPI: Power Button [PWRF] [ 0.352650] ACPI: Fan [FAN] (on) [ 0.355667] thermal LNXTHERM:00: registered as thermal_zone0 [ 0.355673] ACPI: Thermal Zone [THRM] (26 C) [ 0.355750] ERST: Table is not found! [ 0.355753] GHES: HEST is not enabled! [ 0.355898] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.376332] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.376582] isapnp: Scanning for PnP cards... [ 0.709133] Freeing initrd memory: 13792k freed [ 0.729743] isapnp: No Plug & Play device found [ 0.816786] 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.832385] Linux agpgart interface v0.103 [ 0.835605] brd: module loaded [ 0.837138] loop: module loaded [ 0.837452] ata_piix 0000:00:1f.2: version 2.13 [ 0.837473] ata_piix 0000:00:1f.2: PCI INT A -> GSI 19 (level, low) -> IRQ 19 [ 0.837480] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.837546] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.838099] scsi0 : ata_piix [ 0.838253] scsi1 : ata_piix [ 0.839183] ata1: SATA max UDMA/133 cmd 0xf900 ctl 0xf800 bmdma 0xf500 irq 19 [ 0.839192] ata2: SATA max UDMA/133 cmd 0xf700 ctl 0xf600 bmdma 0xf508 irq 19 [ 0.839239] ata_piix 0000:00:1f.5: PCI INT A -> GSI 19 (level, low) -> IRQ 19 [ 0.839246] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.839300] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.839708] scsi2 : ata_piix [ 0.839841] scsi3 : ata_piix [ 0.840301] ata3: SATA max UDMA/133 cmd 0xf200 ctl 0xf100 bmdma 0xee00 irq 19 [ 0.840308] ata4: SATA max UDMA/133 cmd 0xf000 ctl 0xef00 bmdma 0xee08 irq 19 [ 0.840429] pata_acpi 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.840467] pata_acpi 0000:03:00.0: setting latency timer to 64 [ 0.840488] pata_acpi 0000:03:00.0: PCI INT A disabled [ 0.841159] Fixed MDIO Bus: probed [ 0.841205] tun: Universal TUN/TAP device driver, 1.6 [ 0.841210] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 0.841322] PPP generic driver version 2.4.2 [ 0.841515] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.841542] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.841567] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [ 0.841573] ehci_hcd 0000:00:1a.7: EHCI Host Controller [ 0.841658] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 0.845582] ehci_hcd 0000:00:1a.7: cache line size of 4 is not supported [ 0.845610] ehci_hcd 0000:00:1a.7: irq 18, io mem 0xfdfff000 [ 0.860022] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 0.860264] hub 1-0:1.0: USB hub found [ 0.860272] hub 1-0:1.0: 6 ports detected [ 0.860404] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.860424] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 0.860430] ehci_hcd 
0000:00:1d.7: EHCI Host Controller [ 0.860512] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 0.864413] ehci_hcd 0000:00:1d.7: cache line size of 4 is not supported [ 0.864438] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfdffe000 [ 0.880021] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 0.880227] hub 2-0:1.0: USB hub found [ 0.880234] hub 2-0:1.0: 6 ports detected [ 0.880369] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.880396] uhci_hcd: USB Universal Host Controller Interface driver [ 0.880431] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.880443] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 0.880449] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 0.880529] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 0.880574] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000ff00 [ 0.880803] hub 3-0:1.0: USB hub found [ 0.880811] hub 3-0:1.0: 2 ports detected [ 0.880929] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.880940] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [ 0.880946] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 0.881039] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 0.881081] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000fe00 [ 0.881302] hub 4-0:1.0: USB hub found [ 0.881310] hub 4-0:1.0: 2 ports detected [ 0.881427] uhci_hcd 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 [ 0.881438] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 0.881443] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 0.881523] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 0.881551] uhci_hcd 0000:00:1a.2: irq 19, io base 0x0000fd00 [ 0.881774] hub 5-0:1.0: USB hub found [ 0.881781] hub 5-0:1.0: 2 ports detected [ 0.881899] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.881910] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 0.881915] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 0.881993] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 0.882021] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fc00 [ 0.882244] hub 6-0:1.0: USB hub found [ 0.882252] hub 6-0:1.0: 2 ports detected [ 0.882370] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.882381] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 0.882386] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 0.882467] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 0.882495] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fb00 [ 0.882735] hub 7-0:1.0: USB hub found [ 0.882742] hub 7-0:1.0: 2 ports detected [ 0.882858] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.882869] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 0.882875] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 0.882954] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 0.882982] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fa00 [ 0.883205] hub 8-0:1.0: USB hub found [ 0.883213] hub 8-0:1.0: 2 ports detected [ 0.883435] usbcore: registered new interface driver libusual [ 0.883535] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.883926] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.883936] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.884187] mousedev: PS/2 mouse device common for all mice [ 0.884433] rtc_cmos 00:03: RTC can wake from S4 [ 0.884582] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0 [ 0.884612] rtc0: alarms up to one month, 242 bytes nvram, hpet irqs [ 0.884719] device-mapper: uevent: version 1.0.3 [ 0.884854] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 0.884917] EISA: Probing bus 0 at eisa.0 [ 0.884921] EISA: Cannot allocate resource for mainboard [ 0.884925] Cannot allocate resource for EISA slot 1 [ 0.884929] Cannot allocate resource for EISA slot 2 [ 0.884932] Cannot allocate resource for EISA slot 3 [ 0.884936] Cannot allocate resource for EISA slot 4 [ 0.884940] Cannot allocate resource for EISA slot 5 [ 0.884943] Cannot allocate resource for EISA slot 6 [ 0.884947] Cannot allocate resource for EISA slot 7 [ 0.884950] Cannot allocate resource for EISA slot 8 [ 0.884954] EISA: Detected 0 cards. [ 0.884969] cpufreq-nforce2: No nForce2 chipset. [ 0.884973] cpuidle: using governor ladder [ 0.884976] cpuidle: using governor menu [ 0.884980] EFI Variables Facility v0.08 2004-May-17 [ 0.885476] TCP cubic registered [ 0.885708] NET: Registered protocol family 10 [ 0.886771] NET: Registered protocol family 17 [ 0.886799] Registering the dns_resolver key type [ 0.886837] Using IPI No-Shortcut mode [ 0.887028] PM: Hibernation image not present or could not be loaded. [ 0.887047] registered taskstats version 1 [ 0.902579] Magic number: 12:339:388 [ 0.902592] usb usb6: hash matches [ 0.902687] rtc_cmos 00:03: setting system clock to 2012-05-11 16:23:41 UTC (1336753421) [ 0.903185] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.903189] EDD information not available. [ 1.170710] ata3: SATA link down (SStatus 0 SControl 300) [ 1.181439] ata4: SATA link down (SStatus 0 SControl 300) [ 1.288020] Refined TSC clocksource calibration: 2499.999 MHz. 
[ 1.288028] Switching to clocksource tsc [ 1.292016] usb 1-5: new high-speed USB device number 3 using ehci_hcd [ 1.486745] ata2.00: SATA link down (SStatus 0 SControl 300) [ 1.486762] ata2.01: SATA link down (SStatus 0 SControl 300) [ 1.640115] ata1.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 1.640130] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.648342] ata1.00: ATA-7: Maxtor 7Y250M0, YAR511W0, max UDMA/133 [ 1.648348] ata1.00: 490234752 sectors, multi 0: LBA48 [ 1.664325] ata1.00: configured for UDMA/133 [ 1.664531] scsi 0:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 [ 1.664745] sd 0:0:0:0: [sda] 490234752 512-byte logical blocks: (251 GB/233 GiB) [ 1.664809] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.664838] sd 0:0:0:0: [sda] Write Protect is off [ 1.664843] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.664884] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.691699] sda: sda1 sda2 sda3 sda4 [ 1.692348] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.692461] Freeing unused kernel memory: 740k freed [ 1.692820] Write protecting the kernel text: 5828k [ 1.692851] Write protecting the kernel read-only data: 2376k [ 1.692854] NX-protecting the kernel data: 4412k [ 1.723980] udevd[92]: starting version 175 [ 1.865339] Floppy drive(s): fd0 is 1.44M [ 1.865429] pata_jmicron 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 1.865478] pata_jmicron 0000:03:00.0: setting latency timer to 64 [ 1.867875] sky2: driver version 1.30 [ 1.867926] sky2 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 1.867942] sky2 0000:04:00.0: setting latency timer to 64 [ 1.867979] sky2 0000:04:00.0: Yukon-2 EC chip revision 2 [ 1.868111] sky2 0000:04:00.0: irq 44 for MSI/MSI-X [ 1.868174] scsi4 : pata_jmicron [ 1.869802] sky2 0000:04:00.0: eth0: addr 00:01:29:a4:16:0a [ 1.869828] scsi5 : pata_jmicron [ 1.869943] ata5: PATA max UDMA/100 cmd 0xdf00 ctl 0xde00 bmdma 0xdb00 irq 16 [ 1.869949] ata6: PATA max UDMA/100 cmd 0xdd00 ctl 0xdc00 bmdma 0xdb08 irq 16 [ 1.880053] usb 4-1: new full-speed USB device number 2 using uhci_hcd [ 1.884052] FDC 0 is a post-1991 82077 [ 2.032611] ata5.00: ATAPI: _NEC DVD+/-RW ND-3450A, 103C, max UDMA/33 [ 2.048585] ata5.00: configured for UDMA/33 [ 2.049777] scsi 4:0:0:0: CD-ROM _NEC DVD+-RW ND-3450A 103C PQ: 0 ANSI: 5 [ 2.051048] sr0: scsi3-mmc drive: 48x/48x writer cd/rw xa/form2 cdda tray [ 2.051054] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 2.051283] sr 4:0:0:0: Attached scsi CD-ROM sr0 [ 2.051483] sr 4:0:0:0: Attached scsi generic sg1 type 5 [ 2.079838] usbcore: registered new interface driver usbhid [ 2.079844] usbhid: USB HID core driver [ 2.236660] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 12.150230] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 12.177342] udevd[333]: starting version 175 [ 12.195524] Adding 417684k swap on /dev/sda2. Priority:-1 extents:1 across:417684k [ 12.278032] lp: driver loaded but no devices found [ 12.516456] logitech-djreceiver 0003:046D:C52B.0003: hiddev0,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1a.1-1/input2 [ 12.520297] input: Logitech Unifying Device. Wireless PID:1024 as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.2/0003:046D:C52B.0003/input/input2 [ 12.520753] logitech-djdevice 0003:046D:C52B.0004: input,hidraw1: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:1024] on usb-0000:00:1a.1-1:1 [ 12.523286] input: Logitech Unifying Device. 
Wireless PID:2011 as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.2/0003:046D:C52B.0003/input/input3 [ 12.524439] logitech-djdevice 0003:046D:C52B.0005: input,hidraw2: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:2011] on usb-0000:00:1a.1-1:2 [ 12.545746] type=1400 audit(1336771433.137:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=502 comm="apparmor_parser" [ 12.546574] type=1400 audit(1336771433.137:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=502 comm="apparmor_parser" [ 12.547034] type=1400 audit(1336771433.137:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=502 comm="apparmor_parser" [ 12.626869] Linux video capture interface: v2.00 [ 12.649104] uvcvideo: Found UVC 1.00 device <unnamed> (046d:081a) [ 12.668665] input: UVC Camera (046d:081a) as /devices/pci0000:00/0000:00:1a.7/usb1/1-5/1-5:1.0/input/input4 [ 12.668909] usbcore: registered new interface driver uvcvideo [ 12.668914] USB Video Class driver (1.1.1) [ 12.697645] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 12.697721] snd_hda_intel 0000:00:1b.0: irq 45 for MSI/MSI-X [ 12.697760] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 12.706772] nvidia: module license 'NVIDIA' taints kernel. [ 12.706778] Disabling lock debugging due to kernel taint [ 12.735428] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 13.350252] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 13.350267] nvidia 0000:01:00.0: setting latency timer to 64 [ 13.350275] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=io+mem [ 13.351464] NVRM: loading NVIDIA UNIX x86 Kernel Module 295.40 Thu Apr 5 21:28:09 PDT 2012 [ 13.356785] hda_codec: ALC889A: BIOS auto-probing. 
[ 13.357267] init: failsafe main process (658) killed by TERM signal [ 13.372756] input: HDA Intel Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input5 [ 13.373173] input: HDA Intel Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6 [ 13.373568] input: HDA Intel Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 13.373954] input: HDA Intel Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [ 13.374339] input: HDA Intel Line-Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 [ 13.374715] input: HDA Intel Line-Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 [ 13.375109] input: HDA Intel Line-Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11 [ 13.375724] input: HDA Intel Line-Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 13.475252] type=1400 audit(1336771434.065:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=735 comm="apparmor_parser" [ 13.477026] type=1400 audit(1336771434.069:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=735 comm="apparmor_parser" [ 13.477695] type=1400 audit(1336771434.069:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=735 comm="apparmor_parser" [ 13.479048] type=1400 audit(1336771434.069:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=734 comm="apparmor_parser" [ 13.488994] type=1400 audit(1336771434.081:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=738 comm="apparmor_parser" [ 13.489972] type=1400 audit(1336771434.081:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=738 comm="apparmor_parser" [ 13.

    Read the article

  • Welcome to www.badapi.net, a REST API with badly-behaved endpoints

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/08/14/welcome-to-www.badapi.net-a-rest-api-with-badly-behaved-endpoints.aspx

    I've had a need in a few projects for a REST API that doesn't behave well - takes a long time to respond, or never responds, returns unexpected status codes etc. That can be very useful for testing that clients cope gracefully with unexpected responses. Till now I've always coded a stub API in the project and run it locally, but I've put a few 'misbehaved' endpoints together and published them at www.badapi.net, and the source is on GitHub here: sixeyed/badapi.net. You can browse to the home page and see the available endpoints. I'll be adding more as I think of them, and I may give the styling of the help pages a bit more thought... As of today's release, the misbehaving endpoints available to you are:

    - GET longrunning?between={between}&and={and} - Waits for a (short) random period before returning
    - GET verylongrunning?between={between}&and={and} - Waits for a (long) random period before returning
    - GET internalservererror - Returns 500: Internal Server Error
    - GET badrequest - Returns 400: BadRequest
    - GET notfound - Returns 404: Not Found
    - GET unauthorized - Returns 401: Unauthorized
    - GET forbidden - Returns 403: Forbidden
    - GET conflict - Returns 409: Conflict
    - GET status/{code}?reason={reason} - Returns the provided status code

    Go bad.
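    To show what exercising such an API looks like, here is a minimal Python sketch. The post itself contains no client code, so the choice of the requests library, the 2-second timeout and the values passed to the longrunning query parameters are assumptions on my part; only the endpoint paths come from the list above.

        import requests

        BASE_URL = "http://www.badapi.net"  # host from the post; scheme assumed

        def probe(path, timeout=2):
            # Treat timeouts and error status codes as test data, not failures.
            try:
                response = requests.get("%s/%s" % (BASE_URL, path), timeout=timeout)
                return response.status_code
            except requests.Timeout:
                return "client timed out after %ss" % timeout

        if __name__ == "__main__":
            # The meaning of 'between' and 'and' is assumed to be seconds.
            for path in ("longrunning?between=1&and=5", "internalservererror",
                         "notfound", "status/503?reason=maintenance"):
                print(path, "->", probe(path))

    Pointing a client like this at the slow and error endpoints is a cheap way to verify that retry, timeout and error-handling paths actually get exercised before a real dependency misbehaves in production.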

    Read the article

  • Can't Install php5-msql

    - by user210445
    Hello friends I'm finishing the process of installing Apache/Php/mysql installations but this shows up: # sudo apt-get install mysql-server php5-msql Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package php5-msql After some adjustments this happened: angel@Voix:~$ sudo apt-get install mysql-server php5-mysql Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) angel@Voix:~$ sudo apt-get install mysql-server-5.5 Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... 
debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) angel@Voix:~$ sudo apt-get install mysql-server php5-mysql Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Technique to Display Multiple Browser Windows Tiled at Same time?

    - by Kendor
    I have a separate monitor that I use for Toodledo (a web-based task management app), in which I like to display various views (Next Action Status, Waiting Status, Planning Status, and Overdue Due-Date items). I've been playing around with some add-ons in Firefox that allow you to split the browser, but they are cumbersome. I'm now trying Chrome, opening 4 different browser windows that I've tiled on the screen in quadrants (I use the Compiz grid applet for this). This is not ideal either, as each browser window replicates the URL bar and the tab, and I don't have opening these windows automated upon restart. Chrome is great at managing screen real estate, but it still isn't ideal. In Firefox I tried various extensions to hide interface elements, but it was very clunky... I'm wondering whether anyone has tried to do something similar with Toodledo, and what technique they used to accomplish what I'm going after?
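    If scripting the setup is acceptable, one possible approach is to launch each view as a Chrome app-mode window (no URL bar or tab strip) from a small Python script run at login. This is only a sketch: the Toodledo view URLs below are placeholders, the geometry assumes a 1920x1080 monitor split into quadrants, and how reliably --window-position and --window-size are honoured can vary between Chrome versions and window managers.

        import subprocess

        # Placeholder view URLs - substitute the real Toodledo saved-view URLs.
        VIEWS = [
            ("https://www.toodledo.com/tasks/nextaction", 0, 0),
            ("https://www.toodledo.com/tasks/waiting", 960, 0),
            ("https://www.toodledo.com/tasks/planning", 0, 540),
            ("https://www.toodledo.com/tasks/overdue", 960, 540),
        ]

        for i, (url, x, y) in enumerate(VIEWS):
            subprocess.Popen([
                "google-chrome",
                "--app=%s" % url,                        # app mode: no URL bar or tabs
                "--user-data-dir=/tmp/td-view-%d" % i,   # separate profile per window so each keeps its own geometry
                "--window-position=%d,%d" % (x, y),
                "--window-size=960,540",
            ])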

    Read the article

  • google webmaster showing 6 pages submitted 0 indexed, yet i can see them all there when i search in google?

    - by sam
    I have a small 'brochure' type site with 6 pages. I can see all the pages showing up in Google when I search for my site, but in Webmaster Tools under the sitemaps section it says 6 submitted (the blue bar of the graph), while the indexed pages - the red bar - show 0 indexed pages, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd. =================================================== UPDATE 9/3/12: Having just looked in Google Webmaster Tools, it shows that there are 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed. Why is that? (The original post included two screenshots here: Index Status and Sitemap Status.)

    Read the article

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using ZenCart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the Index status took a dramatic drop and has never recovered. I have worked to add content and I know I am not done, but have not seen any recovery of this index since. What confuses me is when I look at the sitemap status under Optimization it shows me there are 1239 submitted and 1127 pages indexed. Most of my pages have fallen off page one for relevant search terms and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes in the past week to robots.txt and sitemap.xml, but have not seen any improvements. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

    Read the article

  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
    After upgrading from lubuntu 13.10 to 14.04 the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continously but starts and stops every second. fwts fan results in Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17). Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved. Some of this work - Copyright (c) 2010 - 2014, Canonical. This test run on 12/05/14 at 21:40:13 on host Linux einstein 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64. Command: "fwts fan". Running tests: fan. fan: Simple fan tests. -------------------------------------------------------------------------------- Test 1 of 2: Test fan status. Test how many fans there are in the system. Check for the current status of the fan(s). PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling state 10 and current cooling state 0. PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling state 10 and current cooling state 0. PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15 and current cooling state 10. Test 2 of 2: Load system, check CPU fan status. Test how many fans there are in the system. Check for the current status of the fan(s). Loading CPUs for 20 seconds to try and get fan speeds to change. Fan cooling_device0 current state did not change from value 0 while CPUs were busy. Fan cooling_device1 current state did not change from value 0 while CPUs were busy. ADVICE: Did not detect any change in the CPU related thermal cooling device states. It could be that the devices are returning static information back to the driver and/or the fan speed is automatically being controlled by firmware using System Management Mode in which case the kernel interfaces being examined may not work anyway. ================================================================================ 3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only. ================================================================================ 3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only. 
Test Failure Summary ================================================================================ Critical failures: NONE High failures: NONE Medium failures: NONE Low failures: NONE Other failures: NONE Test |Pass |Fail |Abort|Warn |Skip |Info | ---------------+-----+-----+-----+-----+-----+-----+ fan | 3| | | | | | ---------------+-----+-----+-----+-----+-----+-----+ Total: | 3| 0| 0| 0| 0| 0| ---------------+-----+-----+-----+-----+-----+-----+ Here is the output of lsmod lsmod Module Size Used by i8k 14421 0 zram 18478 2 dm_crypt 23177 0 gpio_ich 13476 0 dell_wmi 12761 0 sparse_keymap 13948 1 dell_wmi snd_hda_codec_hdmi 46207 1 snd_hda_codec_idt 54645 1 rfcomm 69160 0 arc4 12608 2 dell_laptop 18168 0 bnep 19624 2 dcdbas 14928 1 dell_laptop bluetooth 395423 10 bnep,rfcomm iwldvm 232285 0 mac80211 626511 1 iwldvm snd_hda_intel 52355 3 snd_hda_codec 192906 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel snd_hwdep 13602 1 snd_hda_codec snd_pcm 102099 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30144 1 snd_seq_midi coretemp 13435 0 kvm_intel 143060 0 kvm 451511 1 kvm_intel snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi joydev 17381 0 serio_raw 13462 0 iwlwifi 169932 1 iwldvm pcmcia 62299 0 snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29482 2 snd_pcm,snd_seq lpc_ich 21080 0 cfg80211 484040 3 iwlwifi,mac80211,iwldvm yenta_socket 41027 0 pcmcia_rsrc 18407 1 yenta_socket pcmcia_core 23592 3 pcmcia,pcmcia_rsrc,yenta_socket binfmt_misc 17468 1 snd 69238 17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi soundcore 12680 1 snd parport_pc 32701 0 mac_hid 13205 0 ppdev 17671 0 lp 17759 0 parport 42348 3 lp,ppdev,parport_pc firewire_ohci 40409 0 psmouse 102222 0 sdhci_pci 23172 0 sdhci 43015 1 sdhci_pci firewire_core 68769 1 firewire_ohci crc_itu_t 12707 1 firewire_core ahci 25819 2 libahci 32168 1 ahci i915 783485 2 wmi 19177 1 dell_wmi i2c_algo_bit 13413 1 i915 drm_kms_helper 52758 1 i915 e1000e 254433 0 drm 302817 3 i915,drm_kms_helper ptp 18933 1 e1000e pps_core 19382 1 ptp video 19476 1 i915 I tried one answer to the similar question: loud fan on Ubuntu 14.04 and created a /etc/i8kmon.conf like the following: # Run as daemon, override with --daemon option set config(daemon) 1 # Automatic fan control, override with --auto option set config(auto) 1 # Status check timeout (seconds), override with --timeout option set config(timeout) 2 # Report status on stdout, override with --verbose option set config(verbose) 1 # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt} set config(0) {{0 0} -1 55 -1 55} set config(1) {{0 1} 50 60 55 65} set config(2) {{1 1} 55 80 60 85} set config(3) {{2 2} 70 128 75 128} With this setup the fan goes on even if the temperature is below 50 degree celsius (I don't see a pattern). However I get the impression that the CPU got's hotter in average than without this file. What changes from 13.10 to 14.04 may be responsible for this? If this is a bug, for which package I should report the bug?

    Read the article

  • Problems with DNS propagation 10 days after a change was made

    - by runlevel6
    The engineering team I work with has been in the process of moving equipment from one datacenter to another. Ten days ago we moved one of our name servers authoritative for our client's domains (ns1.faithhiway.com) and updated its IP address with its respective DNS provider (register.com) to point to the new datacenter. All tests done show that this name server is correctly running at its new location and when queried, returning the correct response for any domains it is responsible for. The problem is that well after 72 hours had gone by we were still seeing more DNS activity at its old IP address than at the new. The good news is that we kept a name server responding on the old IP address for the time being so we are not seeing any issues with the domains our nameserver is responsible for but the goal is to retire that as soon as possible. As you can see from WhatsMyDNS.net, a decent amount of propagation has occurred over the last 10 days since we made this change, but still there are some locations reporting our original IP. Considering that the TTL is only 3600 with the name servers responsible for this domain, it does not make any sense to myself or the other engineers working with me that we are having this issue. Now if I run a DNS check using one of the Register.com DNS servers (direct nameservers for faithhiway.com), I get the following (correct) result: # dig @dns01.gpn.register.com ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @dns01.gpn.register.com. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43232 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 5 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. ;; ADDITIONAL SECTION: dns01.gpn.register.com. 3600 IN A 98.124.192.1 dns02.gpn.register.com. 3600 IN A 98.124.197.1 dns03.gpn.register.com. 3600 IN A 98.124.193.1 dns04.gpn.register.com. 3600 IN A 69.64.145.225 dns05.gpn.register.com. 3600 IN A 98.124.196.1 ;; Query time: 50 msec ;; SERVER: 98.124.192.1#53(98.124.192.1) ;; WHEN: Thu Jan 27 15:16:57 2011 ;; MSG SIZE rcvd: 269 Just as a reference, here are the results when the same query is checked against a variety of Public DNS servers: Google: # dig @8.8.8.8 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @8.8.8.8. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12773 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 997 IN A 206.127.2.71 ;; Query time: 29 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 27 15:17:31 2011 ;; MSG SIZE rcvd: 52 Level 3: # dig @4.2.2.1 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @4.2.2.1. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46505 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 
2623 IN A 206.127.2.71 ;; Query time: 7 msec ;; SERVER: 4.2.2.1#53(4.2.2.1) ;; WHEN: Thu Jan 27 15:18:35 2011 ;; MSG SIZE rcvd: 52 Verizon: # dig @151.197.0.38 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @151.197.0.38. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32658 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; Query time: 81 msec ;; SERVER: 151.197.0.38#53(151.197.0.38) ;; WHEN: Thu Jan 27 15:19:15 2011 ;; MSG SIZE rcvd: 52 Cisco: # dig @64.102.255.44 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @64.102.255.44. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39689 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. ;; Query time: 105 msec ;; SERVER: 64.102.255.44#53(64.102.255.44) ;; WHEN: Thu Jan 27 15:20:05 2011 ;; MSG SIZE rcvd: 165 OpenDNS: # dig @208.67.222.222 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @208.67.222.222. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12328 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169507 IN A 207.200.19.162 ;; Query time: 6 msec ;; SERVER: 208.67.222.222#53(208.67.222.222) ;; WHEN: Thu Jan 27 15:19:29 2011 ;; MSG SIZE rcvd: 52 SpeakEasy: # dig @66.93.87.2 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @66.93.87.2. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9342 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169323 IN A 207.200.19.162 ;; Query time: 69 msec ;; SERVER: 66.93.87.2#53(66.93.87.2) ;; WHEN: Thu Jan 27 15:19:51 2011 ;; MSG SIZE rcvd: 52 As you can see above, the majority of queries are returning the correct result. But a few (OpenDNS and SpeakEasy in the examples above) are still showing the old IP address. Considering the length of time that has gone by, it seems obvious to me that either we have made a mistake and not thoroughly handled the DNS changes on our end (likely) or there is a problem with either the DNS provider for this domain (Register) or with some of the DNS servers out in the wild (rather unlikely). Any advice on how I can proceed with this? UPDATE (January 31, 2011): First of all, I apologize for the length of both the original question and this update. I contemplated removing some of the excess from the original post but just in case this problem and its solution are helpful to someone else in the future I'm just going to leave everything as it is. 
Anyway, I've been doing some more research into this problem, and have discovered the following interesting occurrence. While running a check on the glue records for faithhiway.com always resolve correctly, if I go and check a client domain (where ns1.faithhiway.com is authoritative), I get a strange response. It looks like the root servers are returning nsX.faithhiway.com as their old IP addresses still (under Additional Section). Because we have a server still there responding to DNS queries, the trace finishes and returns the correct IP addresses as the final step (again, under Additional Section). The example below uses one of the domains that we use that uses ns1.faithhiway.com as its authoritative DNS server. # dig +trace +nosearch +all +norecurse ignitemail.com ; <<>> DiG 9.2.4 <<>> +trace +nosearch +all +norecurse ignitemail.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46856 ;; flags: qr ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;. IN NS ;; ANSWER SECTION: . 7986 IN NS a.root-servers.net. . 7986 IN NS b.root-servers.net. . 7986 IN NS c.root-servers.net. . 7986 IN NS d.root-servers.net. . 7986 IN NS e.root-servers.net. . 7986 IN NS f.root-servers.net. . 7986 IN NS g.root-servers.net. . 7986 IN NS h.root-servers.net. . 7986 IN NS i.root-servers.net. . 7986 IN NS j.root-servers.net. . 7986 IN NS k.root-servers.net. . 7986 IN NS l.root-servers.net. . 7986 IN NS m.root-servers.net. ;; Query time: 39 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 228 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16325 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. com. 172800 IN NS j.gtld-servers.net. ;; ADDITIONAL SECTION: a.gtld-servers.net. 172800 IN A 192.5.6.30 a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30 b.gtld-servers.net. 172800 IN A 192.33.14.30 b.gtld-servers.net. 172800 IN AAAA 2001:503:231d::2:30 c.gtld-servers.net. 172800 IN A 192.26.92.30 d.gtld-servers.net. 172800 IN A 192.31.80.30 e.gtld-servers.net. 172800 IN A 192.12.94.30 f.gtld-servers.net. 172800 IN A 192.35.51.30 g.gtld-servers.net. 172800 IN A 192.42.93.30 h.gtld-servers.net. 172800 IN A 192.54.112.30 i.gtld-servers.net. 172800 IN A 192.43.172.30 j.gtld-servers.net. 172800 IN A 192.48.79.30 k.gtld-servers.net. 172800 IN A 192.52.178.30 l.gtld-servers.net. 172800 IN A 192.41.162.30 ;; Query time: 64 msec ;; SERVER: 198.41.0.4#53(a.root-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 504 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12860 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: ignitemail.com. 172800 IN NS ns1.faithhiway.com. ignitemail.com. 172800 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 172800 IN A 207.200.19.162 ns2.faithhiway.com. 
172800 IN A 207.200.50.142 ;; Query time: 152 msec ;; SERVER: 192.54.112.30#53(h.gtld-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 111 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43016 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; ANSWER SECTION: ignitemail.com. 3600 IN A 206.127.2.64 ;; AUTHORITY SECTION: ignitemail.com. 3600 IN NS ns1.faithhiway.com. ignitemail.com. 3600 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 3600 IN A 206.127.2.71 ns2.faithhiway.com. 3600 IN A 206.127.2.72 ;; Query time: 25 msec ;; SERVER: 206.127.2.71#53(ns1.faithhiway.com) ;; WHEN: Mon Jan 31 09:22:18 2011 ;; MSG SIZE rcvd: 127 I really think this is a problem we have somewhere in our setup, but whether it is ignorance of something with DNS on my or my fellow engineer's end or just a dumb mistake we made, I have yet to find it.
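    As a convenience, the spot check that the dig commands above perform by hand can also be scripted. The sketch below is Python using the dnspython package (an assumption - the post never mentions it; any resolver library would do) and asks several of the same public resolvers for the ns1.faithhiway.com A record, which makes it easy to see at a glance which caches are still serving the old address.

        import dns.resolver  # from the dnspython package (assumed installed)

        RESOLVERS = {"Google": "8.8.8.8", "Level 3": "4.2.2.1",
                     "OpenDNS": "208.67.222.222", "SpeakEasy": "66.93.87.2"}

        def lookup(name, server):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [server]
            try:
                answer = resolver.resolve(name, "A")  # older dnspython versions use resolver.query()
                return sorted(rr.address for rr in answer)
            except Exception as exc:
                return ["error: %s" % exc]

        if __name__ == "__main__":
            for label, ip in sorted(RESOLVERS.items()):
                print("%-10s %s" % (label, lookup("ns1.faithhiway.com", ip)))

    Running something like this on a schedule gives a rolling picture of which resolvers have picked up the new address and which are still returning 207.200.19.162.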

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories:

    - Constraining the amount of time that the Console stalls waiting for communication
    - Reducing and streamlining the amount of data required for an update

    A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency.

    In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console.

    To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is:

    1. For each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server
    2. Find a WorkManager
    3. For each server, create a Work instance to obtain runtime mbean attributes for the server and schedule the Work instance in the WorkManager
    4. Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout
    5. For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data
    6. Display the collected data in the table

    In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
    We decided supporting that edge case did not warrant the performance impact for everyone, and instead the table now looks only at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:

    - Using Work Management to obtain health information concurrently
    - Applying the operation timeout configuration to constrain how long we will wait
    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
    - Eliminating health collection if this portlet is minimized

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
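    The algorithm described earlier - populate rows from the fast local configuration, fetch runtime data for every server in parallel, stop waiting when the operation timeout expires, and flag any rows that came back incomplete - is a general pattern, not something specific to WebLogic. Purely as an illustration (this is not WebLogic code; the server names, attributes and the fetch_runtime stand-in are invented), the same shape in Python with concurrent.futures looks like this:

        import random
        import time
        from concurrent.futures import ThreadPoolExecutor, wait

        OPERATION_TIMEOUT = 5  # seconds; plays the role of the Management Operation Timeout

        def fetch_runtime(server):
            # Stand-in for the slow, remote runtime-mbean call.
            time.sleep(random.uniform(0, 10))
            return {"state": "RUNNING"}

        def build_rows(local_config):
            # Start every row from the fast, local configuration data.
            rows = {name: dict(attrs) for name, attrs in local_config.items()}
            with ThreadPoolExecutor() as pool:
                futures = {pool.submit(fetch_runtime, name): name for name in rows}
                done, not_done = wait(futures, timeout=OPERATION_TIMEOUT)
                for future in done:
                    rows[futures[future]].update(future.result())
                for future in not_done:
                    # Flag the row instead of blocking the whole page on a slow server.
                    rows[futures[future]]["note"] = "runtime data incomplete"
            return rows

        if __name__ == "__main__":
            config = {"server1": {"listen_port": 7001}, "server2": {"listen_port": 7002}}
            print(build_rows(config))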

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATITICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won’t start processing a new table after this time but it might take longer than this to finish what it’s doing.  It always processes the oldest statistics first so it will eventually get to all of them.  It defaults to sample 25% of the table.  I’m not sure that’s a good default but it works for now.  I’ve tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach. CREATE PROCEDURE dbo.UpdateStatistics ( @timeLimit smallint = 60 ,@debug bit = 0 ,@executeSQL bit = 1 ,@samplePercent tinyint = 25 ,@printSQL bit = 1 ,@minDays tinyint = 30 )AS/******************************************************************* Copyright Bill Graziano 2010*******************************************************************/SET NOCOUNT ON;PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'IF OBJECT_ID('tempdb..#status') IS NOT NULL DROP TABLE #status;CREATE TABLE #status( databaseID INT , databaseName NVARCHAR(128) , objectID INT , page_count INT , schemaName NVARCHAR(128) Null , objectName NVARCHAR(128) Null , lastUpdateDate DATETIME , scanDate DATETIME CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED(databaseID, objectID));DECLARE @SQL NVARCHAR(MAX);DECLARE @dbName nvarchar(128);DECLARE @databaseID INT;DECLARE @objectID INT;DECLARE @schemaName NVARCHAR(128);DECLARE @objectName NVARCHAR(128);DECLARE @lastUpdateDate DATETIME;DECLARE @startTime DATETIME;SELECT @startTime = GETDATE();DECLARE cDB CURSORREAD_ONLYFOR select [name] from master.sys.databases where database_id > 4OPEN cDBFETCH NEXT FROM cDB INTO @dbNameWHILE (@@fetch_status <> -1)BEGIN IF (@@fetch_status <> -2) BEGIN SELECT @SQL = ' use ' + QUOTENAME(@dbName) + ' select DB_ID() as databaseID , DB_NAME() as databaseName ,t.object_id ,sum(used_page_count) as page_count ,s.[name] as schemaName ,t.[name] AS objectName , COALESCE(d.stats_date, ''1900-01-01'') , GETDATE() as scanDate from sys.dm_db_partition_stats ps join sys.tables t on t.object_id = ps.object_id join sys.schemas s on s.schema_id = t.schema_id join ( SELECT object_id, MIN(stats_date) as stats_date FROM ( select object_id, stats_date(object_id, stats_id) as stats_date from sys.stats) as d GROUP BY object_id ) as d ON d.object_id = t.object_id where ps.row_count > 0 group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') ' SET ANSI_WARNINGS OFF; Insert #status EXEC ( @SQL); SET ANSI_WARNINGS ON; END FETCH NEXT FROM cDB INTO @dbNameENDCLOSE cDBDEALLOCATE cDBDECLARE cStats CURSORREAD_ONLYFOR SELECT databaseID , databaseName , objectID , schemaName , objectName , lastUpdateDate FROM #status WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC OPEN cStatsFETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDateWHILE (@@fetch_status <> -1)BEGIN IF 
(@@fetch_status <> -2) BEGIN IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit BEGIN PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***'; GOTO __DONE; END SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName) + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;'; IF @printSQL = 1 PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')' IF @executeSQL = 1 BEGIN EXEC (@SQL); END END FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDateEND__DONE:CLOSE cStatsDEALLOCATE cStatsPRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'GO

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all – if you are going to say this is very old subject, I agree this is very (very) old subject. I believe in earlier time we used to have this only option to monitor Log Space. As new version of SQL Server released we all equipped with DMV, Performance Counters, Extended Events and much more new enhancements. However, during all this year, I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because when I started my career I remember this command and it did what I wanted all the time. Recently I have received interesting question and I thought, I should request your help. However, before I request your help, let us see traditional usage of DBCC SQLPERF(logspace). Every time I have to get the details of the log I ran following script. Additionally, I liked to store the details of the when the log file snapshot was taken as well so I can go back and know the status log file growth. This gives me a fair estimation when the log file was growing. CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status]) EXEC ('DBCC SQLPERF(logspace)') GO SELECT * FROM dbo.logSpaceUsage GO I used to record the details of log file growth every hour of the day and then we used to plot charts using reporting services (and excel in much earlier times). Well, if you look at the script above it is very simple script. Now here is the puzzle for you. Puzzle 1: Write a script based on a table which gives you the time period when there was highest growth based on the data stored in the table. Puzzle 2: Write a script based on a table which gives you the amount of the log file growth from the beginning of the table to the latest recording of the data. You may have to run above script at some interval to get the various data samples of the log file to answer above puzzles. To make things simple, I am giving you sample script with expected answers listed below for both of the puzzle. Here is the sample query for puzzle: -- This is sample query for puzzle CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status]) SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0 GO Expected Result of Puzzle 1 You will notice that there are two entries for database SampleDB3 as there were two instances of the log file grows with the same value. Expected Result of Puzzle 2 Well, please a comment with valid answer and I will post valid answers with due credit next week. Not to mention that winners will get a surprise gift from me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC

    Read the article

  • Can't connect to local IP address on OSX

    - by Alex Worden
    I'm trying to connect to a webserver that's running on my mac OSX 1.6. I'm able to connect to it locally using http://127.0.0.1:8888/myapp but when I attempt to connect to it using my machine's local IP address (http://192.168.1.15:8888/myapp IP shown below) from the same machine (or another on the network) I cannot connect. I can ping the LAN IP address. I've tried adding IP forwarding to my router for port 8888 but it didn't help. I've checked and the OSX firewall is disabled Can anyone suggest what else is blocking the connection? Here's what I get when I run ifconfig: ~ :ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:e8:16:4d media: autoselect status: inactive supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control> none en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet6 fe80::21e:c2ff:febf:4809%en1 prefixlen 64 scopeid 0x5 inet 192.168.1.15 netmask 0xffffff00 broadcast 192.168.1.255 ether 00:1e:c2:bf:48:09 media: autoselect status: active supported media: autoselect fw0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 4078 lladdr 00:1f:5b:ff:fe:2b:b3:3c media: autoselect <full-duplex> status: inactive supported media: autoselect <full-duplex> en5: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500 ether 00:1e:c2:8e:0f:45 media: autoselect status: inactive supported media: none autoselect 10baseT/UTP <half-duplex> en2: flags=8922<BROADCAST,SMART,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:1c:42:00:00:00 media: autoselect status: inactive supported media: autoselect en3: flags=8922<BROADCAST,SMART,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:1c:42:00:00:01 media: autoselect status: inactive supported media: autoselect
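    One detail worth ruling out (a guess from the symptoms, not something the post establishes) is whether the web server is bound only to 127.0.0.1, since a loopback-only listener behaves exactly like this: reachable at http://127.0.0.1:8888 but not at the machine's LAN address, with no firewall involved. The difference is easy to reproduce with a few lines of Python; the ports here are arbitrary.

        import socket

        def listen(bind_addr, port):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind((bind_addr, port))
            s.listen(1)
            print("listening on %s:%d" % (bind_addr, port))
            return s

        # Only reachable via 127.0.0.1:9090, even when using the machine's LAN IP:
        loopback_only = listen("127.0.0.1", 9090)

        # Reachable via 127.0.0.1, 192.168.1.15 and from other hosts on the network:
        all_interfaces = listen("0.0.0.0", 9091)

    If the web server's configuration does pin its listen address to 127.0.0.1 or localhost, changing that to 0.0.0.0 (or the LAN IP) and restarting it would be the corresponding thing to try - assuming this is in fact the cause.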

    Read the article

  • Best practice for Python & Django constants

    - by Dylan Klomparens
    I have a Django model that relies on a tuple. I'm wondering what the best practice is for referring to constants within that tuple in my Django program. Here, for example, I'd like to specify "default=0" as something that is more readable and does not require commenting. Any suggestions?

    Status = (
        (-1, 'Cancelled'),
        (0, 'Requires attention'),
        (1, 'Work in progress'),
        (2, 'Complete'),
    )

    class Task(models.Model):
        status = models.IntegerField(choices=Status, default=0)  # Status is 'Requires attention' (0) by default.

    EDIT: If possible I'd like to avoid using a number altogether. Somehow using the string 'Requires attention' instead would be more readable.
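    One common pattern - offered here only as a sketch, not as the definitive best practice the question asks about - is to name each value as a module-level constant and build the choices tuple from those names, so the field can read default=REQUIRES_ATTENTION instead of default=0. (Newer Django releases also ship models.IntegerChoices for the same purpose, but whether that's available depends on the Django version in use.)

        from django.db import models

        # Named constants instead of bare integers.
        CANCELLED = -1
        REQUIRES_ATTENTION = 0
        WORK_IN_PROGRESS = 1
        COMPLETE = 2

        STATUS_CHOICES = (
            (CANCELLED, 'Cancelled'),
            (REQUIRES_ATTENTION, 'Requires attention'),
            (WORK_IN_PROGRESS, 'Work in progress'),
            (COMPLETE, 'Complete'),
        )

        class Task(models.Model):
            # The default now reads as intent, with no explanatory comment needed.
            status = models.IntegerField(choices=STATUS_CHOICES, default=REQUIRES_ATTENTION)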

    Read the article

  • Why is my WPF splash screen progress bar out of sync with the execution of the startup steps?

    - by denny_ch
    Hello, I've implemented a simple WPF splash screen window which informs the user about the progress of the application startup. The startup steps are defined this way: var bootSequence = new[] { new {Do = (Action) InitLogging, Message = "Init logging..."}, new {Do = (Action) InitNHibernate, Message = "Init NHibernate..."}, new {Do = (Action) SetupUnityContainer, Message = "Init Unity..."}, new {Do = (Action) UserLogOn, Message = "Logon..."}, new {Do = (Action) PrefetchData, Message = "Caching..."}, }; InitLogging etc. are methods defined elsewhere, which performs some time consuming tasks. The boot sequence gets executed this way: foreach (var step in bootSequence) { _status.Update(step.Message); step.Do(); } _status denotes an instance of my XAML splash screen window containing a progress bar and a label for status information. Its Update() method is defined as follows: public void Update(string status) { int value = ++_updateSteps; Update(status, value); } private void Update(string status, int value) { var dispatcherOperation = Dispatcher.BeginInvoke( DispatcherPriority.Background, (ThreadStart) delegate { lblStatus.Content = status; progressBar.Value = value; }); dispatcherOperation.Wait(); } In the main this works, the steps get executed and the splash screen shows the progress. But I observed that the splash screen for some reasons doesn't update its content for all steps. This is the reason I called the Dispatcher async and wait for its completion. But even this didn't help. Has anyone else experienced this or some similar behaviour and has some advice how to keep the splash screen's update in sync with the execution of the boot sequence steps? I know that the users will unlikely notice this behaviour, since the splash screen is doing something and the application starts after booting is completed. But myself isn't sleeping well, because I don't know why it is not working as expected... Thx for your help, Denny

    Read the article

  • EclEmma Java code coverage - unable to get coverage for the service layer of a RESTful web service

    - by Radhika
    I am using the EMMA Eclipse plugin (EclEmma) to generate code coverage reports. My application is a RESTful web service. The JUnit tests are written such that a client is created for the web service and invoked with various inputs. However, EMMA shows 0% coverage for the source folder; only the test folder is covered. The application server (Jetty) is started using a main method.

    Report:

    Element             Coverage   Covered Instructions   Total Instructions
    MyRestFulService    13.6%      900                    11846
    src                 0.5%       49                     10412
    test                98%        1021                   1434

    JUnit test method:

    @Test
    public final void testAddFlow() throws Exception {
        Client c = Client.create();
        WebResource webResource = c.resource(BASE_URI);
        // Sample files for Add
        String xhtmlDocument = null;
        Iterator iter = mapOfAddFiles.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry pairs = (Map.Entry) iter.next();
            try {
                document = helper.readFile(requestPath + pairs.getKey());
            } catch (Exception e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            /* POST */
            MultiPart multiPart = new MultiPart();
            multiPart.bodyPart(.... ...........
            ClientResponse response = webResource.path("/add").type(
                    MEDIATYPE_MULTIPART_MIXED).post(ClientResponse.class, multiPart);
            assertEquals("TESTING ADD FOR >>>>>>> " + pairs.getKey(), Status.OK,
                    response.getClientResponseStatus());
            }
        }
    }

    Invoked service method:

    @POST
    @Path("add")
    @Consumes("multipart/mixed")
    public Response add(MultiPart multiPart) throws Exception {
        Status status = null;
        List<BodyPart> bodyParts = null;
        bodyParts = multiPart.getBodyParts();
        status = // call to business layer
        return Response.ok(status).build();
    }
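    EclEmma only records coverage for code that executes inside the JVM it launches, so if the Jetty server is started as a separate process (its own main method and launch configuration), the src classes never run under the coverage agent and show 0%. A hedged sketch of one way to get the service layer counted, assuming the REST resources can be wired up the same way the standalone main method does it, is to start an embedded Jetty inside the test JVM:

        // Sketch only: the handler/servlet wiring is an assumption and must
        // mirror whatever the application's existing main() method configures.
        import org.eclipse.jetty.server.Server;
        import org.junit.AfterClass;
        import org.junit.BeforeClass;

        public class AddFlowCoverageTest {

            private static Server server;

            @BeforeClass
            public static void startServer() throws Exception {
                server = new Server(8080);   // same port/BASE_URI the client tests use
                // ... register the application's REST servlet/handler here,
                //     exactly as the standalone main() does ...
                server.start();
            }

            @AfterClass
            public static void stopServer() throws Exception {
                server.stop();
            }

            // testAddFlow() from above then runs against this in-process server,
            // so the service classes are instrumented along with the tests.
        }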

    Read the article

  • Twitter OAuth home timeline display with PHP

    - by Srinivas Tamada
    This call fails:

    $hometime = $Twitter->get_statusesHome_timeline();

    Fatal error: Uncaught exception 'Exception' with message 'SimpleXMLElement::__construct() expects parameter 1 to be string'

    The full script:

    <?php
    include 'EpiCurl.php';
    include 'EpiOAuth.php';
    include 'EpiTwitter.php';
    include 'key.php';

    $Twitter = new EpiTwitter($consumerKey, $consumerSecret);
    $oauthToken = 'xxxxxxxxxxxxxxxxxxxxxxx';
    $oauthSecret = 'xxxxxxxxxxxxxxxxxxxxxxxxx';

    // user switched pages and came back or got here directly, still logged in
    $Twitter->setToken($oauthToken, $oauthSecret);
    $user = $Twitter->get_accountVerify_credentials();
    echo "<img src=\"{$user->profile_image_url}\">";
    echo "{$user->name}";

    $hometime = $Twitter->get_statusesHome_timeline();
    $twitter_status = new SimpleXMLElement($hometime);
    foreach ($twitter_status->status as $status) {
        echo '<div class="twitter_status">';
        foreach ($status->user as $user) {
            echo '<img src="' . $user->profile_image_url . '" class="twitter_image">';
            echo '<a href="http://www.twitter.com/' . $user->name . '">' . $user->name . '</a>: ';
        }
        echo $status->text;
        echo '<br/>';
        echo '<div class="twitter_posted_at"><strong>Posted at:</strong> ' . $status->created_at . '</div>';
        echo '</div>';
    }
    ?>
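    The exception itself is the clue: get_statusesHome_timeline() does not return a raw XML string, so it cannot be fed to SimpleXMLElement. The twitter-async (EpiTwitter) library already decodes the response into objects, which is exactly what the $user->profile_image_url access above relies on. A sketch of iterating the decoded statuses directly follows; whether the items live under ->response and whether fields use array access depends on the library version, so treat those details as assumptions to verify:

        <?php
        // Sketch only: assumes the twitter-async convention of exposing the
        // decoded JSON under ->response, with each status as an array.
        $hometime = $Twitter->get_statusesHome_timeline();

        foreach ($hometime->response as $status) {
            echo '<div class="twitter_status">';
            echo '<img src="' . $status['user']['profile_image_url'] . '" class="twitter_image">';
            echo htmlspecialchars($status['text']);
            echo '<div class="twitter_posted_at"><strong>Posted at:</strong> ' . $status['created_at'] . '</div>';
            echo '</div>';
        }
        ?>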

    Read the article

  • C# SQL statement transformed to LINQ - how can I translate this statement into working LINQ?

    - by BlackTea
    I have three data tables that I use over and over again, which are cached. I would like to write a LINQ statement that does the following; is this possible?

    T-SQL version:

    SELECT P.[CID], P.[AID], B.[AID], B.[Data], B.[Status], B.[Language]
    FROM MY_TABLE_1 P
    JOIN (
        SELECT A.[AID], A.[Data], A.[Status], A.[Language]
        FROM MY_TABLE_2 A
        UNION ALL
        SELECT B.[AID], B.[Data], B.[Status], B.[Language]
        FROM MY_TABLE_3 B
    ) B ON P.[AID] = B.[AID]
    WHERE B.[Language] = 'EN' OR B.[Language] = 'ANY' AND B.STATUS = 1
      AND B.[Language] = 'EN' OR B.[Language] = 'ANY' AND B.STATUS = 1

    I would like it to produce a result set of the following form:

    |CID|AID|DATA|STATUS|LANGUAGE
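    A hedged sketch of the equivalent query follows. It assumes strongly typed, queryable sequences named after the tables (if these are ADO.NET DataTables, the same shape works through AsEnumerable() and Field<T>()), and it collapses the duplicated WHERE conditions into a single language/status filter, which is an interpretation of the intent rather than a literal translation:

        // Sketch only: myTable1/2/3 and the property names are assumptions
        // derived from the SQL above.
        var unioned = myTable2
            .Select(a => new { a.AID, a.Data, a.Status, a.Language })
            .Concat(myTable3                       // Concat == UNION ALL
                .Select(b => new { b.AID, b.Data, b.Status, b.Language }));

        var results =
            from p in myTable1
            join b in unioned on p.AID equals b.AID
            where (b.Language == "EN" || b.Language == "ANY") && b.Status == 1
            select new { p.CID, p.AID, b.Data, b.Status, b.Language };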

    Read the article

  • SharePoint 2007 - Record Read-Only

    - by Luis
    Hi all, in an "Issue" list I have a mandatory field called "Status" that tracks the status of each item (Identified, Implemented, Postponed, etc.). I need to control the behaviour of each record based on that field's value: for certain statuses the record should become read-only, and only admin users should be able to edit or change it. Is there a way to implement this functionality? Thanks in advance. Regards, Luis
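    One approach that fits SharePoint 2007 is an item event receiver that cancels edits when the status is in a locked state and the current user is not privileged. The sketch below is an assumption-laden outline, not a drop-in solution: the field name "Status", the locked values, and the use of IsSiteAdmin as the "admin" test all need to be adapted to the actual list:

        // Sketch only: field name, locked values and the admin check are assumptions.
        using Microsoft.SharePoint;

        public class IssueStatusReceiver : SPItemEventReceiver
        {
            public override void ItemUpdating(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                string status = item != null ? item["Status"] as string : null;

                bool isAdmin = properties.OpenWeb().CurrentUser.IsSiteAdmin;

                if (!isAdmin && (status == "Implemented" || status == "Postponed"))
                {
                    properties.Cancel = true;
                    properties.ErrorMessage =
                        "This issue is read-only in its current status.";
                }
            }
        }

    The receiver is then bound to the Issue list (for example from a feature receiver), and non-admin edits of locked items are rejected with the message above.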

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >