Search Results

Search found 18925 results on 757 pages for 'items state'.

  • Why does this sql statement keep saying it is a boolean and not a parameter? (php/Mysql)

    - by ggfan
    In this statement, I am trying to see if there is already a posting in the database that has the exact same title, price, city, state and detail. If there is, it tells the user that the exact same post has already been made; if not, the posting is inserted into the database. (This is one type of check so that users can't accidentally post twice. This may not be the best check, but this statement error is annoying me, so I want it to work :)) Why won't this SQL work? I think it's not evaluating title=$title, i.e. not getting the value of $title...

    ERROR: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in postad.php on line 365

        // there is a form that users fill out that has title, price, city, etc
        <form> blah blah </form>

        // if the user clicks submit, do all the checks and, if all okay, insert into the db
        if (isset($_POST['submit'])) {
            // Grab the posting data from the POST and get rid of any funny stuff
            $title  = mysqli_real_escape_string($dbc, trim($_POST['title']));
            $price  = mysqli_real_escape_string($dbc, trim($_POST['price']));
            $city   = mysqli_real_escape_string($dbc, trim($_POST['city']));
            $state  = mysqli_real_escape_string($dbc, trim($_POST['state']));
            $detail = mysqli_real_escape_string($dbc, trim($_POST['detail']));

            if (!is_numeric($price) && !empty($price)) {
                echo "<p class='error'>The price can only be numbers. No special characters, etc</p>";
            }

            // Error problem... won't let me set title=$title, detail=$detail, etc.
            // This statement runs after all the checks so that none of the variables are empty
            $query = "SELECT * FROM posting WHERE user_id={$_SESSION['user_id']} AND title=$title AND price=$price AND city=$city AND state=$state AND detail=$detail";
            $data = mysqli_query($dbc, $query);

            if (mysqli_num_rows($data) == 1) {
                echo "You already posted this ad. Most likely caused by refreshing too many times.";
            }
        }
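    The error means mysqli_query() returned false because the query itself failed, so mysqli_num_rows() received a boolean instead of a result set. The most likely cause is that the string values in the WHERE clause are unquoted, so MySQL reads them as column names. A minimal sketch of one way to fix it with a prepared statement (column and variable names are taken from the question; the type string is an assumption you would adjust to the real column types):

        $query = "SELECT * FROM posting
                  WHERE user_id = ? AND title = ? AND price = ? AND city = ? AND state = ? AND detail = ?";
        $stmt = mysqli_prepare($dbc, $query);
        // 'i' = integer, 's' = string
        mysqli_stmt_bind_param($stmt, 'isssss', $_SESSION['user_id'], $title, $price, $city, $state, $detail);
        mysqli_stmt_execute($stmt);
        mysqli_stmt_store_result($stmt);

        if (mysqli_stmt_num_rows($stmt) > 0) {
            echo "You already posted this ad.";
        }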

  • How to open multiple socket connections and do callbacks in PHP

    - by Click Upvote
    I'm writing some code which processes a queue of items. The way it works is this:

    1. Get the next item flagged as needing to be processed from the MySQL database.
    2. Request some info from a Google API using cURL and wait until the info is returned.
    3. Do the remainder of the processing based on the info returned.
    4. Flag the item as processed in the db and move on to the next item.

    The problem is step 2: Google sometimes takes 10-15 seconds to return the requested info, and during this time my script has to sit halted and wait. I'm wondering if I could change the code to do the following instead:

    1. Get the next 5 items to be processed as usual.
    2. Request info for items 1-5 from Google, one after the other.
    3. When the info for item 1 is returned, a 'callback' should fire which calls a function or otherwise runs some code that does the remainder of the processing on items 1-5.
    4. The script then starts over until all pending items in the db are marked processed.

    How can something like this be achieved?
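    One common approach in plain PHP is the curl_multi API, which issues several requests concurrently and lets you process each response as it completes. A rough sketch, assuming getPendingItems(), buildGoogleUrl() and processItem() are your own (hypothetical) functions:

        $items = getPendingItems(5);                    // hypothetical: the next 5 unprocessed rows
        $mh = curl_multi_init();

        foreach ($items as $id => $item) {
            $ch = curl_init(buildGoogleUrl($item));     // hypothetical URL builder
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_PRIVATE, $id);     // remember which item this handle belongs to
            curl_multi_add_handle($mh, $ch);
        }

        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);                     // block until at least one handle has activity
            while ($info = curl_multi_info_read($mh)) {
                $ch = $info['handle'];
                $response = curl_multi_getcontent($ch);
                $id = curl_getinfo($ch, CURLINFO_PRIVATE);
                processItem($id, $response);            // your per-item "callback"
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
        } while ($running > 0);

        curl_multi_close($mh);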

  • How to return result set based on other rows

    - by understack
    I've 2 tables - packages and items. The items table contains all the items belonging to the packages, along with location information, like this:

    Packages table:
        id, name,  type (enum{general,special})
        1,  name1, general
        2,  name2, special

    Items table:
        id, package_id, location
        1,  1,          America
        2,  1,          Africa
        3,  1,          Europe
        4,  2,          Europe

    Question: I want to find all 'special' packages belonging to a location, and if no special package is found then it should return the 'general' packages belonging to the same location. So:

    for 'Europe': package 2 should be returned, since it is a special package (package 1 also belongs to Europe, but it is not required since it is a general package)
    for 'America': package 1 should be returned, since there are no special packages
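    One way to express the fallback in a single query is to select the special packages for the location and union in the general ones only when no special package exists for that location. A sketch (untested, using the table and column names from the question, with the location hard-coded as an example):

        SELECT DISTINCT p.*
        FROM packages p
        JOIN items i ON i.package_id = p.id
        WHERE i.location = 'Europe' AND p.type = 'special'
        UNION ALL
        SELECT DISTINCT p.*
        FROM packages p
        JOIN items i ON i.package_id = p.id
        WHERE i.location = 'Europe' AND p.type = 'general'
          AND NOT EXISTS (
                SELECT 1
                FROM packages p2
                JOIN items i2 ON i2.package_id = p2.id
                WHERE i2.location = 'Europe' AND p2.type = 'special'
          );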

  • Create ordering in a MySQL table without using a number (because then it's hard to put something in between)

    - by user347256
    I have a long list of items (say, a few million) in a MySQL table, let's call it mytable, and it has the field mytable.itemid. The items are given an order, and can be re-ordered by the user by drag and drop. If I add a field called mytable.order and just put numbers in it, it creates problems: what if I want to move an item between 2 other items? Then all the order fields have to be updated? That seems like a nightmare. Is there a (scalable) way to add order to a table that is different from just giving every item a number, ordering by that, and doing loads of SQL queries every time the order is changed?
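    One common workaround (a sketch of one option, not the only one) is to store the position as a sparse or fractional sort key, so that dropping an item between two neighbours only touches that one row; you renumber occasionally when the gaps run out. Assuming a DECIMAL column named sort_key (a name made up for the example):

        -- give new items widely spaced keys: 100, 200, 300, ...
        ALTER TABLE mytable ADD COLUMN sort_key DECIMAL(20,10) NOT NULL DEFAULT 0;

        -- move item 42 between the items whose keys are 200 and 300
        UPDATE mytable
        SET sort_key = (200 + 300) / 2   -- i.e. 250; in practice computed from the two neighbours
        WHERE itemid = 42;

        -- read the list back in order
        SELECT itemid FROM mytable ORDER BY sort_key;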

  • Why does java.util.concurrent.ArrayBlockingQueue use 'while' loops instead of 'if' around calls to await()?

    - by theFunkyEngineer
    I have been playing with my own version of this, using 'if', and all seems to be working fine. Of course this will break down horribly if signalAll() is used instead of signal(), but if only one thread at a time is notified, how can this go wrong? Their code here - check out the put() and take() methods; a simpler and more-to-the-point implementation can be seen at the top of the JavaDoc for Condition. Relevant portion of my implementation below.

        public Object get() {
            lock.lock();
            try {
                if (items.size() < 1)
                    hasItems.await();
                Object poppedValue = items.getLast();
                items.removeLast();
                hasSpace.signal();
                return poppedValue;
            } catch (InterruptedException e) {
                e.printStackTrace();
                return null;
            } finally {
                lock.unlock();
            }
        }

        public void put(Object item) {
            lock.lock();
            try {
                if (items.size() >= capacity)
                    hasSpace.await();
                items.addFirst(item);
                hasItems.signal();
                return;
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }

    P.S. I know that generally, particularly in lib classes like this, one should let the exceptions percolate up.
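    For context (this is the standard reasoning from the Condition documentation, not text from the question): await() is allowed to wake up spuriously, and another thread can acquire the lock and change the state between the signal and the moment the awakened thread actually continues, so the condition has to be re-checked in a loop. A minimal sketch of the guarded-wait idiom applied to the get() method above:

        public Object get() {
            lock.lock();
            try {
                while (items.size() < 1) {   // re-check the condition after every wakeup
                    hasItems.await();
                }
                Object poppedValue = items.removeLast();
                hasSpace.signal();
                return poppedValue;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // or, better, let the exception propagate
                return null;
            } finally {
                lock.unlock();
            }
        }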

  • C# Improved algorithm

    - by generixs
    I have been asked at interview (C# 3.0) to provide the logic to remove a list of items from a list. I responded with:

        int[] items = { 1, 2, 3, 4 };
        List<int> newList = new List<int>() { 1, 2, 3, 4, 5, 56, 788, 9 };
        newList.RemoveAll((int i) => { return items.Contains(i); });

    1) The interviewer replied that the algorithm I had employed will take progressively longer as the number of items grows, and asked me to give a better and faster one. What would be a more efficient algorithm?
    2) How can I achieve the same using LINQ?
    3) He asked me to provide an example of a "Two-Way-Closure". (In general I am aware of closures, but what is a "Two-Way-Closure"? I replied that no such term exists, but he was not satisfied.)
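    RemoveAll with items.Contains(i) is O(n*m), because Contains scans the array once for every element of the list. A sketch of the usual fix - put the items to remove into a HashSet so each lookup is O(1) - plus a LINQ variant:

        var toRemove = new HashSet<int>(items);
        newList.RemoveAll(i => toRemove.Contains(i));   // roughly O(n + m)

        // LINQ: builds a new list instead of mutating the old one,
        // and, being set-based, also drops duplicates from newList
        var filtered = newList.Except(items).ToList();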

  • Empty value when iterating a dictionary with .iteritems() method

    - by ptpatil
    I am having some weird trouble with dictionaries. I am trying to iterate pairs from a dictionary to pass to another function, but for some reason the loop always returns empty values. Here is the code:

        def LinktoCentral(self, linkmethod):
            if linkmethod == 'sim':
                linkworker = Linker.SimilarityLinker()
            matchlist = []
            for k, v in self.ToBeMatchedTable.iteritems():
                matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items()))

    Now if I insert a print line above the for loop:

            matchlist = []
            print self.ToBeMatchedTable.items()
            for k, v in self.ToBeMatchedTable.iteritems():
                matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items()))

    I get the data that is supposed to be in the dictionary printed out. The values of the dictionary are list objects. An example tuple I get from the dictionary when printing just above the for loop:

        >>> (1, ['AARP/United Health Care', '8002277789', 'PO Box 740819', 'Atlanta', 'GA', '30374-0819', 'Paper', '3676'])

    However, the for loop gives empty lists to the linkworker.GetBestMatch method. If I put a print line just below the for loop, here is what I get:

            matchlist = []
            for k, v in self.ToBeMatchedTable.iteritems():
                print self.ToBeMatchedTable.items()
                matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items()))
            ## Place holder for line to send match list to display window
            return matchlist

    Result of the first iteration:

        >>> (0, ['', '', '', '', '', '', '', ''])

    I literally have no idea what's going on; there is nothing else going on while this loop is executed. Any stupid mistakes I made?
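    One thing that stands out (an observation, not necessarily the cause of the empty values): list.append() takes a single argument, so matchlist.append(k, ...) with two arguments would raise a TypeError instead of storing a pair. A small sketch, mirroring the code from the question, that appends the key and the match as one tuple:

        matchlist = []
        for k, v in self.ToBeMatchedTable.iteritems():
            best = linkworker.GetBestMatch(v, self.CentralDataTable.items())
            matchlist.append((k, best))   # append a single (key, match) tuple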

  • Enable/Disable Input based on selection (jQuery)

    - by Nimbuz
    <select name="state" class="select" id="state"> <option value="something">Something</option> <option value="other">Other</option> </select> <input type="text" name="province" class="text" id="province" /> jQuery $('#state').change(function () { if ($('#state Other:selected').text() == "Other"){ $('#province').attr('disabled', false); alert(1); } else { alert(2); } }); Doesn't seem to work. I must be doing something wrong.

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There’s a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of an hook-points to allow customization of your application. Watching Brad Wilsons’ Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: “Represents a class that is responsible for invoking the action methods of a controller.” Well if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different ‘versions’ of InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that’s enough on the ‘behind-the-screens’ of this class. Let’s see how we can use this class to hook-up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type-in the output format in the url and should the get result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on where RenderAs is my controller. There are many ways of doing this and I’m using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is ‘RenderAsController’ and the action that’ll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my ‘RenderAsController’ class. I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and the most obvious one being that the return type is not of type ActionResult (or one of its derivatives). Instead I’m passing the type of the model itself (IList<Item> in this case). 
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. For an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look (the screenshots are in the original post). This is how we can leverage this feature of ASP.NET MVC to develop a better application. I've used the iTextSharp library to convert to pdf format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You'll get an option to download once you open the link.) Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has put on his blog some ugly code to refactor. First and foremost it is nearly impossible to reason about other peoples code without knowing the driving forces behind the current code. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is but I do not know about every brittle detail if I am allowed to reorder things here and there to simplify things. So I decided to make it much simpler by identifying the different responsibilities of the methods and encapsulate it in different classes. The code we need to refactor seems to deal with a handler after a message has been sent to a message queue. The handler does complete the current transaction if there is any and does handle any errors happening there. If during the the completion of the transaction errors occur the transaction is at least disposed. We can enter the handler already in a faulty state where we try to deliver the complete event in any case and signal a failure event and try to resend the message again to the queue if it was not inside a transaction. All is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: This code is a mess and could be written by me if I was under pressure. Here comes to code we want to refactor:         private void HandleMessageCompletion(                                      Message message,                                      TransactionScope tx,                                      OpenedQueue messageQueue,                                      Exception exception,                                      Action<CurrentMessageInformation, Exception> messageCompleted,                                      Action<CurrentMessageInformation> beforeTransactionCommit)         {             var txDisposed = false;             if (exception == null)             {                 try                 {                     if (tx != null)                     {                         if (beforeTransactionCommit != null)                             beforeTransactionCommit(currentMessageInformation);                         tx.Complete();                         tx.Dispose();                         txDisposed = true;                     }                     try                     {                         if (messageCompleted != null)                             messageCompleted(currentMessageInformation, exception);                     }                     catch (Exception e)                     {                         Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);                     }                     return;                 }                 catch (Exception e)                 {                     Trace.TraceWarning("Failed to complete transaction, moving to error mode"+ e);                     exception = e;                 }             }             try             {                 if (txDisposed == false && tx != null)                 {                     Trace.TraceWarning("Disposing transaction in error mode");                     tx.Dispose();                 }             }             catch (Exception e)             {                 Trace.TraceWarning("Failed to dispose of transaction in error mode."+ e);             }             if (message == null)    
             return;                 try             {                 if (messageCompleted != null)                     messageCompleted(currentMessageInformation, exception);             }             catch (Exception e)             {                 Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);             }               try             {                 var copy = MessageProcessingFailure;                 if (copy != null)                     copy(currentMessageInformation, exception);             }             catch (Exception moduleException)             {                 Trace.TraceError("Module failed to process message failure: " + exception.Message+                                              moduleException);             }               if (messageQueue.IsTransactional == false)// put the item back in the queue             {                 messageQueue.Send(message);             }         }     You can see quite some processing and handling going on there. Yes this looks like real world code one did put together to make things work and he does not trust his callbacks. I guess these are event handlers which are optional and the delegates were extracted from an event to call them back later when necessary.  Lets see what the author of this code did intend:          private void HandleMessageCompletion(             TransactionHandler transactionHandler,             MessageCompletionHandler handler,             CurrentMessageInformation messageInfo,             ErrorCollector errors             )         {               // commit current pending transaction             transactionHandler.CallHandlerAndCommit(messageInfo, errors);               // We have an error for a null message do not send completion event             if (messageInfo.CurrentMessage == null)                 return;               // Send completion event in any case regardless of errors             handler.OnMessageCompleted(messageInfo, errors);               // put message back if queue is not transactional             transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);         }   I did not bother to write the intention here again since the code should be pretty self explaining by now. I have used comments to explain the still nontrivial procedure step by step revealing the real intention about all this complex program flow. The original complexity of the problem domain does not go away but by applying the techniques of SRP (Single Responsibility Principle) and some functional style but we can abstract the necessary complexity away in useful abstractions which make it much easier to reason about it. Since most of the method seems to deal with errors I thought it was a good idea to encapsulate the error state of our current message in an ErrorCollector object which stores all exceptions in a list along with a description what the error all was about in the exception itself. We can log it later or not depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.          
class ErrorCollector          {              List<Exception> _Errors = new List<Exception>();                public void Add(Exception ex, string description)              {                  ex.Data["Description"] = description;                  _Errors.Add(ex);              }                public Exception Last              {                  get                  {                      return _Errors.LastOrDefault();                  }              }                public bool HasError              {                  get                  {                      return _Errors.Count > 0;                  }              }          }   Since the error state is global we have two choices to store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler)or pass it to the method calls when necessary. I did chose the latter one because a second argument does not hurt and makes it easier to reason about the overall state while the helper objects remain stateless and immutable which makes the helper objects much easier to understand and as a bonus thread safe as well. This does not mean that the stored member variables are stateless or thread safe as well but at least our helper classes are it. Most of the complexity is located the transaction handling I consider as a separate responsibility that I delegate to the TransactionHandler which does nothing if there is no transaction or Call the Before Commit Handler Commit Transaction Dispose Transaction if commit did throw In fact it has a second responsibility to resend the message if the transaction did fail. I did see a good fit there since it deals with transaction failures.          class TransactionHandler          {              TransactionScope _Tx;              Action<CurrentMessageInformation> _BeforeCommit;              OpenedQueue _MessageQueue;                public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)              {                  _Tx = tx;                  _BeforeCommit = beforeCommit;                  _MessageQueue = messageQueue;              }                public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  if (_Tx != null && !errors.HasError)                  {                      try                      {                          if (_BeforeCommit != null)                          {                              _BeforeCommit(currentMessageInfo);                          }                            _Tx.Complete();                          _Tx.Dispose();                      }                      catch (Exception ex)                      {                          errors.Add(ex, "Failed to complete transaction, moving to error mode");                          Trace.TraceWarning("Disposing transaction in error mode");                          try                          {                              _Tx.Dispose();                          }                          catch (Exception ex2)                          {                              errors.Add(ex2, "Failed to dispose of transaction in error mode.");                          }                      }                  }              }                public void ResendMessageOnError(Message message, ErrorCollector errors)              {                  if (errors.HasError && !_MessageQueue.IsTransactional)                  {                      _MessageQueue.Send(message);         
         }              }          } If we need to change the handling in the future we have a much easier time to reason about our application flow than before. After we did complete our transaction and called our callback we can call the completion handler which is the main purpose of the HandleMessageCompletion method after all. The responsiblity o the MessageCompletionHandler is to call the completion callback and the failure callback when some error has occurred.            class MessageCompletionHandler          {              Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;              Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;                public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,                                              Action<CurrentMessageInformation, Exception> messageProcessingFailure)              {                  _MessageCompletedHandler = messageCompletedHandler;                  _MessageProcessingFailure = messageProcessingFailure;              }                  public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  try                  {                      if (_MessageCompletedHandler != null)                      {                          _MessageCompletedHandler(currentMessageInfo, errors.Last);                      }                  }                  catch (Exception ex)                  {                      errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");                  }                    if (errors.HasError)                  {                      SignalFailedMessage(currentMessageInfo, errors);                  }              }                void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  try                  {                      if (_MessageProcessingFailure != null)                          _MessageProcessingFailure(currentMessageInfo, errors.Last);                  }                  catch (Exception moduleException)                  {                      errors.Add(moduleException, "Module failed to process message failure");                  }              }            }   If for some reason I did screw up the logic and we need to call the completion handler from our Transaction handler we can simple add to the CallHandlerAndCommit method a third argument to the MessageCompletionHandler and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once we have now one place the completion handler to capture the state. During this refactoring I simple put things together that belong together and came up with useful abstractions. 
If you look at the original argument list of the HandleMessageCompletion method, I have put many things together. Original arguments -> new argument (and what it encapsulates):

- Message message
  -> CurrentMessageInformation messageInfo (encapsulates Message message)
- TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit, OpenedQueue messageQueue
  -> TransactionHandler transactionHandler (encapsulates TransactionScope tx, OpenedQueue messageQueue, Action<CurrentMessageInformation> beforeTransactionCommit)
- Exception exception
  -> ErrorCollector errors
- Action<CurrentMessageInformation, Exception> messageCompleted
  -> MessageCompletionHandler handler (encapsulates Action<CurrentMessageInformation, Exception> messageCompleted, Action<CurrentMessageInformation, Exception> messageProcessingFailure)

The reason is simple: put the things that have relationships together and you will find useful abstractions almost automatically. I hope this makes sense to you. If you see a way to make it even simpler you can show Ayende your improved version as well.

  • Metro: Understanding the default.js File

    - by Stephen.Walther
    The goal of this blog entry is to describe — in painful detail — the contents of the default.js file in a Metro style application written with JavaScript. When you use Visual Studio to create a new Metro application then you get a default.js file automatically. The file is located in a folder named \js\default.js. The default.js file kicks off all of your custom JavaScript code. It is the main entry point to a Metro application. The default contents of the default.js file are included below: // For an introduction to the Blank template, see the following documentation: // http://go.microsoft.com/fwlink/?LinkId=232509 (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { if (eventObject.detail.previousExecutionState !== Windows.ApplicationModel.Activation.ApplicationExecutionState.terminated) { // TODO: This application has been newly launched. Initialize // your application here. } else { // TODO: This application has been reactivated from suspension. // Restore application state here. } WinJS.UI.processAll(); } }; app.oncheckpoint = function (eventObject) { // TODO: This application is about to be suspended. Save any state // that needs to persist across suspensions here. You might use the // WinJS.Application.sessionState object, which is automatically // saved and restored across suspension. If you need to complete an // asynchronous operation before your application is suspended, call // eventObject.setPromise(). }; app.start(); })(); There are several mysterious things happening in this file. The purpose of this blog entry is to dispel this mystery. Understanding the Module Pattern The first thing that you should notice about the default.js file is that the entire contents of this file are enclosed within a self-executing JavaScript function: (function () { ... })(); Metro applications written with JavaScript use something called the module pattern. The module pattern is a common pattern used in JavaScript applications to create private variables, objects, and methods. Anything that you create within the module is encapsulated within the module. Enclosing all of your custom code within a module prevents you from stomping on code from other libraries accidently. Your application might reference several JavaScript libraries and the JavaScript libraries might have variables, objects, or methods with the same names. By encapsulating your code in a module, you avoid overwriting variables, objects, or methods in the other libraries accidently. Enabling Strict Mode with “use strict” The first statement within the default.js module enables JavaScript strict mode: 'use strict'; Strict mode is a new feature of ECMAScript 5 (the latest standard for JavaScript) which enables you to make JavaScript more strict. For example, when strict mode is enabled, you cannot declare variables without using the var keyword. The following statement would result in an exception: hello = "world!"; When strict mode is enabled, this statement throws a ReferenceError. When strict mode is not enabled, a global variable is created which, most likely, is not what you want to happen. I’d rather get the exception instead of the unwanted global variable. 
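As a small illustrative aside (my sketch, not code from the default.js file), here is the same pattern in isolation - a self-executing function that keeps its state private while strict mode catches an accidental global; the names are made up for the example:

    (function () {
        "use strict";

        var launches = 0;              // private: not visible outside the module

        function recordLaunch() {
            launches += 1;             // fine: declared with var above
            // launchCount = launches; // would throw a ReferenceError under strict mode
            return launches;
        }

        recordLaunch();
    })();

    // 'launches' and 'recordLaunch' are not defined out here.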
The full specification for strict mode is contained in the ECMAScript 5 specification (look at Annex C): http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf Aliasing the WinJS.Application Object The next line of code in the default.js file is used to alias the WinJS.Application object: var app = WinJS.Application; This line of code enables you to use a short-hand syntax when referring to the WinJS.Application object: for example,  app.onactivated instead of WinJS.Application.onactivated. The WinJS.Application object  represents your running Metro application. Handling Application Events The default.js file contains an event handler for the WinJS.Application activated event: app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { if (eventObject.detail.previousExecutionState !== Windows.ApplicationModel.Activation.ApplicationExecutionState.terminated) { // TODO: This application has been newly launched. Initialize // your application here. } else { // TODO: This application has been reactivated from suspension. // Restore application state here. } WinJS.UI.processAll(); } }; This WinJS.Application class supports the following events: · loaded – Happens after browser DOMContentLoaded event. After this event, the DOM is ready and you can access elements in a page. This event is raised before external images have been loaded. · activated – Triggered by the Windows.UI.WebUI.WebUIApplication activated event. After this event, the WinRT is ready. · ready – Happens after both loaded and activated events. · unloaded – Happens before application is unloaded. The following default.js file has been modified to capture each of these events and write a message to the Visual Studio JavaScript Console window: (function () { "use strict"; var app = WinJS.Application; WinJS.Application.onloaded = function (e) { console.log("Loaded"); }; WinJS.Application.onactivated = function (e) { console.log("Activated"); }; WinJS.Application.onready = function (e) { console.log("Ready"); } WinJS.Application.onunload = function (e) { console.log("Unload"); } app.start(); })(); When you execute the code above, a message is written to the Visual Studio JavaScript Console window when each event occurs with the exception of the Unload event (presumably because the console is not attached when that event is raised).   Handling Different Activation Contexts The code for the activated handler in the default.js file looks like this: app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { if (eventObject.detail.previousExecutionState !== Windows.ApplicationModel.Activation.ApplicationExecutionState.terminated) { // TODO: This application has been newly launched. Initialize // your application here. } else { // TODO: This application has been reactivated from suspension. // Restore application state here. } WinJS.UI.processAll(); } }; Notice that the code contains a conditional which checks the Kind of the event (the value of e.detail.kind). The startup code is executed only when the activated event is triggered by a Launch event, The ActivationKind enumeration has the following values: · launch · search · shareTarget · file · protocol · fileOpenPicker · fileSavePicker · cacheFileUpdater · contactPicker · device · printTaskSettings · cameraSettings Metro style applications can be activated in different contexts. 
For example, a camera application can be activated when modifying camera settings. In that case, the ActivationKind would be CameraSettings. Because we want to execute our JavaScript code when our application first launches, we verify that the kind of the activation event is an ActivationKind.Launch event. There is a second conditional within the activated event handler which checks whether an application is being newly launched or whether the application is being resumed from a suspended state. When running a Metro application with Visual Studio, you can use Visual Studio to simulate different application execution states by taking advantage of the Debug toolbar and the new Debug Location toolbar.  Handling the checkpoint Event The default.js file also includes an event handler for the WinJS.Application checkpoint event: app.oncheckpoint = function (eventObject) { // TODO: This application is about to be suspended. Save any state // that needs to persist across suspensions here. You might use the // WinJS.Application.sessionState object, which is automatically // saved and restored across suspension. If you need to complete an // asynchronous operation before your application is suspended, call // eventObject.setPromise(). }; The checkpoint event is raised when your Metro application goes into a suspended state. The idea is that you can save your application data when your application is suspended and reload your application data when your application resumes. Starting the Application The final statement in the default.js file is the statement that gets everything going: app.start(); Events are queued up in a JavaScript array named eventQueue . Until you call the start() method, the events in the queue are not processed. If you don’t call the start() method then the Loaded, Activated, Ready, and Unloaded events are never raised. Summary The goal of this blog entry was to describe the contents of the default.js file which is the JavaScript file which you use to kick off your custom code in a Windows Metro style application written with JavaScript. In this blog entry, I discussed the module pattern, JavaScript strict mode, handling first chance exceptions, WinJS Application events, and activation contexts.

  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    v\:* {behavior:url(#default#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} I have recently been looking at the  latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I had put together some notes to help me gain a better understanding of the context of the PS6 BPM Case Management. Hopefully, this along with the other resources will enable you to gain a clear picture of the flexibility of this feature. Oracle BPM PS6 release includes Case Management capability. This initial release aims to provide: Case Management Framework Integration of Case Management with BPM & SOA suite It is best to regard the current PS6 case management feature as a case management framework. The framework provides the building blocks for creating a case management system that is fully integrated into Oracle BPM suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog which outlines Case Management within PS6 in the following link. I wanted to provide more context on Case Management from my perspective in this blog. PS6 Case Management - High level View BPM PS6 includes “Case” as a first class component in a SOA Suite composite. The Case components (added to the SOA Composite) are created when a BPM process is assigned to a case in JDveloper. The SOA Case component is defined and configured within JDevloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores etc. "Activities" are associated with a case, and become available to be executed via the case apis. Activities are BPM processes, Human Activities or Java call outs. The PS6 release includes some additional database tables to store the case metadata and case instance data (data object, comments, etc…). These new tables are created within the SOA_INFRA schema and the documents associated with that case into a document repository that is configured with the case. One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rule component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of that case. Events are fired during the lifetime of the case (e.g. Case created, activity started, activity ended, note added, document uploaded.) Internal Case state The internal state of a case is represented by the diagram below. This shows the internal states and the transition paths for a Case from one state to the next Each transition in state will create an event that can be enacted upon via the Case rules engine. The internal case state lifecycle is defined as follows Defining a case A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper, creates the following components within the SCA composite: Case component Case component interfaces (WSDL etc) Case Rules component (Oracle Business Rules) Adds the Case Component and Case Rules Component to the BPM SOA composite Case Configuration The following section gives a high level overview of the items that can be configured for a BPM Case. 
Case Activities A Case is associated with a set of activities that are to be performed as part of that Case. Case activities can be: SOA Human Tasks BPM processes Custom Task (Java Class) Case activities are created from pre-existing BPM process or human tasks, which, once defined, can be configured additionally as Case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly as: Milestones Milestones are (optional) user defined logical milestones that can be achieved within a case. No activities are associates with a milestone, but milestone attainment can be programmatically set and events raised when milestones are reached Outcomes User defined status of a completed case. An event is fired when an outcome is attained. Case Data Defines the data that will be stored with a case XML schemas define the data that is stored with the case. Case Documents Defines the location of documents that are attached to a case (e.g. WebCenter Content) User Defined Events Optional user defined events that can be fired or captured to drive case processing rules Stakeholders Defines the actors who can participate in the case (roles, users, groups) Defines permissions for individual case permissions (read case, create document etc…) Business Rules Business rules are the main component controlling the flow of a Case Each case has an associated business ruleset Rules are fired on receiving Case events (or User defined events) Life cycle events Milestone events Activity events Data events Document events Comment events User event Managing the Case Managing the lifecycle of a case is achieved in two ways: Managing case logic with Business Rules Managing the case lifecycle via the Case APIs. A BPM Case can be viewed as a set of case data & documents along with the activities that can be performed within a case and also the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case life is achieved though both the configuration of business rules and the “manual” interaction with a case instance through the Case APIs. Business Rules and Case Events A key component within the Case management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework on each of the processes and stages that a case instance will travel on its lifetime. The following case events are part of the BPM Case: Life cycle events Milestone events Activity events Data events Document events Comment events User event The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to enact upon an event being received. Case API & Interaction Along with the business rules component, Cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications to integrate into case management framework. The API’s allow for updating case comments & documents, executing case activities, updating milestones etc. As there is no in built case management UI functions within the PS6 release, Cases need to be managed via a custom built UI, interacting with selected case instances, launching case activities, closing cases etc. 
    (There is expected to be a UI component within subsequent releases.) Logical Case Flow The diagram below is intended to depict a logical view of the case steps for a typical case. A UI or other service calls the Case interface to create a Case instance; the case instance is created and the database data inserted; a lifecycle event is raised indicating a case activity (created) event; the case business rules capture the event and decide on an action to take (additionally, other parties can subscribe to Case events via EDN); the business rules may handle the event, e.g. be configured to execute a case activity on a case creation event; the BPM/Human Workflow/Custom activity is executed; a case activity event is raised when the activity executes; a case work UI or business service can then inspect the case instance and call other actions to progress that case, such as: Execute activity, Add Note, Add document, Add case data, Update Milestone, Raise user defined event, Suspend case, Resume case, Close Case. Summary Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases! Take a look at the following links for official documentation etc. http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm https://blogs.oracle.com/bpm/entry/just_in_case

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox VM running Solaris 11 with 6 virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common componenents are shared. Task:See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib#zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Adminastrion Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while mainaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.
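    One loose end worth illustrating: Exercise Z.3 noted that all file systems in a pool share the pool's free space, and that quotas and reservations (not covered in the lab) can change that behaviour. The sketch below assumes nothing beyond the standard quota and reservation properties of zfs(1M); the dataset names simply reuse the lab's mypool naming, so recreate them or substitute any datasets you still have.
    root@solaris:~# zfs set quota=1G mypool/mydata1          # cap mydata1 at 1 GB of pool space
    root@solaris:~# zfs set reservation=512M mypool/mydata2  # guarantee mydata2 at least 512 MB
    root@solaris:~# zfs get quota,reservation mypool/mydata1 mypool/mydata2
    With a quota in place the AVAIL column of zfs list is capped for that dataset, while a reservation subtracts guaranteed space from what its sibling datasets can use.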

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information and more is available in TFS. Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to the Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the History of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release; you may also need to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “Asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want. 
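    As a concrete instance of the WIQL template above (a sketch only: the bracketed field reference names are standard TFS work item fields, and the date is an arbitrary placeholder):
    select [System.Id], [System.Title], [System.State] from WorkItems where [System.WorkItemType] = 'Requirement' order by [System.Id] asof '2010-06-01'
    Saved as a .wiql file and imported, this returns the Requirements as they stood on that date rather than as they are today.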
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team but however it happens all of the Tasks create should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset which is committed to TFS in a single operation. During this operation developers can associate Work Item with the Changeset. Figure: Tasks are associated with Changesets   Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds   Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes super imposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines the Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Label’s is that they can be changed after they have been created with no tractability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed of for release. However it is still possible that someone changed the Label between this time and its creation. Another better method can be to explicitly link the Build output to the Build. 
    Builds – Let's tie some more of this together Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, which changes the “Integrated In” field of those Work Items. Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to test the existence of, and then guard against the re-existence of, a Bug. Figure: Associating a Test Case with an automated Test Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Let's not have regressions In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. 
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member. Business Analysts manage Requirements and Change Requests; Programmers manage Tasks and check in against Change Requests and Bugs; Testers manage Bugs and Test Cases; Build Masters manage Builds. Although there is some crossover there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?
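    To make the "which Bugs are in which Release" point concrete, here is a sketch of a WIQL query against the "Integrated In" field that the Build Agent updates. It assumes the field reference name used by the standard MSF templates, [Microsoft.VSTS.Build.IntegrationBuild] (check your own process template), and the build name is a placeholder:
    select [System.Id], [System.Title] from WorkItems where [System.WorkItemType] = 'Bug' and [Microsoft.VSTS.Build.IntegrationBuild] = 'MyProduct_20100601.2'
    Run with the name of a release build, this lists every Bug whose fix that build contains.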

    Read the article

  • How to check Exadata versions

    - by Liu Maclean
    ??check Exadata Image & OS versions , GI & DB patches sundiag exacheck cellserv ==> imageinfo dbhost ==> /usr/local/bin/imagehistory Also check the version of the switch. Login to Switch and execute the following command [root@myswitch-1 sbin]# version [root@dmorlsw-ib2 sbin]# cd /usr/local/bin [root@dmorlsw-ib2 bin]# ls -lrt version -rwxr-xr-x 1 root root 20356 Apr 4 2011 version Output will look as below. [root@dmorlsw-ib2 ~]# version SUN DCS 36p version: 1.3.3-2 Build time: Apr 4 2011 11:15:19 SP board info: Manufacturing Date: 2009.05.05 Serial Number: "NCD3X0178" Hardware Revision: 0x0006 Firmware Revision: 0x0102 BIOS version: NOW1R112 BIOS date: 04/24/2009 ib8# cat /sys/class/infiniband/is4_0/fw_ver 7.2.300 ib8 # cat /sys/class/dmi/id/bios_version NOW1R112 ib8 # nm2version NM2-36p version: 1.0.1-1 Build time: Sep 14 2009 12:52:51 ComExpress info: Manufacturing Date: 2009.08.19 Serial Number: Hardware Revision: 0x0006 Firmware Revision: 0x0102 { case `uname` in Linux ) ILOM="/usr/bin/ipmitool sunoem cli" ;; SunOS ) ILOM="/opt/ipmitool/bin/ipmitool sunoem cli" ;; esac ; ImageInfo="/opt/oracle.cellos/imageinfo" ; uname -srm ; head -1 /etc/*release ; uptime | cut -d, -f1 ; $ILOM "show /SP system_description system_identifier" | grep = ; $ImageInfo -activated -node -status -ver | grep -v ^$ ; } | tee /tmp/ExaInfo.log $GRID_HOME/OPatch/opatch lsinv -all -oh $GRID_HOME | tee /tmp/OPatchInv.log $ORACLE_HOME/OPatch/opatch lsinv -all | tee -a /tmp/OPatchInv.log cat /tmp/ExaInfo.log Linux 2.6.18-128.1.16.0.1.el5 x86_64 ==> /etc/enterprise-release <== Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) ==> /etc/redhat-release <== Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) 20:37:56 up 458 days system_description = SUN FIRE X4170 SERVER, ILOM v3.0.6.10.b, r52264 system_identifier = Sun Oracle Database Machine Active image version: 11.2.1.2.3 Active image activated: XXXX-XX-XX 12:27:12 +0800 Active image status: success Active node type: COMPUTE Inactive image version: undefined FileName: OPatchInv.log ---------------- ... Oracle Home       : /u01/app/11.2.0/grid Central Inventory : /u01/app/oraInventory   from           : /etc/oraInst.loc OPatch version    : 11.2.0.1.2 OUI version       : 11.2.0.1.0 OUI location      : /u01/app/11.2.0/grid/oui ... -------------------------------------------------------------------------------- List of Oracle Homes:   Name                                       Location   Ora11g_gridinfrahome1         /u01/app/11.2.0/grid   OraDb11g_home1                  /u01/app/oracle/product/11.2.0/dbhome_1 -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Grid Infrastructure                                           11.2.0.1.0 ... Interim patches (2) : Patch  9524394      : applied on Thu Jun 03 20:46:05 CST 2010 ... {TRACKING BUG FOR 11.2.0.1 DB MACHINE BUNDLE PATCH 3} Patch  9455587      : applied on Fri Apr 02 18:27:47 CST 2010 ... {MERGE REQUEST ON TOP OF 11.2.0.1.0 FOR BUGS 8483425 8667622 8702731 8730804} Rac system comprising of multiple nodes  Local node = dbserv01  Remote node = dbserv02  Remote node = dbserv03  Remote node = dbserv04 -------------------------------------------------------------------------------- OPatch succeeded. ... Oracle Home       : /u01/app/oracle/product/11.2.0/dbhome_1 ... Oracle Database 11g                                                  11.2.0.1.0 ... 
Interim patches (5) : Patch  8888434      : applied on Sat Jan 08 00:27:33 CST 2011 ... {AIX-ASM-CF: LMHB TERMINATE INSTANCE WHEN OFFLINE ONE FAILGROUP IN ASM DG} Patch  8730312      : applied on Thu Jun 03 21:30:03 CST 2010 ... {FWD MERGE FOR BASE BUG 8715387 FOR 12G} Patch  9502717      : applied on Thu Jun 03 21:25:54 CST 2010 ... {LMS HIT ORA-600 [KJBLDRMNEXTPKEY:SEEN] AND CRASHED THE INSTANCE} { + same 2 as GI above} ?? cell server Cache Policy cell08# MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy' Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU cell09# MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy' Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU Cache policy is in WB Would recommend proactive  battery repalcement. Example : a. /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp  -Cache -LALL -aALL ####( Will list the cache policy) b. /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp  -WB  -LALL -aALL ####( Will try to change teh policy from xx to WB)     So policy Change to WB will not come into effect immediately     Set Write Policy to WriteBack on Adapter 0, VD 0 (target id: 0) success     Battery capacity is below the threshold value ??cell BBU??????: cell08# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0 BBU status for Adapter: 0 BatteryType: iBBU Voltage: 4061 mV Current: 0 mA Temperature: 36 C BBU Firmware Status: Charging Status : None Voltage : OK Temperature : OK Learn Cycle Requested : No Learn Cycle Active : No Learn Cycle Status : OK Learn Cycle Timeout : No I2c Errors Detected : No Battery Pack Missing : No Battery Replacement required : No Remaining Capacity Low : Yes Periodic Learn Required : No Battery state: GasGuageStatus: Fully Discharged : No Fully Charged : Yes Discharging : Yes Initialized : Yes Remaining Time Alarm : No Remaining Capacity Alarm: No Discharge Terminated : No Over Temperature : No Charging Terminated : No Over Charged : No Relative State of Charge: 99 % Charger System State: 49168 Charger System Ctrl: 0 Charging current: 0 mA Absolute state of charge: 21 % Max Error: 2 % Exit Code: 0x00 ????BBU ??: dcli -g ~/cell_group -l root -t '{ uname -srm ; head -1 /etc/*release ; uptime | cut -d, -f1 ; imagehistory ; ipmitool sunoem cli "show /SP system_description system_identifier" | grep = ; ipmitool sunoem cli "show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0 | egrep -i 'BBU|Battery|Charge:|Fully|Low|Learn' ; }' | tee /tmp/ExaInfo.log Target cells: ['cellserv01', 'cellserv02', 'cellserv03', 'cellserv04', 'cellserv05', 'cellserv06', 'cellserv07'] cellserv01: Linux 2.6.18-128.1.16.0.1.el5 x86_64 cellserv01: ==> /etc/enterprise-release <== cellserv01: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) cellserv01: cellserv01: ==> /etc/redhat-release <== cellserv01: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) cellserv01: 01:17:39 up 635 days cellserv01: Version : 11.2.1.2.1 cellserv01: Image activation date : 2011-03-25 11:59:34 -0800 cellserv01: Imaging mode : fresh cellserv01: Imaging status : success cellserv01: cellserv01: Version : 11.2.1.2.3 cellserv01: Image activation date : 2011-04-13 12:15:46 +0800 cellserv01: Imaging mode : patch cellserv01: Imaging status : success cellserv01: cellserv01: Version 
: 11.2.1.2.6 cellserv01: Image activation date : 2011-05-27 23:08:22 +0800 cellserv01: Imaging mode : patch cellserv01: Imaging status : success cellserv01: cellserv01: system_description = SUN FIRE X4275 SERVER, ILOM v3.0.6.10.b, r52264 cellserv01: system_identifier = Sun Oracle Database Machine cellserv01: Connected. Use ^D to exit. cellserv01: -> show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED cellserv01: show: No matching properties found. cellserv01: cellserv01: -> Session closed cellserv01: Disconnected cellserv01: BBU status for Adapter: 0 cellserv01: BatteryType: iBBU cellserv01: BBU Firmware Status: cellserv01: Learn Cycle Requested : No cellserv01: Learn Cycle Active : No cellserv01: Learn Cycle Status : OK cellserv01: Learn Cycle Timeout : No cellserv01: Battery Pack Missing : No cellserv01: Battery Replacement required : No cellserv01: Remaining Capacity Low : Yes cellserv01: Periodic Learn Required : No cellserv01: Battery state: cellserv01: Fully Discharged : No cellserv01: Fully Charged : Yes cellserv01: Relative State of Charge: 99 % cellserv01: Absolute state of charge: 21 % dcli -l root -g /root/all_group '/opt/MegaRAID/MegAaCli/MegaCli64 -AdpBbuCmd -a0' > BBU.out check ipmi: dcli -g ~/cell_group -l root -t '{ > ipmitool sunoem cli "show /SP/policy FLASH_ACCELERATOR_CARD_INSTALLED" | grep = ; MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy' ; }' | tee /tmp/ExaCells.log
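    To run the same version checks across every cell in one pass, dcli can fan the commands out using the cell_group file referenced above. A sketch only: dbs_group is a hypothetical host-list file for the database servers, so substitute whatever group files exist on your system.
    # Active cell image versions, one line per cell
    dcli -g ~/cell_group -l root "imageinfo | grep 'Active image version'"
    # Image history on the database servers (dbs_group is an assumed host list)
    dcli -g ~/dbs_group -l root "/usr/local/bin/imagehistory | grep Version"
    Any cell or node reporting a version different from its peers is worth investigating before applying further patches.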

    Read the article

  • ASP.NET Podcast Show #148 - ASP.NET WebForms to build a Mobile Web Application

    - by Wallym
    Check the podcast site for the original url. This is the video and source code for an ASP.NET WebForms app that I wrote that is optimized for the iPhone and mobile environments.  Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download M4V for iPhone/iPad. Download MP3. Link to iWebKit. Source Code: <%@ Page Title="MapSplore" Language="C#" MasterPageFile="iPhoneMaster.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="AT_iPhone_Default" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="Content" Runat="Server" ClientIDMode="Static">    <asp:ScriptManager ID="sm" runat="server"         EnablePartialRendering="true" EnableHistory="false" EnableCdn="true" />    <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>    <script  language="javascript"  type="text/javascript">    <!--    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandle);    function endRequestHandle(sender, Args) {        setupMapDiv();        setupPlaceIveBeen();    }    function setupPlaceIveBeen() {        var mapPlaceIveBeen = document.getElementById('divPlaceIveBeen');        if (mapPlaceIveBeen != null) {            var PlaceLat = document.getElementById('<%=hdPlaceIveBeenLatitude.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceIveBeenLongitude.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=lblPlaceIveBeenName.ClientID %>').innerHTML;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapPlaceIveBeen, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }    }    function setupMapDiv() {        var mapdiv = document.getElementById('divImHere');        if (mapdiv != null) {            var PlaceLat = document.getElementById('<%=hdPlaceLat.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceLon.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=hdPlaceTitle.ClientID %>').value;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapdiv, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }     }    -->    </script>    <asp:HiddenField ID="Latitude" runat="server" />    <asp:HiddenField ID="Longitude" runat="server" />    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js%22%3E%3C/script>    <script language="javascript" type="text/javascript">        $(document).ready(function () {            GetLocation();            setupMapDiv();            setupPlaceIveBeen();        });        function GetLocation() {            if (navigator.geolocation 
!= null) {                navigator.geolocation.getCurrentPosition(getData);            }            else {                var mess = document.getElementById('<%=Message.ClientID %>');                mess.innerHTML = "Sorry, your browser does not support geolocation. " +                    "Try the latest version of Safari on the iPhone, Android browser, or the latest version of FireFox.";            }        }        function UpdateLocation_Click() {            GetLocation();        }        function getData(position) {            var latitude = position.coords.latitude;            var longitude = position.coords.longitude;            var hdLat = document.getElementById('<%=Latitude.ClientID %>');            var hdLon = document.getElementById('<%=Longitude.ClientID %>');            hdLat.value = latitude;            hdLon.value = longitude;        }    </script>    <asp:Label ID="Message" runat="server" />    <asp:UpdatePanel ID="upl" runat="server">        <ContentTemplate>    <asp:Panel ID="pnlStart" runat="server" Visible="true">    <div id="topbar">        <div id="title">MapSplore</div>    </div>    <div id="content">        <ul class="pageitem">            <li class="menu">                <asp:LinkButton ID="lbLocalDeals" runat="server" onclick="lbLocalDeals_Click">                <asp:Image ID="imLocalDeals" runat="server" ImageUrl="~/Images/ArtFavor_Money_Bag_Icon.png" Height="30" />                <span class="name">Local Deals.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbLocalPlaces" runat="server" onclick="lbLocalPlaces_Click">                <asp:Image ID="imLocalPlaces" runat="server" ImageUrl="~/Images/Andy_Houses_on_the_horizon_-_Starburst_remix.png" Height="30" />                <span class="name">Local Places.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbWhereIveBeen" runat="server" onclick="lbWhereIveBeen_Click">                <asp:Image ID="imImHere" runat="server" ImageUrl="~/Images/ryanlerch_flagpole.png" Height="30" />                <span class="name">I've been here.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbMyStats" runat="server">                <asp:Image ID="imMyStats" runat="server" ImageUrl="~/Images/Anonymous_Spreadsheet.png" Height="30" />                <span class="name">My Stats.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbAddAPlace" runat="server" onclick="lbAddAPlace_Click">                <asp:Image ID="imAddAPlace" runat="server" ImageUrl="~/Images/jean_victor_balin_add.png" Height="30" />                <span class="name">Add a Place.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="button">                <input type="button" value="Update Your Current Location" onclick="UpdateLocation_Click()">                </li>        </ul>    </div>    </asp:Panel>    <div>    <asp:Panel ID="pnlCoupons" runat="server" Visible="false">        <div id="topbar">        <div id="title">MapSplore</div>        <div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"        
         ID="ReturnFromDeals" OnClick="ReturnFromDeals_Click" /></div></div>    <div class="content">    <asp:ListView ID="lvCoupons" runat="server">        <LayoutTemplate>            <ul class="pageitem" runat="server">                <asp:PlaceHolder ID="itemPlaceholder" runat="server" />            </ul>        </LayoutTemplate>        <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Place.Name") %>' OnClick="lbBusiness_Click">                    <span class="comment">                    <asp:Label ID="lblAddress" runat="server" Text='<%#Eval("Place.Address1") %>' />                    <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Place.Distance"))) + " meters" %>' CssClass="smallText" />                    <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                    <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                    </span>                    <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView><asp:GridView ID="gvCoupons" runat="server" AutoGenerateColumns="false">            <HeaderStyle BackColor="Silver" />            <AlternatingRowStyle BackColor="Wheat" />            <Columns>                <asp:TemplateField AccessibleHeaderText="Business" HeaderText="Business">                    <ItemTemplate>                        <asp:Image ID="imPlaceType" runat="server" Text='<%#Eval("Type") %>' ImageUrl='<%#Eval("Image") %>' />                        <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Name") %>' OnClick="lbBusiness_Click" />                        <asp:LinkButton ID="lblAddress" runat="server" Text='<%#Eval("Address1") %>' CssClass="smallText" />                        <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>' CssClass="smallText" />                        <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                        <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                        <asp:Label ID="lblInfo" runat="server" Visible="false" />                    </ItemTemplate>                </asp:TemplateField>            </Columns>        </asp:GridView>    </div>    </asp:Panel>    <asp:Panel ID="pnlPlaces" runat="server" Visible="false">    <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"                 ID="ReturnFromPlaces" OnClick="ReturnFromPlaces_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvPlaces" runat="server">            <LayoutTemplate>                <ul id="ulPlaces" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                    <li class="menu">                        <asp:LinkButton ID="lbNotListed" runat="server" CssClass="name"                            OnClick="lbNotListed_Click">                            Place not listed                            <span class="arrow"></span>                            </asp:LinkButton>                    </li>                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbImHere" runat="server" CssClass="name"                  
   OnClick="lbImHere_Click">                <%#DisplayName(Eval("Name")) %>&nbsp;                <%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>                <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView>    </div>    </asp:Panel>    <asp:Panel ID="pnlImHereNow" runat="server" Visible="false">        <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Places"                 ID="lbImHereNowReturn" OnClick="lbImHereNowReturn_Click" /></div></div>            <div id="rightbutton">            <asp:LinkButton runat="server" Text="Beginning"                ID="lbBackToBeginning" OnClick="lbBackToBeginning_Click" />            </div>        <div id="content">        <ul class="pageitem">        <asp:HiddenField ID="hdPlaceId" runat="server" />        <asp:HiddenField ID="hdPlaceLat" runat="server" />        <asp:HiddenField ID="hdPlaceLon" runat="server" />        <asp:HiddenField ID="hdPlaceTitle" runat="server" />        <asp:Button ID="btnImHereNow" runat="server"             Text="I'm here" OnClick="btnImHereNow_Click" />             <asp:Label ID="lblPlaceTitle" runat="server" /><br />        <asp:TextBox ID="txtWhatsHappening" runat="server" TextMode="MultiLine" Rows="2" style="width:300px" /><br />        <div id="divImHere" style="width:300px; height:300px"></div>        </div>        </ul>    </asp:Panel>    <asp:Panel runat="server" ID="pnlIveBeenHere" Visible="false">        <div id="topbar">        <div id="title">            Where I've been</div><div id="leftbutton">            <asp:LinkButton ID="lbIveBeenHereBack" runat="server" Text="Back" OnClick="lbIveBeenHereBack_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvWhereIveBeen" runat="server">            <LayoutTemplate>                <ul id="ulWhereIveBeen" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu" runat="server">                <asp:LinkButton ID="lbPlaceIveBeen" runat="server" OnClick="lbPlaceIveBeen_Click" CssClass="name">                    <asp:Label ID="lblPlace" runat="server" Text='<%#Eval("PlaceName") %>' /> at                    <asp:Label ID="lblTime" runat="server" Text='<%#Eval("ATTime") %>' CssClass="content" />                    <asp:HiddenField ID="hdATID" runat="server" Value='<%#Eval("ATID") %>' />                    <span class="arrow"></span>                </asp:LinkButton>            </li>            </ItemTemplate>        </asp:ListView>        </div>        </asp:Panel>    <asp:Panel runat="server" ID="pnlPlaceIveBeen" Visible="false">        <div id="topbar">        <div id="title">            I've been here        </div>        <div id="leftbutton">            <asp:LinkButton ID="lbPlaceIveBeenBack" runat="server" Text="Back" OnClick="lbPlaceIveBeenBack_Click" />        </div>        <div id="rightbutton">            <asp:LinkButton ID="lbPlaceIveBeenBeginning" runat="server" Text="Beginning" OnClick="lbPlaceIveBeenBeginning_Click" />        </div>        </div>        <div id="content">            <ul class="pageitem">            <li>            <asp:HiddenField ID="hdPlaceIveBeenPlaceId" runat="server" />            <asp:HiddenField 
ID="hdPlaceIveBeenLatitude" runat="server" />            <asp:HiddenField ID="hdPlaceIveBeenLongitude" runat="server" />            <asp:Label ID="lblPlaceIveBeenName" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenAddress" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCity" runat="server" />,             <asp:Label ID="lblPlaceIveBeenState" runat="server" />            <asp:Label ID="lblPlaceIveBeenZipCode" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCountry" runat="server" /><br />            <div id="divPlaceIveBeen" style="width:300px; height:300px"></div>            </li>            </ul>        </div>                </asp:Panel>         <asp:Panel ID="pnlAddPlace" runat="server" Visible="false">                <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton ID="lbAddPlaceReturn" runat="server" Text="Back" OnClick="lbAddPlaceReturn_Click" /></div><div id="rightnav"></div></div><div id="content">    <ul class="pageitem">        <li id="liPlaceAddMessage" runat="server" visible="false">        <asp:Label ID="PlaceAddMessage" runat="server" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtPlaceName" runat="server" placeholder="Name of Establishment" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtAddress1" runat="server" placeholder="Address 1" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtCity" runat="server" placeholder="City" />        </li>        <li class="select">        <asp:DropDownList ID="ddlProvince" runat="server" placeholder="Select State" />          <span class="arrow"></span>              </li>        <li class="bigfield">        <asp:TextBox ID="txtZipCode" runat="server" placeholder="Zip Code" />        </li>        <li class="select">        <asp:DropDownList ID="ddlCountry" runat="server"             onselectedindexchanged="ddlCountry_SelectedIndexChanged" />        <span class="arrow"></span>        </li>        <li class="bigfield">        <asp:TextBox ID="txtPhoneNumber" runat="server" placeholder="Phone Number" />        </li>        <li class="checkbox">            <span class="name">You Here Now:</span> <asp:CheckBox ID="cbYouHereNow" runat="server" Checked="true" />        </li>        <li class="button">        <asp:Button ID="btnAdd" runat="server" Text="Add Place"             onclick="btnAdd_Click" />        </li>    </ul></div>        </asp:Panel>        <asp:Panel ID="pnlImHere" runat="server" Visible="false">            <asp:TextBox ID="txtImHere" runat="server"                 TextMode="MultiLine" Rows="3" Columns="40" /><br />            <asp:DropDownList ID="ddlPlace" runat="server" /><br />            <asp:Button ID="btnHere" runat="server" Text="Tell Everyone I'm Here"                 onclick="btnHere_Click" /><br />        </asp:Panel>     </div>    </ContentTemplate>    </asp:UpdatePanel> </asp:Content> Code Behind .cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using LocationDataModel; public partial class AT_iPhone_Default : ViewStatePage{    private iPhoneDevice ipd;     protected void Page_Load(object sender, EventArgs e)    {        LocationDataEntities lde = new LocationDataEntities();        if (!Page.IsPostBack)        {            var Countries = from c in lde.Countries select c;            foreach (Country co in 
Countries)            {                ddlCountry.Items.Add(new ListItem(co.Name, co.CountryId.ToString()));            }            ddlCountry_SelectedIndexChanged(ddlCountry, null);            if (AppleIPhone.IsIPad())                ipd = iPhoneDevice.iPad;            if (AppleIPhone.IsIPhone())                ipd = iPhoneDevice.iPhone;            if (AppleIPhone.IsIPodTouch())                ipd = iPhoneDevice.iPodTouch;        }    }    protected void btnPlaces_Click(object sender, EventArgs e)    {    }    protected void btnAdd_Click(object sender, EventArgs e)    {        bool blImHere = cbYouHereNow.Checked;        string Place = txtPlaceName.Text,            Address1 = txtAddress1.Text,            City = txtCity.Text,            ZipCode = txtZipCode.Text,            PhoneNumber = txtPhoneNumber.Text,            ProvinceId = ddlProvince.SelectedItem.Value,            CountryId = ddlCountry.SelectedItem.Value;        int iProvinceId, iCountryId;        double dLatitude, dLongitude;        DataAccess da = new DataAccess();        if ((!String.IsNullOrEmpty(ProvinceId)) &&            (!String.IsNullOrEmpty(CountryId)))        {            iProvinceId = Convert.ToInt32(ProvinceId);            iCountryId = Convert.ToInt32(CountryId);            if (blImHere)            {                dLatitude = Convert.ToDouble(Latitude.Value);                dLongitude = Convert.ToDouble(Longitude.Value);                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber,                    dLatitude, dLongitude);            }            else            {                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber);            }            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Awesome, your place has been added. 
Add Another!";            txtPlaceName.Text = String.Empty;            txtAddress1.Text = String.Empty;            txtCity.Text = String.Empty;            ddlProvince.SelectedIndex = -1;            txtZipCode.Text = String.Empty;            txtPhoneNumber.Text = String.Empty;        }        else        {            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Please select a State and a Country.";        }    }    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)    {        string CountryId = ddlCountry.SelectedItem.Value;        if (!String.IsNullOrEmpty(CountryId))        {            int iCountryId = Convert.ToInt32(CountryId);            LocationDataModel.LocationDataEntities lde = new LocationDataModel.LocationDataEntities();            var prov = from p in lde.Provinces where p.CountryId == iCountryId                        orderby p.ProvinceName select p;                        ddlProvince.Items.Add(String.Empty);            foreach (Province pr in prov)            {                ddlProvince.Items.Add(new ListItem(pr.ProvinceName, pr.ProvinceId.ToString()));            }        }        else        {            ddlProvince.Items.Clear();        }    }    protected void btnImHere_Click(object sender, EventArgs e)    {        int i = 0;        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value),            Lon = Convert.ToDouble(Longitude.Value);        List<Place> lp = da.NearByLocations(Lat, Lon);        foreach (Place p in lp)        {            ListItem li = new ListItem(p.Name, p.PlaceId.ToString());            if (i == 0)            {                li.Selected = true;            }            ddlPlace.Items.Add(li);            i++;        }        pnlAddPlace.Visible = false;        pnlImHere.Visible = true;    }    protected void lbImHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        ListViewItem lvi = (ListViewItem)(((LinkButton)sender).Parent);        HiddenField hd = (HiddenField)lvi.FindControl("hdPlaceId");        long PlaceId = Convert.ToInt64(hd.Value);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        Place pl = da.GetPlace(PlaceId);        pnlImHereNow.Visible = true;        pnlPlaces.Visible = false;        hdPlaceId.Value = PlaceId.ToString();        hdPlaceLat.Value = pl.Latitude.ToString();        hdPlaceLon.Value = pl.Longitude.ToString();        hdPlaceTitle.Value = pl.Name;        lblPlaceTitle.Text = pl.Name;    }    protected void btnHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        string WhatsH = txtImHere.Text;        long PlaceId = Convert.ToInt64(ddlPlace.SelectedValue);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsH,            dLatitude, dLongitude);    }    protected void btnLocalCoupons_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();     }    protected void lbBusiness_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        GridViewRow gvr = 
(GridViewRow)(((LinkButton)sender).Parent.Parent);        HiddenField hd = (HiddenField)gvr.FindControl("hdPlaceId");        string sPlaceId = hd.Value;        Int64 PlaceId;        if (!String.IsNullOrEmpty(sPlaceId))        {            PlaceId = Convert.ToInt64(sPlaceId);        }    }    protected void lbLocalDeals_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        pnlCoupons.Visible = true;        pnlStart.Visible = false;        List<GeoPromotion> lgp = da.NearByDeals(dLatitude, dLongitude);        lvCoupons.DataSource = lgp;        lvCoupons.DataBind();    }    protected void lbLocalPlaces_Click(object sender, EventArgs e)    {        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value);        double Lon = Convert.ToDouble(Longitude.Value);        List<LocationDataModel.Place> places = da.NearByLocations(Lat, Lon);        lvPlaces.DataSource = places;        lvPlaces.SelectedIndex = -1;        lvPlaces.DataBind();        pnlPlaces.Visible = true;        pnlStart.Visible = false;    }    protected void ReturnFromPlaces_Click(object sender, EventArgs e)    {        pnlPlaces.Visible = false;        pnlStart.Visible = true;    }    protected void ReturnFromDeals_Click(object sender, EventArgs e)    {        pnlCoupons.Visible = false;        pnlStart.Visible = true;    }    protected void btnImHereNow_Click(object sender, EventArgs e)    {        long PlaceId = Convert.ToInt32(hdPlaceId.Value);        string UserName = Membership.GetUser().UserName;        string WhatsHappening = txtWhatsHappening.Text;        double UserLat = Convert.ToDouble(Latitude.Value);        double UserLon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsHappening,             UserLat, UserLon);    }    protected void lbImHereNowReturn_Click(object sender, EventArgs e)    {        pnlImHereNow.Visible = false;        pnlPlaces.Visible = true;    }    protected void lbBackToBeginning_Click(object sender, EventArgs e)    {        pnlStart.Visible = true;        pnlImHereNow.Visible = false;    }    protected void lbWhereIveBeen_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        pnlStart.Visible = false;        pnlIveBeenHere.Visible = true;        DataAccess da = new DataAccess();        lvWhereIveBeen.DataSource = da.UserATs(UserName, 0, 15);        lvWhereIveBeen.DataBind();    }    protected void lbIveBeenHereBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = false;        pnlStart.Visible = true;    }     protected void lbPlaceIveBeen_Click(object sender, EventArgs e)    {        LinkButton lb = (LinkButton)sender;        ListViewItem lvi = (ListViewItem)lb.Parent.Parent;        HiddenField hdATID = (HiddenField)lvi.FindControl("hdATID");        Int64 ATID = Convert.ToInt64(hdATID.Value);        DataAccess da = new DataAccess();        pnlIveBeenHere.Visible = false;        pnlPlaceIveBeen.Visible = true;        var plac = da.GetPlaceViaATID(ATID);        hdPlaceIveBeenPlaceId.Value = plac.PlaceId.ToString();        hdPlaceIveBeenLatitude.Value = plac.Latitude.ToString();        hdPlaceIveBeenLongitude.Value = plac.Longitude.ToString();        lblPlaceIveBeenName.Text = plac.Name;        lblPlaceIveBeenAddress.Text = plac.Address1;        lblPlaceIveBeenCity.Text = 
plac.City;        lblPlaceIveBeenState.Text = plac.Province.ProvinceName;        lblPlaceIveBeenZipCode.Text = plac.ZipCode;        lblPlaceIveBeenCountry.Text = plac.Country.Name;    }     protected void lbNotListed_Click(object sender, EventArgs e)    {        SetupAddPoint();        pnlPlaces.Visible = false;    }     protected void lbAddAPlace_Click(object sender, EventArgs e)    {        SetupAddPoint();    }     private void SetupAddPoint()    {        double lat = Convert.ToDouble(Latitude.Value);        double lon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        var zip = da.WhereAmIAt(lat, lon);        if (zip.Count > 0)        {            var z0 = zip[0];            txtCity.Text = z0.City;            txtZipCode.Text = z0.ZipCode;            ddlProvince.ClearSelection();            if (z0.ProvinceId.HasValue == true)            {                foreach (ListItem li in ddlProvince.Items)                {                    if (li.Value == z0.ProvinceId.Value.ToString())                    {                        li.Selected = true;                        break;                    }                }            }        }        pnlAddPlace.Visible = true;        pnlStart.Visible = false;    }    protected void lbAddPlaceReturn_Click(object sender, EventArgs e)    {        pnlAddPlace.Visible = false;        pnlStart.Visible = true;        liPlaceAddMessage.Visible = false;        PlaceAddMessage.Text = String.Empty;    }    protected void lbPlaceIveBeenBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = true;        pnlPlaceIveBeen.Visible = false;            }    protected void lbPlaceIveBeenBeginning_Click(object sender, EventArgs e)    {        pnlPlaceIveBeen.Visible = false;        pnlStart.Visible = true;    }    protected string DisplayName(object val)    {        string strVal = Convert.ToString(val);         if (AppleIPhone.IsIPad())        {            ipd = iPhoneDevice.iPad;        }        if (AppleIPhone.IsIPhone())        {            ipd = iPhoneDevice.iPhone;        }        if (AppleIPhone.IsIPodTouch())        {            ipd = iPhoneDevice.iPodTouch;        }        return (iPhoneHelper.DisplayContentOnMenu(strVal, ipd));    }} iPhoneHelper.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; public enum iPhoneDevice{    iPhone, iPodTouch, iPad}/// <summary>/// Summary description for iPhoneHelper/// </summary>/// public class iPhoneHelper{ public iPhoneHelper() {  //  // TODO: Add constructor logic here  // } // This code is stupid in retrospect. 
Use css to solve this problem      public static string DisplayContentOnMenu(string val, iPhoneDevice ipd)    {        string Return = val;        string Elipsis = "...";        int iPadMaxLength = 30;        int iPhoneMaxLength = 15;        if (ipd == iPhoneDevice.iPad)        {            if (Return.Length > iPadMaxLength)            {                Return = Return.Substring(0, iPadMaxLength - Elipsis.Length) + Elipsis;            }        }        else        {            if (Return.Length > iPhoneMaxLength)            {                Return = Return.Substring(0, iPhoneMaxLength - Elipsis.Length) + Elipsis;            }        }        return (Return);    }}  Source code for the ViewStatePage: using System;using System.Data;using System.Data.SqlClient;using System.Configuration;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls; /// <summary>/// Summary description for BasePage/// </summary>#region Base class for a page.public class ViewStatePage : System.Web.UI.Page{     PageStatePersisterToDatabase myPageStatePersister;        public ViewStatePage()        : base()    {        myPageStatePersister = new PageStatePersisterToDatabase(this);    }     protected override PageStatePersister PageStatePersister    {        get        {            return myPageStatePersister;        }    } }#endregion #region This class will override the page persistence to store page state in a database.public class PageStatePersisterToDatabase : PageStatePersister{    private string ViewStateKeyField = "__VIEWSTATE_KEY";    private string _exNoConnectionStringFound = "No Database Configuration information is in the web.config.";     public PageStatePersisterToDatabase(Page page)        : base(page)    {    }     public override void Load()    {         // Get the cache key from the web form data        System.Int64 key = Convert.ToInt64(Page.Request.Params[ViewStateKeyField]);         Pair state = this.LoadState(key);         // Abort if cache object is not of type Pair        if (state == null)            throw new ApplicationException("Missing valid " + ViewStateKeyField);         // Set view state and control state        ViewState = state.First;        ControlState = state.Second;    }     public override void Save()    {         // No processing needed if no states available        if (ViewState == null && ControlState != null)            return;         System.Int64 key;        IStateFormatter formatter = this.StateFormatter;        Pair statePair = new Pair(ViewState, ControlState);         // Serialize the statePair object to a string.        string serializedState = formatter.Serialize(statePair);         // Save the ViewState and get a unique identifier back.        key = SaveState(serializedState);         // Register hidden field to store cache key in        // Page.ClientScript does not work properly with Atlas.        
//Page.ClientScript.RegisterHiddenField(ViewStateKeyField, key.ToString());        ScriptManager.RegisterHiddenField(this.Page, ViewStateKeyField, key.ToString());    }     private System.Int64 SaveState(string PageState)    {        System.Int64 i64Key = 0;        string strConn = String.Empty,            strProvider = String.Empty;         string strSql = "insert into tblPageState ( SerializedState ) values ( '" + SqlEscape(PageState) + "');select scope_identity();";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);            sqlCn.Open();            i64Key = Convert.ToInt64(sqlCm.ExecuteScalar());            if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return i64Key;    }     private Pair LoadState(System.Int64 iKey)    {        string strConn = String.Empty,            strProvider = String.Empty,            SerializedState = String.Empty,            strMinutesInPast = GetMinutesInPastToDelete();        Pair PageState;        string strSql = "select SerializedState from tblPageState where tblPageStateID=" + iKey.ToString() + ";" +            "delete from tblPageState where DateUpdated<DateAdd(mi, " + strMinutesInPast + ", getdate());";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);             sqlCn.Open();            SerializedState = Convert.ToString(sqlCm.ExecuteScalar());            IStateFormatter formatter = this.StateFormatter;             if ((null == SerializedState) ||                (String.Empty == SerializedState))            {                throw (new ApplicationException("No ViewState records were returned."));            }             // Deserilize returns the Pair object that is serialized in            // the Save method.            
PageState = (Pair)formatter.Deserialize(SerializedState);             if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return PageState;    }     private string SqlEscape(string Val)    {        string ReturnVal = String.Empty;        if (null != Val)        {            ReturnVal = Val.Replace("'", "''");        }        return (ReturnVal);    }    private void GetDBConnectionString(ref string ConnectionStringValue, ref string ProviderNameValue)    {        if (System.Configuration.ConfigurationManager.ConnectionStrings.Count > 0)        {            ConnectionStringValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;            ProviderNameValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ProviderName;        }        else        {            throw new ConfigurationErrorsException(_exNoConnectionStringFound);        }    }    private string GetMinutesInPastToDelete()    {        string strReturn = "-60";        if (null != System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"])        {            strReturn = System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"].ToString();        }        return (strReturn);    }}#endregion AppleiPhone.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; /// <summary>/// Summary description for AppleIPhone/// </summary>public class AppleIPhone{ public AppleIPhone() {  //  // TODO: Add constructor logic here  // }     static public bool IsIPhoneOS()    {        return (IsIPad() || IsIPhone() || IsIPodTouch());    }     static public bool IsIPhone()    {        return IsTest("iPhone");    }     static public bool IsIPodTouch()    {        return IsTest("iPod");    }     static public bool IsIPad()    {        return IsTest("iPad");    }     static private bool IsTest(string Agent)    {        bool bl = false;        string ua = HttpContext.Current.Request.UserAgent.ToLower();        try        {            bl = ua.Contains(Agent.ToLower());        }        catch { }        return (bl);        }} Master page .cs: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls; public partial class MasterPages_iPhoneMaster : System.Web.UI.MasterPage{    protected void Page_Load(object sender, EventArgs e)    {            HtmlHead head = Page.Header;            HtmlMeta meta = new HtmlMeta();            if (AppleIPhone.IsIPad() == true)            {                meta.Content = "width=400,user-scalable=no";                head.Controls.Add(meta);             }            else            {                meta.Content = "width=device-width, user-scalable=no";                meta.Attributes.Add("name", "viewport");            }            meta.Attributes.Add("name", "viewport");            head.Controls.Add(meta);            HtmlLink cssLink = new HtmlLink();            HtmlGenericControl script = new HtmlGenericControl("script");            script.Attributes.Add("type", "text/javascript");            script.Attributes.Add("src", ResolveUrl("~/Scripts/iWebKit/javascript/functions.js"));            head.Controls.Add(script);            
cssLink.Attributes.Add("rel", "stylesheet");            cssLink.Attributes.Add("href", ResolveUrl("~/Scripts/iWebKit/css/style.css") );            cssLink.Attributes.Add("type", "text/css");            head.Controls.Add(cssLink);            HtmlGenericControl jsLink = new HtmlGenericControl("script");            //jsLink.Attributes.Add("type", "text/javascript");            //jsLink.Attributes.Add("src", ResolveUrl("~/Scripts/jquery-1.4.1.min.js") );            //head.Controls.Add(jsLink);            HtmlLink appleIcon = new HtmlLink();            appleIcon.Attributes.Add("rel", "apple-touch-icon");            appleIcon.Attributes.Add("href", ResolveUrl("~/apple-touch-icon.png"));            HtmlMeta appleMobileWebAppStatusBarStyle = new HtmlMeta();            appleMobileWebAppStatusBarStyle.Attributes.Add("name", "apple-mobile-web-app-status-bar-style");            appleMobileWebAppStatusBarStyle.Attributes.Add("content", "black");            head.Controls.Add(appleMobileWebAppStatusBarStyle);    }     internal string FindPath(string Location)    {        string Url = Server.MapPath(Location);        return (Url);    }}
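One thing worth flagging in the ViewState persister above: SaveState and LoadState build their SQL by concatenating the serialized state and the key straight into the query text. A minimal parameterized sketch is below; it assumes the same tblPageState table and SerializedState column shown in the code and would slot into the PageStatePersisterToDatabase class (which already uses System.Data.SqlClient). It is an illustration, not the original author's method.

    // Hedged sketch: the same insert as SaveState, but with the page state
    // passed as a SqlParameter instead of being escaped and concatenated.
    private long SaveStateParameterized(string pageState, string connectionString)
    {
        const string sql =
            "insert into tblPageState (SerializedState) values (@state); select scope_identity();";
        using (SqlConnection cn = new SqlConnection(connectionString))
        using (SqlCommand cm = new SqlCommand(sql, cn))
        {
            cm.Parameters.AddWithValue("@state", pageState); // no manual SqlEscape needed
            cn.Open();
            return Convert.ToInt64(cm.ExecuteScalar());      // value of scope_identity()
        }
    }

The same pattern applies to LoadState, where the key would be passed as an @id parameter rather than appended to the SELECT/DELETE string.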

    Read the article

  • Unable to set up PostgreSQL on Ubuntu (8.4 on 9.10)

    - by shabda
    I am trying to setup postgres on ubuntu, and I cant proceed as I cant find the location of pg_hba.conf (Postgres 8.4 on ubuntu 9.10) What I did setup postgres via aptitude This gave me ... /usr/share/postgresql/8.4# aptitude install postgresql Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libreadline5{a} postgresql postgresql-8.4{a} postgresql-client-8.4{a} postgresql-client-common{a} postgresql-common{a} 0 packages upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,159kB of archives. After unpacking 18.8MB will be used. Do you want to continue? [Y/n/?] Y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package libreadline5. (Reading database ... 17490 files and directories currently installed.) Unpacking libreadline5 (from .../libreadline5_5.2-6_amd64.deb) ... Selecting previously deselected package postgresql-client-common. Unpacking postgresql-client-common (from .../postgresql-client-common_101_all.deb) ... Selecting previously deselected package postgresql-client-8.4. Unpacking postgresql-client-8.4 (from .../postgresql-client-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql-common. Unpacking postgresql-common (from .../postgresql-common_101_all.deb) ... Selecting previously deselected package postgresql-8.4. Unpacking postgresql-8.4 (from .../postgresql-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql. Unpacking postgresql (from .../postgresql_8.4.1-1_all.deb) ... Processing triggers for man-db ... Setting up libreadline5 (5.2-6) ... Setting up postgresql-client-common (101) ... Setting up postgresql-client-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode. Setting up postgresql-common (101) ... Setting up postgresql-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode. Setting up postgresql (8.4.1-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done tried to login as su - postgres Followed by createdb mytestdb This failed with could not connect to database postgres: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? So I though I need to enable local connections in pg_hba.conf, which I cant find. So I created a new file as sudo vim /etc/postgresql/8.4/main/pg_hba.conf With values local all all ident But I still cant login after restarting the service. What are next steps for me to take.
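    On Debian/Ubuntu the per-cluster configuration (including pg_hba.conf) is generated by postgresql-common when a cluster is created, so a missing /etc/postgresql/8.4/main/pg_hba.conf usually means the 8.4 cluster was never initialized or is not running, which also matches the socket error above. A possible first step, assuming the stock 8.4 packages that were just installed:

        # list existing clusters and their config directories
        pg_lsclusters
        # if no 8.4/main cluster is listed, create and start one
        # (this writes /etc/postgresql/8.4/main/pg_hba.conf automatically)
        sudo pg_createcluster 8.4 main --start
        # if the cluster exists but is down
        sudo pg_ctlcluster 8.4 main start
        # once the server is up, it can report which hba file it actually reads
        sudo -u postgres psql -c "SHOW hba_file;"

    A hand-written pg_hba.conf will not help while the server process itself is not running, which is what the missing /var/run/postgresql/.s.PGSQL.5432 socket indicates.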

    Read the article

  • Can someone help me install MySQL server please? This is bugging me...

    - by Alex
    $ sudo aptitude install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libhtml-template-perl{a} mysql-server mysql-server-5.0{a} mysql-server-core-5.0{a} 0 packages upgraded, 4 newly installed, 0 to remove and 12 not upgraded. Need to get 0B/27.7MB of archives. After unpacking 91.1MB will be used. Do you want to continue? [Y/n/?] y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package mysql-server-core-5.0. (Reading database ... 17022 files and directories currently installed.) Unpacking mysql-server-core-5.0 (from .../mysql-server-core-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package mysql-server-5.0. Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.30really5.0.75-0ubuntu10.3_all.deb) ... Setting up mysql-server-core-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done Before I installed it, I ran this: sudo aptitude purge mysql-server mysql-server-5.0 This has happened before. I remember before, I did something with dpkg with fixed it.
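    The failing step is the mysql-server-5.0 postinst script, which tries to write /etc/mysql/conf.d/old_passwords.cnf into a directory that no longer exists; the earlier purge most likely removed /etc/mysql. One commonly suggested recovery path, sketched under that assumption:

        # recreate the directory the maintainer script expects
        sudo mkdir -p /etc/mysql/conf.d
        # let dpkg finish configuring the half-installed packages
        sudo dpkg --configure -a
        # or let apt repair the dependency state in one go
        sudo apt-get install -f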

    Read the article

  • Wireless internet in Linux is very slow, but in Windows everything is fine

    - by Cody Acer
    yesterday when i was connecting to our neighbors wifi connection which is the signal strength is below 50%, i cant browse anything... even ping to gateway. 100% packet loss, and sometimes.. i can connect awesomely i can open my facebook account for 15 minutes but after 15min.. connection is extremely slow. but not windows i can surf even the signal str is very poor weird ey??.. root@Emely:~# lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge [8086:a010] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller [0300]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a011] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: i915 Kernel modules: i915 00:02.1 Display controller [0380]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a012] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] 00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 1 [8086:27d0] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 2 [8086:27d2] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 3 [8086:27d4] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 4 [8086:27d6] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 [8086:27c8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.1 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 [8086:27c9] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.2 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 [8086:27ca] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.3 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 [8086:27cb] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.7 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller [8086:27cc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ehci-pci 00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2) 00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: lpc_ich Kernel modules: lpc_ich 00:1f.2 SATA controller [0106]: Intel Corporation NM10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ahci Kernel modules: ahci 00:1f.3 SMBus [0c05]: Intel Corporation NM10/ICH7 Family 
SMBus Controller [8086:27da] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel modules: i2c-i801 05:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01) Subsystem: Wistron NeWeb Corp. Device [185f:051a] Kernel driver in use: bcma-pci-bridge Kernel modules: bcma 09:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller [11ab:4354] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: sky2 Kernel modules: sky2 root@Emely:~# ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff inet 192.168.1.108/24 brd 192.168.1.255 scope global wlan0 inet6 fe80::21b:b1ff:fea9:ace0/64 scope link valid_lft forever preferred_lft forever root@Emely:~# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff root@Emely:~# rfkill list all 0: samsung-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: samsung-bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: hci0: Bluetooth Soft blocked: no Hard blocked: no 3: phy0: Wireless LAN Soft blocked: no Hard blocked: no is this a wireless driver issue?
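    The lspci output shows a Broadcom BCM4313 driven by the in-kernel bcma/brcmsmac stack, which is a frequent culprit for exactly this associates-but-barely-passes-traffic behaviour on weak signals. Two common workarounds, sketched on the assumption of an Ubuntu/Debian-based install (the package name below is Ubuntu's):

        # option 1: disable wireless power management, a common cause of stalls
        sudo iwconfig wlan0 power off
        # option 2: switch to Broadcom's proprietary STA (wl) driver
        sudo apt-get install bcmwl-kernel-source
        sudo modprobe -r brcmsmac bcma
        sudo modprobe wl

    If wl behaves better, blacklisting bcma and brcmsmac makes the change persistent across reboots.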

    Read the article

  • How to troubleshoot an OpenVPN Appliance Server that clients cannot connect to

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 2) I am running Hyper-V and have the OpenSvn Appliance Server virtual running 3) I have configured it as it said, only issue was that the legacy network adapter does not have a setting the instructions mention "Enable spoofing of MAC Addresses". My understand is that before R2, this was on by default. 4) Server is running, web interfaces look good 5) I am trying to connect from a Vista 64 box and cannot 5a) If I set to UPD I am stuck at Authorizing and client log looks like: 10/11/09 15:00:42: INFO: OvpnConfig: connect... 10/11/09 15:00:42: INFO: Gui listen socket at 34567 10/11/09 15:00:42: INFO: sending start command to instantiator... 10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B? 10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info 10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded 10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password 10/11/09 15:00:43: INFO: Processing PASSWORD. 10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true. 10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true. 10/11/09 15:00:43: INFO: Got auth request from active_config from 0 10/11/09 15:00:47: INFO: Sending Credentials.... 10/11/09 15:00:47: INFO: Sending 25 bytes for username. 10/11/09 15:00:47: INFO: Sent 25 bytes for username. 10/11/09 15:00:47: INFO: Sending 30 bytes for password. 10/11/09 15:00:47: INFO: Sent 30 bytes for password. 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,, 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,, 10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868 10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378 5b) I setup server for tcp and try to connect, I get a loop of authorizing and reconnecting. Log looks like: 10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded 10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password 10/11/09 15:00:43: INFO: Processing PASSWORD. 10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true. 10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true. 10/11/09 15:00:43: INFO: Got auth request from active_config from 0 10/11/09 15:00:47: INFO: Sending Credentials.... 10/11/09 15:00:47: INFO: Sending 25 bytes for username. 10/11/09 15:00:47: INFO: Sent 25 bytes for username. 
10/11/09 15:00:47: INFO: Sending 30 bytes for password. 10/11/09 15:00:47: INFO: Sent 30 bytes for password. 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,, 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,, 10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868 10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378 10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888 ... Is there anyway to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
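    On the logging question specifically: the appliance is plain OpenVPN underneath, so verbosity can be raised in the server configuration (the paths below are illustrative; the appliance's web interface may expose the same settings under its advanced server options):

        verb 5                                # 4-6 shows handshake detail, 9+ is very noisy
        log-append /var/log/openvpn.log       # keep a persistent server-side log
        status /var/log/openvpn-status.log 10 # refresh the status file every 10 seconds

    With the verbosity raised, the server log should show whether the TLS/auth exchange completes and where each client session stalls, which is hard to tell from the client-side output alone.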

    Read the article

  • Degraded RAID5 and no md superblock on one of the remaining drives

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 with 5 drives (/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and drive was replaced. While rebuilding (and not completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /md0 can resume in degraded mode, but no luck.. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good since the array was unable to assemble after /dev/sdc dropped off. All the searches I done showed how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3 and that should bring the array up to a degraded mode which will allow me to backup data and then proceed with rebuilding with adding /dev/sde. Any help would be greatly appreciated. mdstat does not show /dev/md0 # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0] 458880 blocks [5/4] [UUUU_] bitmap: 40/57 pages [160KB], 4KB chunk md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 530048 blocks [5/4] [UUUU_] bitmap: 33/65 pages [132KB], 4KB chunk mdadm show /dev/md0 is still there # mdadm --examine --scan ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888 ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2 ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841 ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c With /dev/sde removed, here is the mdadm examine output showing sdc3 has no md superblock # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff0e - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 0 8 3 0 active sync /dev/sda3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdb3 /dev/sdb3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff20 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 19 1 active sync /dev/sdb3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdc3 mdadm: No md superblock detected on /dev/sdc3. 
[~] # mdadm --examine /dev/sdd3 /dev/sdd3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff44 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 3 8 51 3 active sync /dev/sdd3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed fdisk output shows /dev/sdc3 partition is still there. [~] # fdisk -l Disk /dev/sdx: 128 MB, 128057344 bytes 8 heads, 32 sectors/track, 977 cylinders Units = cylinders of 256 * 512 = 131072 bytes Device Boot Start End Blocks Id System /dev/sdx1 1 8 1008 83 Linux /dev/sdx2 9 440 55296 83 Linux /dev/sdx3 441 872 55296 83 Linux /dev/sdx4 873 977 13440 5 Extended /dev/sdx5 873 913 5232 83 Linux /dev/sdx6 914 977 8176 83 Linux Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 66 530113+ 83 Linux /dev/sda2 67 132 530145 82 Linux swap / Solaris /dev/sda3 133 182338 1463569695 83 Linux /dev/sda4 182339 182400 498015 83 Linux Disk /dev/sda4: 469 MB, 469893120 bytes 2 heads, 4 sectors/track, 114720 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/sda4 doesn't contain a valid partition table Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 66 530113+ 83 Linux /dev/sdb2 67 132 530145 82 Linux swap / Solaris /dev/sdb3 133 182338 1463569695 83 Linux /dev/sdb4 182339 182400 498015 83 Linux Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 66 530125 83 Linux /dev/sdc2 67 132 530142 83 Linux /dev/sdc3 133 182338 1463569693 83 Linux /dev/sdc4 182339 182400 498012 83 Linux Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 66 530125 83 Linux /dev/sdd2 67 132 530142 83 Linux /dev/sdd3 133 243138 1951945693 83 Linux /dev/sdd4 243139 243200 498012 83 Linux Disk /dev/md9: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md9 doesn't contain a valid partition table Disk /dev/md5: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md5 doesn't contain a valid partition table
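    A heavily hedged sketch of the usual last-resort approach for a lost superblock: recreate the array metadata in place with --assume-clean, using exactly the geometry reported by the surviving members above (metadata 0.90, 64K chunk, left-symmetric layout, device order sda3 sdb3 sdc3 sdd3, fifth slot missing). This only rewrites superblocks, but it is destructive if any parameter or the device order is wrong, so it should be attempted only after imaging the disks, or at the very least after re-checking every value against the --examine output:

        mdadm --create /dev/md0 --assume-clean --metadata=0.90 \
              --level=5 --chunk=64 --layout=left-symmetric --raid-devices=5 \
              /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing
        # verify read-only before trusting the result (assuming a filesystem
        # sits directly on /dev/md0 rather than LVM)
        fsck -n /dev/md0
        mount -o ro /dev/md0 /mnt

    If the data checks out in read-only mode, the replacement /dev/sde can then be added back and allowed to rebuild.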

    Read the article

  • Is it possible to use the C# code below to send email from a network in a different country?

    - by kedar karthik
    Is it possible to write C# code as below and send email using mnetwork in different country? MSExchangeWebServiceURL = mail.something.com/ews/exchange.asmx its a web service URL ... sorry to correct my self //....this works great when i run the same code from home network, my friends home network ... anywhere around ... but when i run it from my clients location in columbia ... it fails I have a valid user name and password on that exchange server. Is there any configuration that I can set to achieve this? BTW this code below works when I run it within office network and any network within any home network ... i have tried atleast 5 friends network in Plano, Texas. I want this code to work when run from any network in another country. My client in columbia can connect to web service using a browser .. use the same user name and password ..... but when i run the code above ... it is not able to connect to our web service .... String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"]; String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"]; String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"]; String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"]; ExchangeServiceBinding esb = new ExchangeServiceBinding(); esb.Timeout = 1800000; esb.AllowAutoRedirect = true; esb.UseDefaultCredentials = false; esb.Credentials = new NetworkCredential(cEmail, cPassword); esb.Url = cMSExchangeWebServiceURL; ServicePointManager.ServerCertificateValidationCallback += delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) { return true; }; // Create a CreateItem request object CreateItemType request = new CreateItemType(); // Setup the request: // Indicate that we only want to send the message. No copy will be saved. request.MessageDisposition = MessageDispositionType.SendOnly; request.MessageDispositionSpecified = true; // Create a message object and set its properties MessageType message = new MessageType(); message.Subject = subject; message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType(); message.Body.BodyType1 = BodyTypeType.HTML; message.Body.Value = body; message.ToRecipients = new EmailAddressType[3]; message.ToRecipients[0] = new EmailAddressType(); //message.ToRecipients[1] = new EmailAddressType(); //message.ToRecipients[2] = new EmailAddressType(); message.ToRecipients[0].EmailAddress = "[email protected]"; message.ToRecipients[0].RoutingType = "SMTP"; //message.CcRecipients = new EmailAddressType[1]; //message.CcRecipients[0] = new EmailAddressType(); //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString(); //message.CcRecipients[0].RoutingType = "SMTP"; //There are some more properties in MessageType object //you can set all according to your requirement // Construct the array of items to send request.Items = new NonEmptyArrayOfAllItemsType(); request.Items.Items = new ItemType[1]; request.Items.Items[0] = message; // Call the CreateItem EWS method. CreateItemResponseType response = esb.CreateItem(request);
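    Since the same binary works from several home networks but not from the client site, the usual suspect is an outbound proxy or firewall at that site rather than the code itself. A small hedged sketch of pointing the binding at a proxy and surfacing the underlying failure; the proxy host below is purely hypothetical:

        // ExchangeServiceBinding derives from HttpWebClientProtocol, so it accepts a proxy.
        esb.Proxy = new System.Net.WebProxy("http://proxy.clientsite.example:8080", true)
        {
            Credentials = System.Net.CredentialCache.DefaultCredentials
        };
        try
        {
            CreateItemResponseType response = esb.CreateItem(request);
        }
        catch (System.Net.WebException ex)
        {
            // Status distinguishes DNS failure, connect timeout, 407 proxy auth, etc.
            Console.WriteLine("CreateItem failed: {0} ({1})", ex.Status, ex.Message);
        }

    Given that a browser at the client site can reach the URL with the same credentials, a site proxy that the browser uses but the application does not is a likely difference; the WebException status would confirm it.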

    Read the article

  • hostapd on Ubuntu Server 13.04 only allows a single station to connect when using WPA

    - by user450688
    Problem Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus) but when two or more hosts are connected at the same time the first client loses its connectivity. However there are no connectivity issues when WPA is not used. Setup Linux (Ubuntu server 13.04) wireless router (with separate networks for wired WAN, wired LAN, and Wireless LAN. iptables-save output: *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE COMMIT *mangle :PREROUTING ACCEPT [13:916] :INPUT ACCEPT [9:708] :FORWARD ACCEPT [4:208] :OUTPUT ACCEPT [9:3492] :POSTROUTING ACCEPT [13:3700] COMMIT *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [9:3492] -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i wlan0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i eth0 -j ACCEPT -A FORWARD -i wlan0 -j ACCEPT -A FORWARD -i lo -j ACCEPT COMMIT /etc/hostapd/hostapd.conf #Wireless Interface interface=wlan0 driver=nl80211 ssid=<removed> hw_mode=g channel=6 max_num_sta=15 auth_algs=3 ieee80211n=1 wmm_enabled=1 wme_enabled=1 #Configure Hardware Capabilities of Interface ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12] #Accept all MAC address macaddr_acl=0 #Shared Key Authentication wpa=1 wpa_passphrase=<removed> wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP rsn_pairwise=CCMP ###IPad Connectivevity Repair ieee8021x=0 eap_server=0 Wireless Card #lshw output product: RT2790 Wireless 802.11n 1T/2R PCIe vendor: Ralink corp. 
physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 00 serial: <removed> width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn #iw list output Band 1: Capabilities: 0x272 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 2-streams Max AMSDU length: 3839 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-15, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ Frequencies: * 2412 MHz [1] (27.0 dBm) * 2417 MHz [2] (27.0 dBm) * 2422 MHz [3] (27.0 dBm) * 2427 MHz [4] (27.0 dBm) * 2432 MHz [5] (27.0 dBm) * 2437 MHz [6] (27.0 dBm) * 2442 MHz [7] (27.0 dBm) * 2447 MHz [8] (27.0 dBm) * 2452 MHz [9] (27.0 dBm) * 2457 MHz [10] (27.0 dBm) * 2462 MHz [11] (27.0 dBm) * 2467 MHz [12] (disabled) * 2472 MHz [13] (disabled) * 2484 MHz [14] (disabled) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) Available Antennas: TX 0 RX 0 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point software interface modes (can always be added): * AP/VLAN * monitor valid interface combinations: * #{ AP } <= 8, total <= 8, #channels <= 1 Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * set_tx_bitrate_mask * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (84) * Unknown command (87) * Unknown command (85) * Unknown command (89) * Unknown command (92) * testmode * connect * disconnect Supported TX frame types: * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 Supported RX frame types: * IBSS: 0x40 0xb0 0xc0 0xd0 * managed: 0x40 0xd0 * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * mesh point: 0xb0 0xc0 0xd0 * P2P-client: 0x40 0xd0 * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * Unknown mode (10): 0x40 0xd0 Device supports RSN-IBSS. 
HT Capability overrides: * MCS: ff ff ff ff ff ff ff ff ff ff * maximum A-MSDU length * supported channel width * short GI for 40 MHz * max A-MPDU length exponent * min MPDU start spacing Device supports TX status socket option. Device supports HT-IBSS.
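    Two things usually worth trying for the only-one-WPA-client-at-a-time symptom: run hostapd in the foreground with debugging enabled to watch the group-key handshake when the second station joins, and try advertising WPA2/RSN only instead of the wpa=1 plus CCMP mix in the configuration above. A sketch, assuming the config path shown in the question:

        sudo service hostapd stop
        sudo hostapd -dd /etc/hostapd/hostapd.conf
        # in /etc/hostapd/hostapd.conf, consider RSN (WPA2) only:
        #   wpa=2
        #   wpa_key_mgmt=WPA-PSK
        #   rsn_pairwise=CCMP

    The -dd output will show whether the first client is deauthenticated during the group rekey when the second one associates (pointing at the driver) or whether the second client's 4-way handshake never completes (pointing at the configuration).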

    Read the article
