Search Results

Search found 13895 results on 556 pages for 'options'.


  • How to remove all data that a website stores on a PC? [on hold]

    - by s.r.a
    I had been a member of a computer forum website (sevenforums.com) for months. Two days ago I created a thread and many members participated in it; some of them asked me irrelevant questions, I told them the questions were irrelevant, and I didn't answer them. The thread ended, I found the answer on another website, and I posted it back to the forum to share it with the members. Yesterday, like every day, I went to that website and was met with a message saying I was banned, and my account was disabled. I was shocked, and I didn't even have any option to appeal against that wrong decision. So I had to do something, and I did the following: 1- I disconnected my Internet connection and cleared all the history data in the browser I use, Google Chrome. 2- I then ran the CCleaner tool, checked almost all the options, and clicked the Run button, which cleared all the data including the cookies. 3- I reconnected the machine (a desktop) to the Internet and immediately changed my IP address. 4- I created a new Hotmail account and tried to register as a new member on that website (sevenforums.com). 5- I succeeded and my new account was enabled, so I started posting on that website. But unfortunately, after less than a minute I got this message: "You are already banned"! My question is: how could they recognize me again, and how can I create a new account without them recognizing me? Thanks in advance.

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string-based key/value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array: they have a length property and array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects): // set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionStorage stores data that is browser-session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing browser cookies (not the cache). Space for both sessionStorage and localStorage is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code and is similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
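Because both stores are size-limited and setItem throws once the quota is exhausted, it can help to wrap reads and writes in a small guard. The following is only a minimal sketch and not code from the sample above; safeSet and safeGet are hypothetical helper names, and the exact exception thrown on a full store (commonly QuotaExceededError) varies by browser:

    // Guarded Web Storage access - works the same for sessionStorage and localStorage.
    function safeSet(storage, key, value) {
        if (!storage) return false;          // storage unsupported or disabled
        try {
            storage.setItem(key, value);     // value must already be a string
            return true;
        } catch (e) {
            return false;                    // quota exceeded or private-mode restriction
        }
    }

    function safeGet(storage, key) {
        return storage ? storage.getItem(key) : null;   // null when missing or unsupported
    }

    // usage, mirroring the snippet above
    safeSet(window.sessionStorage, "myapp_time", new Date().getTime().toString());
    var lastTime = safeGet(window.sessionStorage, "myapp_time");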
The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and return by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:var user = { name: "Rick", id="ricks", level=8 } sessionStorage.setItem("app_user",JSON.stringify(user)); to retrieve it:var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection in some cases even filters conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important and if it's not handled the user experience can be really disrupting. Content Caching If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly interface to the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings"  entry at the button. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering  as if it was the first page hit. The additional listings, and any filters, the selection of an item all were lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
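The same pattern can be boiled down to a pair of generic helpers if you want to reuse it on other list pages. This is a sketch only, assuming jQuery and a single container element; cachePageState and restorePageState are hypothetical names, not functions from the Classifieds app:

    // Generic form of the save/restore pattern described above.
    function cachePageState(cacheKey, containerSelector, extraState) {
        if (!window.sessionStorage) return;
        var data = $.extend({ html: $(containerSelector).html() }, extraState || {});
        sessionStorage.setItem(cacheKey, JSON.stringify(data));   // stored as one string value
    }

    function restorePageState(cacheKey, containerSelector) {
        if (!window.sessionStorage) return null;
        var raw = sessionStorage.getItem(cacheKey);
        if (!raw) return null;                    // always check - the cache may be empty
        var data = JSON.parse(raw);
        $(containerSelector).html(data.html);     // re-inject the cached markup
        return data;                              // caller restores scroll/selection itself
    }

    // on the list page, before navigating to a detail view:
    cachePageState("list_html", "#SizingContainer", { scroll: $("#PostItemContainer").scrollTop() });

    // when returning with ?back=true:
    var state = restorePageState("list_html", "#SizingContainer");
    if (!state) window.location = "list";         // no cache - fall back to the server-rendered list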
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC.

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction How to use the Oracle Solaris 11 Automated install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to setup the Automated Install server in order to provide hands off installation process for the Global Zone and two Non Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Setup the Automated install server (AI) using the following instructions “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files  The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1# cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information You should change the zonepath for example: set zonepath=/rpool/zones/zone2 Step2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration file # share –o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as client to the Install Service Use the installadm create-client command to associate client (Global Zone) with the install service To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e “0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list –c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19 and Architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup Associate the global zone client with the manifest and the profile that we create in the previous steps The following example adds the manifest and profile to the client (global zone), where: gzmanifest  is the name of the manifest. gz_profile  is the name of the configuration profile. mac="0:14:4f:2:a:19" is the client (global zone) mac address s11x86service is the install service name. # installadm set-criteria -m  gzmanifest  –p  gz_profile  -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command # installadm list -n s11x86service -p  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted.  Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu What follows is the continuation of a networked boot from the Automated Install server,. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list all the Solaris Zones status using the following command # zoneadm list -civ Once the Zones are in running state you can login into the Zone using the following command # zlogin –z zone1 Troubleshooting Automated Installations If an installation to a client system failed, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed     Required datasets are not configured in the global zone. For more troubleshooting information see “Installing Oracle Solaris 11 Systems” Conclusion This paper demonstrated the benefits of using the Automated Install server to simplify the Non Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • problem with tinymce textarea in dynamically added jquery tabs

    - by kranthi
    I have an aspx page(Default1.aspx),in which i have a static jquery tab and anchor tag upon clicking the anchor tag(Add Tab) I am adding new tab dynamically,which gets its contents loaded from another aspx page(Default2.aspx).This second page contains some text inside a tag,a textarea with 'tinymce' class which is placed inside a div with 'style="display:none" ' and this textarea gets displayed only upon clicking the edit button on that page. The HTML of Default1.aspx page looks like this. <head runat="server"> <title>Untitled Page</title> <script src="js/jquery-1.3.2.min.js" type="text/javascript"></script> <script src="js/jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script> <link href="css/custom-theme/jquery-ui-1.7.2.custom.css" rel="stylesheet" type="text/css" /> <link href="css/widgets.css" rel="stylesheet" type="text/css" /> <link href="css/print.css" rel="stylesheet" type="text/css" /> <link href="css/reset.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="js/tiny_mce/jquery.tinymce.js"></script> <script type="text/javascript"> $(function() { //DECLARE FUNCTION: removetab var removetab = function(tabselector, index) { $(".removetab").click(function(){ $(tabselector).tabs('remove',index); }); }; //create tabs $("#tabs").tabs({ add: function(event, ui) { //select newely opened tab $(this).tabs('select',ui.index); //load function to close tab removetab($(this), ui.index); }, show: function(event, ui) { if($.fn.tinymce) { $('textarea.tinymce').tinymce({ // Location of TinyMCE script script_url : 'js/tiny_mce/tiny_mce.js', // General options theme : "advanced", plugins : "safari,style,layer,table,advhr,advimage,advlink,inlinepopups,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template", // Theme options theme_advanced_buttons1 : "bold,italic,underline,strikethrough,|,bullist,numlist,|,justifyleft,justifycenter,justifyright,justifyfull,styleselect,formatselect,fontselect,fontsizeselect", theme_advanced_buttons2 : "outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,preview,|,forecolor,backcolor", theme_advanced_buttons3 : "sub,sup,|,ltr,rtl,|,fullscreen", theme_advanced_toolbar_location : "top", theme_advanced_toolbar_align : "left" /*theme_advanced_statusbar_location : "bottom",*/ /*theme_advanced_resizing : true,*/ }); } //load function to close selected tabs removetab($(this), ui.index); } }); //load new tab $(".addtab").click(function(){ var href=$(this).attr("href"); var title=$(this).attr("title"); $("#tabs").tabs( 'add' , href , title+' <span class="removetab ui-icon ui-icon-circle-close" style="float:right; margin: -2px -10px 0px 3px; cursor:pointer;"></span>'); return false; }); }); function showEditFields(){ $('.edit').css('display','inline'); } </script> </head> <body> <form id="form1" runat="server"> <div> <a class="addtab" title="Tab Label" href="HTMLPage.htm">Add Tab</a> <div id="tabs"> <ul> <li><a href="#tabs-1">Default Tab</a></li> </ul> <div id="tabs-1"> <p>Etiam aliquet massa et lorem. Mauris dapibus lacus auctor risus. Aenean tempor ullamcorper leo. Vivamus sed magna quis ligula eleifend adipiscing. Duis orci. Aliquam Proin elit arcu, rutrum commodo, vehicula tempus, commodo a, risus. Curabitur nec arcu. Donec sollicitudin mi sit amet mauris. Nam elementum quam ullamcorper ante.sodales tortor vitae ipsum. Aliquam nulla. Duis aliquam molestie erat. 
Ut et mauris vel pede varius sollicitudin. Sed ut dolor nec orci tincidunt interdum. Phasellus ipsum. Nunc tristique tempus lectus.</p> </div> </div> </div> </form> </body> and the HTML of Default2.aspx looks like this. <head> </head> <body> <form id="form1" runat="server"> <div class="demo"> <p>Proin elit arcu, rutrum commodo, vehicula tempus, commodo a, risus. Curabitur nec arcu. Donec sollicitudin mi sit amet mauris. Nam elementum quam ullamcorper ante. Etiam aliquet massa et lorem. Mauris dapibus lacus auctor risus. Aenean tempor ullamcorper leo. Vivamus sed magna quis ligula eleifend adipiscing. Duis orci. Aliquam sodales tortor vitae ipsum. Aliquam nulla. Duis aliquam molestie erat. Ut et mauris vel pede varius sollicitudin. Sed ut dolor nec orci tincidunt interdum. Phasellus ipsum. Nunc tristique tempus lectus. <div class="edit" style="display:none"> <textarea style="height:80px; width:100%" class="tinymce" name="" rows="8" runat="server" id="txtans">answer text goes here </textarea> </div> <input id="Button1" type="button" value="edit" onclick="showEditFields();" /> </p> </form> </body> so when I click on the "edit" button available on Default2.aspx ,the textarea with tinymce should appear and I can add as many tabs as I want from Default1.aspx by clicking on Add Tab(anchor) which loads multiple tabs with content from Default2.aspx.After adding these multiple tabs ,if I check to see whether all the textareas are with tinymce,I noticed that only the 1st tab contains textarea with tinymce and in all the other tabs tinymce doesnt show up ,simply the normal text area appears. Could someone please help me with this? Thanks.
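One likely cause (an assumption, since only the markup above is available) is that every added tab loads the same Default2.aspx, so each tab arrives with a textarea using the same id="txtans"; TinyMCE keys its editor instances off element ids, so only the first textarea ever gets converted. A sketch of one possible fix is to initialize TinyMCE per tab panel and give each textarea a unique id first; initTinyMceIn and the mce-done marker class are hypothetical names, not part of the original page:

    // Initialize TinyMCE for the textareas inside one tab panel only.
    var editorCount = 0;

    function initTinyMceIn(panel) {
        if (!$.fn.tinymce) return;
        $(panel).find('textarea.tinymce').not('.mce-done').each(function () {
            editorCount++;
            $(this)
                .attr('id', 'txtans_' + editorCount)   // make the id unique per tab
                .addClass('mce-done')                  // marker so the editor is never initialized twice
                .tinymce({
                    script_url: 'js/tiny_mce/tiny_mce.js',
                    theme: 'advanced'
                    // ...remaining theme/plugin options as in the original init block
                });
        });
    }

    // wired up from the tabs' show callback instead of the page-wide selector:
    // show: function (event, ui) { initTinyMceIn(ui.panel); removetab($(this), ui.index); }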

    Read the article

  • jQuery, array form radio button name problem.

    - by borayeris
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>click div to select hidden options</title> <script type="text/javascript" src="jquery-1.4.4.js"></script> <style type="text/css"> .clickDiv { width:50px; height:50px; cursor:crosshair; } .red {border:1px #000 solid;} .green {border:1px #000 solid;} .redBG {background:#F00;} .greenBG {background:#0F0;} </style> <script type="text/javascript"> $(function() { $('div.clickDiv.red').click(function(){ var secilenMadde=$(this).attr('madde'); $('div#write').text(secilenMadde); $('input[name='+secilenMadde+'][value=red]').attr('checked', 'checked'); $('div.clickDiv.red[madde='+secilenMadde+']').addClass('redBG'); $('div.clickDiv.green[madde='+secilenMadde+']').removeClass('greenBG'); }); $('div.clickDiv.green').click(function(){ var secilenMadde=$(this).attr('madde'); $('div#write').text(secilenMadde); $('input[name='+secilenMadde+'][value=green]').attr('checked', 'checked'); $('div.clickDiv.green[madde='+secilenMadde+']').addClass('greenBG'); $('div.clickDiv.red[madde='+secilenMadde+']').removeClass('redBG'); }); }); </script> </head> <body> <div id="write"></div> <form id="formId" name="formName" method="post"> <table> <tr> <td><div class="clickDiv red" madde="line1"></div></td> <td><div class="clickDiv green" madde="line1"></div></td> </tr> <tr> <td><div class="clickDiv red" madde="line2"></div></td> <td><div class="clickDiv green" madde="line2"></div></td> </tr> </table> <label for="line1red"><input id="line1red" type="radio" name="line1" value="red" /> Red</label> <label for="line1green"><input id="line1green" type="radio" name="line1" value="green" /> Green</label><br /> <label for="line2red"><input type="radio" name="line2" value="red" /> Red</label> <label for="line2green"><input type="radio" name="line2" value="green" /> Green</label> </form> </body> </html> This works. 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>click div to select hidden options</title> <script type="text/javascript" src="jquery-1.4.4.js"></script> <style type="text/css"> .clickDiv { width:50px; height:50px; cursor:crosshair; } .red {border:1px #000 solid;} .green {border:1px #000 solid;} .redBG {background:#F00;} .greenBG {background:#0F0;} </style> <script type="text/javascript"> $(function() { $('div.clickDiv.red').click(function(){ var secilenMadde=$(this).attr('madde'); $('div#write').text(secilenMadde); $('input[name='+secilenMadde+'][value=red]').attr('checked', 'checked'); $('div.clickDiv.red[madde='+secilenMadde+']').addClass('redBG'); $('div.clickDiv.green[madde='+secilenMadde+']').removeClass('greenBG'); }); $('div.clickDiv.green').click(function(){ var secilenMadde=$(this).attr('madde'); $('div#write').text(secilenMadde); $('input[name='+secilenMadde+'][value=green]').attr('checked', 'checked'); $('div.clickDiv.green[madde='+secilenMadde+']').addClass('greenBG'); $('div.clickDiv.red[madde='+secilenMadde+']').removeClass('redBG'); }); }); </script> </head> <body> <div id="write"></div> <form id="formId" name="formName" method="post"> <table> <tr> <td><div class="clickDiv red" madde="line[1]"></div></td> <td><div class="clickDiv green" madde="line[1]"></div></td> </tr> <tr> <td><div class="clickDiv red" madde="line[2]"></div></td> <td><div class="clickDiv green" madde="line[2]"></div></td> </tr> </table> <label for="line1red"><input id="line1red" type="radio" name="line[1]" value="red" /> Red</label> <label for="line1green"><input id="line1green" type="radio" name="line[1]" value="green" /> Green</label><br /> <label for="line2red"><input type="radio" name="line[2]" value="red" /> Red</label> <label for="line2green"><input type="radio" name="line[2]" value="green" /> Green</label> </form> </body> </html> This doesn't. I need input names as an array but it breaks my script. Why?
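The square brackets in line[1] are metacharacters in a jQuery selector, so once the name becomes line[1] the unquoted expression input[name=line[1]][value=red] is no longer a valid selector and the click handlers silently stop matching anything. A minimal sketch of the usual fix is to quote the attribute values inside the selectors (shown for the red handler; the green handler changes the same way):

    // Quote attribute values so bracketed names such as line[1] keep working.
    $('div.clickDiv.red').click(function () {
        var secilenMadde = $(this).attr('madde');   // e.g. "line[1]"
        $('div#write').text(secilenMadde);
        $('input[name="' + secilenMadde + '"][value="red"]').attr('checked', 'checked');
        $('div.clickDiv.red[madde="' + secilenMadde + '"]').addClass('redBG');
        $('div.clickDiv.green[madde="' + secilenMadde + '"]').removeClass('greenBG');
    });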

    Read the article

  • How to autostart this slide

    - by lchales
    Hello there: first of all i have no idea on coding or anything related, simple question: is there any simple way to tell this code to autostart the slide? at the current moment the images change on click. currently the index page only have one image, what i want is to add a few but without the need to click to see the next one here is the code from my index: <script type="text/javascript"> //<![CDATA[ /* the images preload plugin */ (function($) { $.fn.preload = function(options) { var opts = $.extend({}, $.fn.preload.defaults, options), o = $.meta ? $.extend({}, opts, this.data()) : opts; var c = this.length, l = 0; return this.each(function() { var $i = $(this); $('<img/>').load(function(i){ ++l; if(l == c) o.onComplete(); }).attr('src',$i.attr('src')); }); }; $.fn.preload.defaults = { onComplete : function(){return false;} }; })(jQuery); //]]> </script><script type="text/javascript"> //<![CDATA[ $(function() { var $tf_bg = $('#tf_bg'), $tf_bg_images = $tf_bg.find('img'), $tf_bg_img = $tf_bg_images.eq(0), $tf_thumbs = $('#tf_thumbs'), total = $tf_bg_images.length, current = 0, $tf_content_wrapper = $('#tf_content_wrapper'), $tf_next = $('#tf_next'), $tf_prev = $('#tf_prev'), $tf_loading = $('#tf_loading'); //preload the images $tf_bg_images.preload({ onComplete : function(){ $tf_loading.hide(); init(); } }); //shows the first image and initializes events function init(){ //get dimentions for the image, based on the windows size var dim = getImageDim($tf_bg_img); //set the returned values and show the image $tf_bg_img.css({ width : dim.width, height : dim.height, left : dim.left, top : dim.top }).fadeIn(); //resizing the window resizes the $tf_bg_img $(window).bind('resize',function(){ var dim = getImageDim($tf_bg_img); $tf_bg_img.css({ width : dim.width, height : dim.height, left : dim.left, top : dim.top }); }); //expand and fit the image to the screen $('#tf_zoom').live('click', function(){ if($tf_bg_img.is(':animated')) return false; var $this = $(this); if($this.hasClass('tf_zoom')){ resize($tf_bg_img); $this.addClass('tf_fullscreen') .removeClass('tf_zoom'); } else{ var dim = getImageDim($tf_bg_img); $tf_bg_img.animate({ width : dim.width, height : dim.height, top : dim.top, left : dim.left },350); $this.addClass('tf_zoom') .removeClass('tf_fullscreen'); } } ); //click the arrow down, scrolls down $tf_next.bind('click',function(){ if($tf_bg_img.is(':animated')) return false; scroll('tb'); }); //click the arrow up, scrolls up $tf_prev.bind('click',function(){ if($tf_bg_img.is(':animated')) return false; scroll('bt'); }); //mousewheel events - down / up button trigger the scroll down / up $(document).mousewheel(function(e, delta) { if($tf_bg_img.is(':animated')) return false; if(delta > 0) scroll('bt'); else scroll('tb'); return false; }); //key events - down / up button trigger the scroll down / up $(document).keydown(function(e){ if($tf_bg_img.is(':animated')) return false; switch(e.which){ case 38: scroll('bt'); break; case 40: scroll('tb'); break; } }); } //show next / prev image function scroll(dir){ //if dir is "tb" (top -> bottom) increment current, //else if "bt" decrement it current = (dir == 'tb')?current + 1:current - 1; //we want a circular slideshow, //so we need to check the limits of current if(current == total) current = 0; else if(current < 0) current = total - 1; //flip the thumb $tf_thumbs.flip({ direction : dir, speed : 400, onBefore : function(){ //the new thumb is set here var content = '<span id="tf_zoom" class="tf_zoom"><\/span>'; content +='<img src="' + 
$tf_bg_images.eq(current).attr('longdesc') + '" alt="Thumb' + (current+1) + '"/>'; $tf_thumbs.html(content); } }); //we get the next image var $tf_bg_img_next = $tf_bg_images.eq(current), //its dimentions dim = getImageDim($tf_bg_img_next), //the top should be one that makes the image out of the viewport //the image should be positioned up or down depending on the direction top = (dir == 'tb')?$(window).height() + 'px':-parseFloat(dim.height,10) + 'px'; //set the returned values and show the next image $tf_bg_img_next.css({ width : dim.width, height : dim.height, left : dim.left, top : top }).show(); //now slide it to the viewport $tf_bg_img_next.stop().animate({ top : dim.top },700); //we want the old image to slide in the same direction, out of the viewport var slideTo = (dir == 'tb')?-$tf_bg_img.height() + 'px':$(window).height() + 'px'; $tf_bg_img.stop().animate({ top : slideTo },700,function(){ //hide it $(this).hide(); //the $tf_bg_img is now the shown image $tf_bg_img = $tf_bg_img_next; //show the description for the new image $tf_content_wrapper.children() .eq(current) .show(); }); //hide the current description $tf_content_wrapper.children(':visible') .hide() } //animate the image to fit in the viewport function resize($img){ var w_w = $(window).width(), w_h = $(window).height(), i_w = $img.width(), i_h = $img.height(), r_i = i_h / i_w, new_w,new_h; if(i_w > i_h){ new_w = w_w; new_h = w_w * r_i; if(new_h > w_h){ new_h = w_h; new_w = w_h / r_i; } } else{ new_h = w_w * r_i; new_w = w_w; } $img.animate({ width : new_w + 'px', height : new_h + 'px', top : '0px', left : '0px' },350); } //get dimentions of the image, //in order to make it full size and centered function getImageDim($img){ var w_w = $(window).width(), w_h = $(window).height(), r_w = w_h / w_w, i_w = $img.width(), i_h = $img.height(), r_i = i_h / i_w, new_w,new_h, new_left,new_top; if(r_w > r_i){ new_h = w_h; new_w = w_h / r_i; } else{ new_h = w_w * r_i; new_w = w_w; } return { width : new_w + 'px', height : new_h + 'px', left : (w_w - new_w) / 2 + 'px', top : (w_h - new_h) / 2 + 'px' }; } }); //]]> </script>
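    One low-risk way to auto-advance a gallery built like the code above is to call the existing scroll() helper on a timer and reuse the same animation guard the arrow buttons use. This is only a sketch against the posted code: the 5000 ms delay and the placement at the end of init() are assumptions, not part of the original.

        // at the end of init(), after the click/key/mousewheel bindings
        var autoplayDelay = 5000; // hypothetical delay between slides, in milliseconds
        setInterval(function(){
            // skip a tick while the current slide is still animating, same guard as the buttons
            if(!$tf_bg_img.is(':animated')) scroll('tb');
        }, autoplayDelay);

    Triggering $tf_next.click() on the same timer would work equally well, since that handler already wraps scroll('tb').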

    Read the article

  • Android - doInBackground() error in AsyncTask

    - by AimanB
    What my app here basically does is it captures a photo or import from gallery, and when the Upload button is pressed, the image will be uploaded to a localhost server. Before I implemented AsyncTask into the process, it doesn't have any problem uploading whatsoever. Now that I've put AsyncTask, everything went wrong. I don't know which part that I do wrong in this phase. This is what logcat shows when I try to upload an image file: 10-28 17:23:25.989: E/AndroidRuntime(3356): FATAL EXCEPTION: AsyncTask #5 10-28 17:23:25.989: E/AndroidRuntime(3356): java.lang.RuntimeException: An error occured while executing doInBackground() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$3.done(AsyncTask.java:299) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.setException(FutureTask.java:219) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:239) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.lang.Thread.run(Thread.java:856) 10-28 17:23:25.989: E/AndroidRuntime(3356): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:197) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:111) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast$TN.<init>(Toast.java:324) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.<init>(Toast.java:91) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.makeText(Toast.java:238) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:268) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:1) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$2.call(AsyncTask.java:287) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:234) This is my code for the Upload activity: public class UploadImageActivity extends Activity implements OnItemSelectedListener { InputStream inputStream; private ImageView imageView; String the_string_response; private static final int SELECT_PICTURE = 0; private static final int CAMERA_REQUEST = 1888; private static final String SERVER_UPLOAD_URI = "...myserver.php"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_upload_image); imageView = (ImageView) findViewById(R.id.imgUpload); } public void capturePhoto(View view) { Intent cameraIntent = new Intent( android.provider.MediaStore.ACTION_IMAGE_CAPTURE); File f = new File(android.os.Environment.getExternalStorageDirectory(), "temp.jpg"); cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f)); startActivityForResult(cameraIntent, CAMERA_REQUEST); } public void 
pickPhoto(View view) { // TODO: launch the photo picker Intent intent = new Intent(); intent.setType("image/*"); intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser(intent, "Select Picture"), SELECT_PICTURE); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) { File f = new File(Environment.getExternalStorageDirectory() .toString()); for (File temp : f.listFiles()) { if (temp.getName().equals("temp.jpg")) { f = temp; break; } } try { BitmapFactory.Options bitmapOptions = new BitmapFactory.Options(); Bitmap bitmap = BitmapFactory.decodeFile(f.getAbsolutePath(), bitmapOptions); imageView.setImageBitmap(bitmap); String path = android.os.Environment .getExternalStorageDirectory() + File.separator + "Phoenix" + File.separator + "default"; f.delete(); OutputStream outFile = null; File file = new File(path, String.valueOf(System .currentTimeMillis()) + ".jpg"); try { outFile = new FileOutputStream(file); bitmap.compress(Bitmap.CompressFormat.JPEG, 85, outFile); outFile.flush(); outFile.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } } catch (Exception e) { e.printStackTrace(); } } if (requestCode == SELECT_PICTURE && resultCode == RESULT_OK) { Bitmap bitmap = getPath(data.getData()); imageView.setImageBitmap(bitmap); } } private Bitmap getPath(Uri uri) { String[] projection = { MediaStore.Images.Media.DATA }; Cursor cursor = getContentResolver().query(uri, projection, null, null, null); int column_index = cursor.getColumnIndexOrThrow(projection[0]); cursor.moveToFirst(); String filePath = cursor.getString(column_index); cursor.close(); // Convert file path into bitmap image using below line. Bitmap bitmap = BitmapFactory.decodeFile(filePath); return bitmap; } public void uploadPhoto(View view) { try { executeMultipartPost(); } catch (Exception e) { e.printStackTrace(); } } public void executeMultipartPost() throws Exception { class execMultiPostAsync extends AsyncTask<String, Void, String>{ @Override protected String doInBackground(String... params){ // Choose image here BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable(); Bitmap bitmap = drawable.getBitmap(); ByteArrayOutputStream stream = new ByteArrayOutputStream(); bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream); // compress to // which // format // you want. 
byte[] byte_arr = stream.toByteArray(); String image_str = Base64.encodeBytes(byte_arr); ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); nameValuePairs.add(new BasicNameValuePair("image", image_str)); try { HttpClient httpclient = new DefaultHttpClient(); /* * HttpPost(parameter): Server URI */ HttpPost httppost = new HttpPost(SERVER_UPLOAD_URI); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); HttpResponse response = httpclient.execute(httppost); the_string_response = convertResponseToString(response); } catch (Exception e) { Toast.makeText(UploadImageActivity.this, "ERROR " + e.getMessage(), Toast.LENGTH_LONG).show(); System.out.println("Error in http connection " + e.toString()); } return the_string_response; } @Override protected void onPostExecute(String result) { super.onPostExecute(result); Toast.makeText(UploadImageActivity.this, "Response " + result, Toast.LENGTH_LONG) .show(); } public String convertResponseToString(HttpResponse response) throws IllegalStateException, IOException { String res = ""; StringBuffer buffer = new StringBuffer(); inputStream = response.getEntity().getContent(); int contentLength = (int) response.getEntity().getContentLength(); // getting // content // lengt Toast.makeText(UploadImageActivity.this, "contentLength : " + contentLength, Toast.LENGTH_LONG).show(); if (contentLength < 0) { } else { byte[] data = new byte[512]; int len = 0; try { while (-1 != (len = inputStream.read(data))) { buffer.append(new String(data, 0, len)); // converting to // string and // appending to // stringbuffer } } catch (IOException e) { e.printStackTrace(); } try { inputStream.close(); // closing the stream } catch (IOException e) { e.printStackTrace(); } res = buffer.toString(); // converting stringbuffer to string Toast.makeText(UploadImageActivity.this, "Result : " + res, Toast.LENGTH_LONG).show(); // System.out.println("Response => " + // EntityUtils.toString(response.getEntity())); } return res; } } execMultiPostAsync exec = new execMultiPostAsync(); exec.execute(); } } Can someone please check if I put the AsyncTask task correctly in this activity? I think I've made a mistake somewhere.
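    The "Can't create handler inside thread that has not called Looper.prepare()" line in the trace points at the Toast calls: doInBackground() (and convertResponseToString(), which it calls) runs on a worker thread, and a Toast must be created on the UI thread. A sketch of the usual fix, assuming the rest of execMultiPostAsync stays as posted and android.util.Log is imported, is to log from the background and keep every Toast in onPostExecute():

        @Override
        protected String doInBackground(String... params) {
            // ... build image_str and nameValuePairs exactly as before ...
            try {
                HttpClient httpclient = new DefaultHttpClient();
                HttpPost httppost = new HttpPost(SERVER_UPLOAD_URI);
                httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                HttpResponse response = httpclient.execute(httppost);
                the_string_response = convertResponseToString(response);
            } catch (Exception e) {
                // No Toast here: this thread has no Looper. Log it and report in onPostExecute().
                Log.e("UploadImageActivity", "Error in http connection", e);
                the_string_response = "ERROR " + e.getMessage();
            }
            return the_string_response;
        }

    convertResponseToString() needs the same treatment: replace its Toast calls with Log.d() (or return the text and show it from onPostExecute()).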

    Read the article

  • Byte array serialization in JSON.NET

    - by Daniel Earwicker
    Given this simple class: class HasBytes { public byte[] Bytes { get; set; } } I can round-trip it through JSON using JSON.NET such that the byte array is base-64 encoded: var bytes = new HasBytes { Bytes = new byte[] { 1, 2, 3, 4 } }; // turn it into a JSON string var json = JsonConvert.SerializeObject(bytes); // get back a new instance of HasBytes var result1 = JsonConvert.DeserializeObject<HasBytes>(json); // all is well Debug.Assert(bytes.Bytes.SequenceEqual(result1.Bytes)); But if I deserialize this-a-wise: var result2 = (HasBytes)new JsonSerializer().Deserialize( new JTokenReader( JToken.ReadFrom(new JsonTextReader( new StringReader(json)))), typeof(HasBytes)); ... it throws an exception, "Expected bytes but got string". What other options/flags/whatever would need to be added to the "complicated" version to make it properly decode the base-64 string to initialize the byte array? Obviously I'd prefer to use the simple version but I'm trying to work with a CouchDB wrapper library called Divan, which sadly uses the complicated version, with the responsibilities for tokenizing/deserializing widely separated, and I want to make the simplest possible patch to how it currently works.
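    One way to make the token-based path tolerate the base-64 form without touching Divan's tokenizing code is to register a converter for byte[] that accepts a string token. This is a sketch rather than a documented JSON.NET switch: the converter name is made up, and it assumes you can reach the JsonSerializer instance used in the "complicated" call to add it via serializer.Converters.Add(new Base64ByteArrayConverter()).

        class Base64ByteArrayConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(byte[]);
            }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                if (reader.TokenType == JsonToken.Bytes)
                    return (byte[])reader.Value;                            // already raw bytes
                if (reader.TokenType == JsonToken.String)
                    return Convert.FromBase64String((string)reader.Value);  // decode the base-64 text
                return null;
            }

            public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            {
                writer.WriteValue(Convert.ToBase64String((byte[])value));
            }
        }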

    Read the article

  • Missing Edit Option on Silverlight 4 DataForm

    - by rip
    I’m trying out the Silverlight 4 beta DataForm control. I don’t seem to be able to get the edit and paging options at the top of the control like I’ve seen in Silverlight 3 examples. Has something changed or am I doing something wrong? Here’s my code: <UserControl x:Class="SilverlightApplication7.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400" xmlns:dataFormToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.DataForm.Toolkit"> <Grid x:Name="LayoutRoot" Background="White"> <dataFormToolkit:DataForm HorizontalAlignment="Left" Margin="10" Name="myDataForm" VerticalAlignment="Top" /> </Grid> </UserControl> public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); this.Loaded += new RoutedEventHandler(MainPage_Loaded); } void MainPage_Loaded(object sender, RoutedEventArgs e) { Movie movie = new Movie(); myDataForm.CurrentItem = movie; } public enum Genres { Comedy, Fantasy, Drama, Thriller } public class Movie { public int MovieID { get; set; } public string Name { get; set; } public int Year { get; set; } public DateTime AddedOn { get; set; } public string Producer { get; set; } public Genres Genre { get; set; } } }
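    For comparison, the edit and paging strip in the Silverlight 3 demos shows up when the form is bound to a collection rather than a single object; with only CurrentItem set there is nothing to page through. A sketch of that change, assuming the same Movie class and the usual usings (the sample data, the PagedCollectionView choice, and the CommandButtonsVisibility/AutoEdit attributes are things to verify against the SL4 beta, not a confirmed fix):

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            List<Movie> movies = new List<Movie>
            {
                new Movie { MovieID = 1, Name = "First Movie", Year = 2009 },
                new Movie { MovieID = 2, Name = "Second Movie", Year = 2010 }
            };
            // Binding a collection enables the navigation/paging header;
            // PagedCollectionView also gives the form add/delete semantics.
            myDataForm.ItemsSource = new System.Windows.Data.PagedCollectionView(movies);
        }

    together with CommandButtonsVisibility="All" AutoEdit="False" on the <dataFormToolkit:DataForm> element.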

    Read the article

  • Socket.ReceiveAsync problem

    - by bartol
    Hi, I have a problem using SocketAsyncEventArgs model with .net sockets. Everything works great until the moment that the server wishes to close a client connection. I use following code for this: try { socket.Shutdown(SocketShutdown.Both); } catch { } // throws if client process has already closed finally { socket.Close(); } socket = null; Each connection is using two SocketAsyncEventArgs (one for send and one for receive) and after closing the connection they are returned to a pool from which they can be later reused. And here the problem starts, because when another connection is established and receive args are reused from the pool we get an exception: System.InvalidOperationException: "An asynchronous socket operation is already in progress using this SocketAsyncEventArgs instance."; at System.Net.Sockets.SocketAsyncEventArgs.StartOperationCommon(Socket socket) at System.Net.Sockets.Socket.ReceiveAsync(SocketAsyncEventArgs e) I've done some debugging and it appears that the connection closing code from the beginning of the question does not cancel Socket.ReceiveAsync operation that is in progress when the connection is closed. I've tried many combinations of Shutdown, Disconnect and Linger options for the socket but nothing worked. Any suggestions? Thanks
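    One pattern that avoids reusing an args instance while its ReceiveAsync is still pending: treat the completion callback (which fires with an error or BytesTransferred == 0 once the socket is shut down) as the only place the instance may go back to the pool, instead of returning it from the Shutdown/Close path. A sketch, where argsPool is a made-up name for whatever pool abstraction the server uses:

        void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
        {
            if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
            {
                // The connection is gone and this operation has now completed,
                // so the instance is no longer "in progress" and can be recycled.
                e.UserToken = null;       // drop any per-connection state
                argsPool.Push(e);         // argsPool: hypothetical pool of SocketAsyncEventArgs
                return;
            }

            // ... consume e.Buffer[0 .. e.BytesTransferred) ...

            var socket = (Socket)sender;
            if (!socket.ReceiveAsync(e))
                OnReceiveCompleted(sender, e);   // operation completed synchronously
        }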

    Read the article

  • Cannot import the following keyfile: blah.pfx. The keyfile may be password protected.

    - by JasonD
    We just upgraded our Visual Studio 2008 projects to VS2010. All of our assemblies were strong signed using a Verisign code signing certificate. Since the upgrade we continuously get the following error: Cannot import the following key file: companyname.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_3E185446540E7F7A This happens on some developer machines and not others. Some methods used to fix this that worked some of the time include: re-installing the key file from Windows Explorer (right click on the PFX file and click Install) installing VS2010 on a fresh machine for the first time prompts you for the password the first time you open the project, and then it works. On machines upgraded from VS2008, you don't get this option. I've tried using the SN.EXE utility to register the key with the Strong Name CSP as the error message suggests, but whenever I run the tool with any options using the version that came with VS2010, SN.EXE just lists its command line arguments instead of doing anything. This happens regardless of what arguments I supply. Does anyone know WHY this is happening, and have clear steps to fix it? I'm about to give up on Click Once installs and Microsoft Code Signing. Thanks for any help!
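    A hedged note on the sn.exe step: the -i option expects both the key file and the container name from the error message, and it usually has to be run from a (elevated, on Vista/7) Visual Studio command prompt; running it with missing arguments is what produces the usage listing. Something along these lines, with the container name taken from the error above:

        rem From a "Visual Studio Command Prompt (2010)", in the folder containing the key file
        sn -i companyname.pfx VS_KEY_3E185446540E7F7A

    It should prompt for the PFX password and then install the key pair into that container.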

    Read the article

  • What is a good alternative to the WPF WebBrowser Control?

    - by VoidDweller
    I have an MDI WPF app that I need to add web content to. At first it looks like there are two options built into the framework: the Frame control and the WebBrowser control. Given that this is an MDI app, it doesn't take long to discover that neither of these will work. The WPF WebBrowser control wraps up the IE WebBrowser ActiveX control, which uses the Win32 graphics pipeline. The "Airspace" issue pretty much sums this up as "Sorry, the layouts will not play nice together". Yes, I have thought about taking snapshots of the web content, rendering these, and mapping the mouse and keyboard events back to the browser control, but I can't afford the performance penalty and I really don't have time to write and thoroughly test it. I have looked for third-party controls, but so far I have only found Chris Cavanagh's WPF Chromium Web Browser control, which wraps up Awesomium 1.5. Together these are very cool and they play nice with the WPF layouts, but they do not meet my performance requirements. They are VERY heavy on memory consumption and not too friendly with CPU usage either, not to mention still quite buggy. I'll elaborate if you are interested. So, do any of you know of a stable, performant WPF web browser control? Thanks.

    Read the article

  • jqGrid dynamic select option - beforeEditCell not firing

    - by mango
    I'm creating a jqGrid with one drop-down column. I need the options of the drop-down column to change dynamically, so I thought I could catch the beforeCellEdit event. However, it does not seem to be firing. Any idea on what I am doing wrong? There is no error, and I did check that I have included the jqGrid edit JS files. var lastsel2;
    jQuery(document).ready(function(){
        jQuery("#projectList").jqGrid({
            datatype: 'json',
            url:'projectDrv.jsp',
            mtype: 'GET',
            height: 250,
            colNames:['Node','Proposal #', 'Status', 'Vendor', 'Actions'],
            colModel :[
                {name:'node', index:'node', width:100, editable:false, sortable:false},
                {name:'proposal', index:'proposal', width:100, editable:false, resizable:true },
                {name:'status', index:'status', width:100, resizable:true, sortable:false, editable:false },
                {name:'vendor', index:'vendor', width:100, resizable:true, editable:false, sortable: false },
                {name:'actions', index:'actions', width:100, resizable:true, sortable:false, editable: true, edittype:"select" }
            ],
            pager: '#pager',
            rowNum: 10,
            sortname: 'proposal',
            sortorder: 'desc',
            viewrecords: true,
            onSelectRow: function(id){
                if (id && id!==lastsel2){
                    jQuery('#projectList').jqGrid('restoreRow',lastsel2);
                    jQuery('#projectList').jqGrid('editRow',id,true);
                    lastsel2 = id;
                }
            },
            beforeEditCell: function(rowid, cellname, value, irow, icol) {
                alert("before edit here " + rowid);
                // set editoptions here
            }
        });
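    Worth noting: beforeEditCell belongs to jqGrid's cell-editing mode (cellEdit: true); with inline editing via editRow, as used in onSelectRow here, it is not expected to fire. One sketch of an alternative is to set the select's options just before switching the row into edit mode, where buildActionOptions is a hypothetical helper that returns a "value:Label;value2:Label 2" string for the given row:

        onSelectRow: function(id){
            if (id && id !== lastsel2){
                jQuery('#projectList').jqGrid('restoreRow', lastsel2);
                // rebuild the 'actions' select for this particular row before editing starts
                var opts = buildActionOptions(id);   // hypothetical helper
                jQuery('#projectList').jqGrid('setColProp', 'actions',
                    { editoptions: { value: opts } });
                jQuery('#projectList').jqGrid('editRow', id, true);
                lastsel2 = id;
            }
        }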

    Read the article

  • Weird vps server issue

    - by anon-user0
    I have an unmanaged linux vps Ubuntu 11.10 (Oneiric Ocelot). I have LNMP installed. Also php-fpm php-apc, varnish, memcache. I have (or rather had) several live sites on it. under normal load the server uses ~700 mb memory. But since last night its using only 20mb~ memory and a lot of the services seems to be down (according to htop) I only see nginx working and mysql starts up and goes does every few minutes on a loop. Here are some information on the server that might help you help me: root@server:~# uname -a Linux server 2.6.18-308.el5.028stab099.3 #1 SMP Wed Mar 7 15:56:00 MSK 2012 i686 i686 i386 GNU/Linux - root@server:~# ifconfig -a lo Link encap:Local Loopback LOOPBACK MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12515 errors:0 dropped:0 overruns:0 frame:0 TX packets:9541 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:7191214 (7.1 MB) TX bytes:536726 (536.7 KB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:176.31.158.78 P-t-P:176.31.158.78 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 - root@server:~# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:http-alt *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp6 0 0 [::]:http-alt [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 9307368 @/com/ubuntu/upstart - htop: http://i.stack.imgur.com/NHKYX.png EDIT: Stressed. mind was not working adding log: root@server:~# less /var/log/syslog Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 27 05:39:01 server CRON[9298]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 05:40:01 server CRON[9463]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 05:46:21 server sm-msp-queue[9480]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:19:14, xdelay=00:06:18, mailer=relay, pri=122407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 05:52:39 server sm-msp-queue[9480]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:06:32, xdelay=00:06:18, mailer=relay, pri=842407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:00:01 server CRON[15671]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:06:22 server sm-msp-queue[15690]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:39:15, xdelay=00:06:18, mailer=relay, pri=212407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:09:01 server CRON[18114]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:12:40 server sm-msp-queue[15690]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:26:33, xdelay=00:06:18, mailer=relay, pri=932407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:20:02 server CRON[21888]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:26:22 server sm-msp-queue[21907]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:59:15, xdelay=00:06:18, mailer=relay, pri=302407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:27:02 server CRON[24021]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 06:32:40 server sm-msp-queue[21907]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:46:33, xdelay=00:06:18, mailer=relay, pri=1022407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:39:01 server CRON[27941]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:40:02 server CRON[28110]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:46:22 server sm-msp-queue[28125]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:19:15, xdelay=00:06:18, mailer=relay, pri=392407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:06:33, xdelay=00:06:18, mailer=relay, pri=1112407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: q5R2e4uo028125: sender notify: Warning: could not send message for past 4 hours Jun 27 06:52:44 server sm-msp-queue[28125]: q5R2e4uo028125: to=root, delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=33690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:00:02 server CRON[1543]: 
(smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:06:21 server sm-msp-queue[1560]: q5R2e4uo028125: to=root, delay=00:13:41, xdelay=00:06:18, mailer=relay, pri=123690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:09:01 server CRON[3986]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 07:12:39 server sm-msp-queue[1560]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:45:32, xdelay=00:06:18, mailer=relay, pri=482407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:18:57 server sm-msp-queue[1560]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:32:50, xdelay=00:06:18, mailer=relay, pri=1202407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:20:02 server CRON[7760]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:26:22 server sm-msp-queue[7775]: q5R2e4uo028125: to=root, delay=00:33:42, xdelay=00:06:18, mailer=relay, pri=213690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:27:01 server CRON[9887]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 07:32:40 server sm-msp-queue[7775]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=02:05:33, xdelay=00:06:18, mailer=relay, pri=572407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:38:58 server sm-msp-queue[7775]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:52:51, xdelay=00:06:18, mailer=relay, pri=1292407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:39:01 server CRON[13813]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth : root@server:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/simfs 20G 2.3G 18G 12% / - Jun 26 16:22:41 server varnishd[1413]: Child (32425) died signal=3 Jun 26 16:22:41 server varnishd[1413]: child (21687) Started Jun 26 16:22:41 server varnishd[1413]: Child (21687) said Child starts Jun 26 16:22:41 server varnishd[1413]: Child (21687) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 26 16:34:28 server -- MARK -- Jun 26 16:54:29 server -- MARK -- Jun 26 17:14:29 server -- MARK -- Jun 26 17:34:29 server -- MARK -- Jun 26 17:54:29 server -- MARK -- Jun 26 18:14:29 server -- MARK -- Jun 26 18:34:29 server -- MARK -- Jun 26 18:54:29 server -- MARK -- Jun 26 19:14:29 server -- MARK -- Jun 26 19:34:29 server -- MARK -- Jun 26 19:54:29 server -- MARK -- Jun 26 20:14:29 server -- MARK -- Jun 26 20:34:29 server -- MARK -- Jun 26 20:48:12 server exiting on signal 15 Jun 26 20:51:58 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 26 20:52:01 server varnishd[1324]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 26 21:11:58 server -- MARK -- Jun 26 21:31:58 server -- MARK -- Jun 26 21:51:58 server -- MARK -- Jun 26 22:11:58 server -- MARK -- Jun 26 22:31:58 server -- MARK -- Jun 26 22:51:58 server -- MARK -- Jun 26 23:11:58 server -- MARK -- Jun 26 23:31:58 server -- MARK -- Jun 26 23:51:58 server -- MARK -- Jun 27 00:11:58 server -- MARK -- Jun 27 00:23:42 server exiting on signal 15 Jun 27 02:21:10 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 02:21:12 server varnishd[1341]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 02:41:10 server -- MARK -- Jun 27 02:46:41 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:44 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:46 server varnishd[1238]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:20:46 server varnishd[1238]: child (1239) Started Jun 27 03:20:46 server varnishd[1238]: Child (1239) said Child starts Jun 27 03:20:46 server varnishd[1238]: Child (1239) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 27 03:32:52 server exiting on signal 15 Jun 27 03:33:16 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:33:31 server varnishd[1372]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:53:16 server -- MARK -- Jun 27 04:13:16 server -- MARK -- Jun 27 04:33:16 server -- MARK -- Jun 27 04:53:16 server -- MARK -- Jun 27 05:13:16 server -- MARK -- Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 05:53:17 server -- MARK -- Jun 27 06:13:17 server -- MARK -- Jun 27 06:33:17 server -- MARK -- Jun 27 06:53:17 server -- MARK -- Jun 27 07:13:17 server -- MARK -- Jun 27 07:33:17 server -- MARK -- Jun 27 07:53:17 server -- MARK -- Jun 27 08:13:17 server -- MARK -- Jun 27 08:33:17 server -- MARK -- Jun 27 08:53:17 server -- MARK -- Jun 27 09:13:17 server -- MARK -- Jun 27 09:33:17 server -- MARK -- Jun 27 09:53:17 server -- MARK -- Jun 27 10:13:17 server -- MARK -- Jun 27 10:33:17 server -- MARK -- Jun 27 10:53:17 server -- MARK -- Jun 27 11:13:17 server -- MARK -- Jun 27 11:33:17 server -- MARK -- Jun 27 11:53:18 server -- MARK -- Jun 27 12:13:18 server -- MARK -- Jun 27 12:33:18 server -- MARK -- Jun 27 12:53:18 server -- MARK -- Jun 27 13:13:18 server -- MARK -- Jun 27 13:33:18 server -- MARK -- Jun 27 13:53:18 server -- MARK -- Jun 27 14:13:18 server -- MARK -- Jun 27 14:33:18 server -- MARK -- Jun 27 14:53:18 server -- MARK -- -- root@server:~# cat /var/log/nginx/error.log 2012/06/27 03:32:54 [alert] 1199#0: worker process 1203 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1200 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1201 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1202 exited on signal 9 root@server:~# cat /var/log/nginx/access.log 31.210.99.87 - - [27/Jun/2012:09:09:08 +0400] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 172 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cms/cmx.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /iesvc/iesvc.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cmd2/index.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:09 +0400] "GET /cmd/index.jsp HTTP/1.1" 301 184 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:19 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) 
AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:37 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:48 +0400] "-" 400 0 "-" "-" - root@server:~# cat /var/log/daemon.log Jun 26 20:48:10 server xinetd[1177]: Exiting... Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. Jun 26 20:51:58 server xinetd[1174]: Started working: 0 available services Jun 26 20:52:01 server vnstatd[1330]: vnStat daemon 1.11 started. Jun 26 20:52:01 server vnstatd[1330]: Monitoring: venet0 Jun 27 00:23:41 server xinetd[1174]: Exiting... Jun 27 02:21:12 server vnstatd[1349]: vnStat daemon 1.11 started. Jun 27 02:21:12 server vnstatd[1349]: Monitoring: venet0 Jun 27 03:20:44 server xinetd[1166]: attribute: disable should not be in default section [file=/etc/xinetd.conf] [line=12] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/chargen [file=/etc/xinetd.conf] [line=15] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. 
Jun 27 03:20:44 server xinetd[1166]: Started working: 0 available services Jun 27 03:20:46 server vnstatd[1249]: vnStat daemon 1.11 started. Jun 27 03:20:46 server vnstatd[1249]: Monitoring: venet0 Jun 27 03:32:41 server xinetd[1166]: Exiting... Jun 27 03:33:32 server vnstatd[1380]: vnStat daemon 1.11 started. Jun 27 03:33:32 server vnstatd[1380]: Monitoring: venet0 root@server:~# - Anything else you need let me know
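    Since this is an OpenVZ container (the venet0 interface and the 028stab kernel give that away), a few quick checks often explain services dying and the odd ~20 MB memory reading. This is only a sketch of where to look; the syslog path assumes Ubuntu's default location:

        # Non-zero failcnt values here mean the host node is enforcing a resource limit
        cat /proc/user_beancounters

        # Compare what the container reports as total/used memory
        free -m

        # Look for the out-of-memory killer or other kills around the time services stopped
        grep -i -E 'oom|kill' /var/log/syslog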

    Read the article

  • Send large JSON data to WCF Rest Service

    - by Christo Fur
    Hi I have a client web page that is sending a large json object to a proxy service on the same domain as the web page. The proxy (an ashx handler) then forwards the request to a WCF Rest Service. Using a WebClient object (standard .net object for making a http request) The JSON successfully arrives at the proxy via a jQuery POST on the client webpage. However, when the proxy forwards this to the WCF service I get a Bad Request - Error 400 This doesn't happen when the size of the json data is small The WCF service contract looks like this [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Wrapped, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)] [OperationContract] CarConfiguration CreateConfiguration(CarConfiguration configuration); And the DataContract like this [DataContract(Namespace = "")] public class CarConfiguration { [DataMember(Order = 1)] public int CarConfigurationId { get; set; } [DataMember(Order = 2)] public int UserId { get; set; } [DataMember(Order = 3)] public string Model { get; set; } [DataMember(Order = 4)] public string Colour { get; set; } [DataMember(Order = 5)] public string Trim { get; set; } [DataMember(Order = 6)] public string ThumbnailByteData { get; set; } [DataMember(Order = 6)] public string Wheel { get; set; } [DataMember(Order = 7)] public DateTime Date { get; set; } [DataMember(Order = 8)] public List<string> Accessories { get; set; } [DataMember(Order = 9)] public string Vehicle { get; set; } [DataMember(Order = 10)] public Decimal Price { get; set; } } When the ThumbnailByteData field is small, all is OK. When it is large I get the 400 error What are my options here? I've tried increasing the MaxBytesRecived config setting but that is not enough Any ideas?
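    A 400 that only appears for larger payloads usually points at the binding's message-size quota rather than the contract: webHttpBinding's default maxReceivedMessageSize is 64 KB, which lines up with small posts succeeding and the Base64 thumbnail pushing larger ones over the limit. A config sketch of the settings to raise on the WCF service side (largeJsonBinding is a placeholder name and the 4 MB figures are arbitrary):

        <system.serviceModel>
          <bindings>
            <webHttpBinding>
              <!-- reference this from the service endpoint via bindingConfiguration="largeJsonBinding" -->
              <binding name="largeJsonBinding"
                       maxReceivedMessageSize="4194304"
                       maxBufferSize="4194304">
                <readerQuotas maxStringContentLength="4194304"
                              maxArrayLength="4194304" />
              </binding>
            </webHttpBinding>
          </bindings>
        </system.serviceModel>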

    Read the article

  • new ActiveXObject('Word.Application') creates new winword.exe process when IE security does not allo

    - by Mark Ott
    We are using MS Word as a spell checker for a few fields on a private company web site, and when IE security settings are correct it works well. (Zone for the site set to Trusted, and trusted zone modified to allow control to run without prompting.) The script we are using creates a word object and closes it afterward. While the object exists, a winword.exe process runs, but it is destroyed when the word object is closed. If our site is not set in the trusted zone (Internet zone with default security level) the call that creates the word object fails as expected, but the winword.exe process is still created. I do not have any way to interact with this process in the script, so the process stays around until the user logs off (users have no way to manually destroy the process, and it wouldn't be a good solution even if they did.) The call that attempts to create the object is... try { oWordApplication = new ActiveXObject('Word.Application'); } catch(error) { // irrelevant code removed, described in comments.. // notify user spell check cannot be used // disable spell check option } So every time the page is loaded this code may be run again, creating yet another orphan winword.exe process. oWordApplication is, of course, undefined in the catch block. I would like to be able to detect the browser security settings beforehand, but I have done some searching on this and do not think that it is possible. Management here is happy with it as it is. As long as IE security is set correctly it works, and it works well for our purposes. (We may eventually look at other options for spell check functionality, but this was quick, inexpensive, and does everything we need it to do.) This last problem bugs me and I'd like to do something about it, but I'm out of ideas and I have other things that are more in need of my attention. Before I put it aside, I thought I'd ask for suggestions here...

    Read the article

  • Code sign error with Xcode 3.2

    - by quux
    I had a fully working build environment before upgrading to iPhone OS 3.1 and Xcode 3.2. Now when I try to do a build, i get the following: Code Sign error: Provisioning profile 'FooApp test' specifies the Application Identifier 'no.fooapp.iphoneapp' which doesn't match the current setting 'TGECMYZ3VK.no.fooapp.iphoneapp' The problem is that Xcode somehow manages to think that the "FooApp Test" provisioning profile specifies the Application Identifier "no.fooapp.iphoneapp", but this is not the case. In the Organizer (and in the iPhone developer portal website) the app identifier is correctly seen as 'TGECMYZ3VK.no.fooapp.iphoneapp'. Also, when setting the provisioning profile in the build options at the project level, Xcode correctly identifies the app identifier, but when I go to the target, I'm unable to select any valid provisioning profile. What could be causing this problem? Update: I've tried to create a new provisioning profile, but still no luck. I also tried simply changing the app identified in Info.plist to just "no.fooapp.iphoneapp". The build succeeds, but now I get an error from the Organizer: The executable was signed with invalid entitlements. The entitlements specified in your application's Code Signing Entitlements file do not match those specified in your provisioning profile. (0xE8008016). This seems reasonable, as the provisioning profile still has the "TGECMYZ3VK.no.fooapp.iphoneapp" application identifier. I also double checked that all certiicates are valid in the Keychain. So my question is how I can get Xcode to see the correct application identifier? UPDATE: As noted below, what seems to fix the problem is deleting all provisioning profiles, certificates etc, making new certificates / profiles and installing them again. If anyone has any other solutions, they would be welcome. :)

    Read the article

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view, in grid results mode, is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting it to XML, because SSMS lets you view much larger amounts of text that way: SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ... But now I am using SQL CE, and there is no Xml data type. There is still a "Maximum Characters Retreived XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can just get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value? [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best-to-worst: A solution that works just as easy as the XML trick in SQL CE. That is, a single function (convert, cast, etc.) that does the job. A not-too-invasive way to hack SSMS to get it to display more text in the results. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns??) to see the results. The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?

    Read the article

  • IIS7 Mixed Mode Authentication

    - by drachenstern
    We're getting ready to start migrating some of our IIS6 sites to IIS7, and the application currently uses Forms Authentication. We have started getting some requests from various sites to use the Windows Authentication for the users. While this is easy enough to implement (and I've shown internally that there is no issue with the app, as expected) the question then is how to continue to keep Forms authentication for when Integrated Windows doesn't work. I've seen several walkthroughs on how to have it configured on IIS6, and I could do the same thing on IIS7, but then I have to turn on Classic Mode processing. Any solution should also be back portable to IIS6, if possible, to keep the build tree simple. So what are my options on this? Do I setup the app with Integrated Windows Authentication in IIS7, Forms Auth in the web.config, and redirect 401 errors to an "error page" allowing them to login using forms, then back to the regular app? The case when Forms is likely to be needed is going to be reserved for Contract workers, our support staff, and if someone needs to access it on their site from their Extranet. So primarily it's for our staff to login to check functionality and confirm bug reports. I suggested we just maintain that for our support staff to work, we need a Windows login that will always be live, and then we'll just enforce local responsibility on who can login to the site, but I'm told that we would do better to have Forms Authentication. Any thoughts? I can post some of the links of the articles I've already read through if that would help the forum better narrow my needs. Many thanks. tl;dr: How to do mixed mode authentication (forms, windows) in IIS7 without changing to classic pipeline and still be able to use the build in IIS6 if possible.
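    For reference, the pattern that keeps the integrated pipeline is usually: leave Forms on for the application and switch Windows authentication on only for a small login area that mints the Forms ticket. A sketch of the IIS7 side follows; WinLogin is a made-up folder name, and both authentication sections must be unlocked in applicationHost.config before a web.config is allowed to set them:

        <!-- site-level web.config -->
        <system.web>
          <authentication mode="Forms">
            <forms loginUrl="~/Login.aspx" />
          </authentication>
        </system.web>

        <location path="WinLogin">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" />
              </authentication>
            </security>
          </system.webServer>
        </location>

    A page under WinLogin would read Request.LogonUserIdentity, call FormsAuthentication.SetAuthCookie(...), and redirect into the app; on IIS6 the same idea works with the directory-level authentication settings in the IIS manager instead of the <location> block.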

    Read the article

  • How to Authenticate to Active Directory Services (ADs) using .NET 3.5 / C#

    - by Ranger Pretzel
    After much struggling, I've figured out how to authenticate to my company's Active Directory using just 2 lines of code with the Domain, Username, and Password in .NET 2.0 (in C#): // set domain, username, password, and security parameters DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, username, password, AuthenticationTypes.Secure | AuthenticationTypes.SecureSocketsLayer); // force Bind to AD server to authenticate object obj = entry.NativeObject; If the 2nd line throws an exception, then the credentials and/or parameters were bad. (Specific reason can be found in the exception.) If no exception, then the credentials are good. Trying to do this in .NET 3.5 looks like it should be easy, but has me at a roadblock instead. Specifically, I've been working with this example: PrincipalContext domainContext = new PrincipalContext(ContextType.Domain, domain); using (domainContext) { return domainContext.ValidateCredentials(UserName, Password); } Unfortunately, this doesn't work for me as I don't have both ContextOptions set to Sealed/Secure and SSL (like I did above in the .NET 2.0 code.) There is an alternate constructor for PrincipalContext that allows setting the ContextOptions, but this also requires supplying a Distinguished Name (DN) of a Container Object and I don't know exactly what mine is or how I would find out. public PrincipalContext(ContextType contextType, string name, string container, ContextOptions options); // container: // The container on the store to use as the root of the context. All queries // are performed under this root, and all inserts are performed into this container. // For System.DirectoryServices.AccountManagement.ContextType.Domain and System.DirectoryServices.AccountManagement.ContextType.ApplicationDirectory // context types, this parameter is the distinguished name of a container object. Any suggestions?
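    On the container question: the distinguished name is normally just the domain's default naming context, i.e. the DNS name rewritten as DC= components, so the four-argument constructor can be fed something like the sketch below. The domain name and DN here are invented examples, not values from the question:

        string domain    = "corp.example.com";                 // hypothetical domain
        string container = "DC=corp,DC=example,DC=com";        // its default naming context as a DN

        using (var ctx = new PrincipalContext(
                   ContextType.Domain, domain, container,
                   ContextOptions.Negotiate | ContextOptions.SecureSocketLayer))
        {
            bool valid = ctx.ValidateCredentials(username, password);
        }

    The DN can also be read at runtime from LDAP://RootDSE's defaultNamingContext property instead of being hard-coded.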

    Read the article

  • Haskell: 'No instance for' arising from a trivial usage of Regex library

    - by artemave
    Following the (accepted) answer from this question, I am expecting the following to work: Prelude Text.Regex.Posix Text.Regex.Base.RegexLike Text.Regex.Posix.String> makeRegex ".*" (makeRegex is a shortcut for makeRegexOpts with predefined options) However, it doesn't: <interactive>:1:0: No instance for (RegexMaker regex compOpt execOpt [Char]) arising from a use of `makeRegex' at <interactive>:1:0-13 Possible fix: add an instance declaration for (RegexMaker regex compOpt execOpt [Char]) In the expression: makeRegex ".*" In the definition of `it': it = makeRegex ".*" Prelude Text.Regex.Posix Text.Regex.Base.RegexLike Text.Regex.Posix.String> make Regex ".*"::Regex <interactive>:1:0: No instance for (RegexMaker Regex compOpt execOpt [Char]) arising from a use of `makeRegex' at <interactive>:1:0-13 Possible fix: add an instance declaration for (RegexMaker Regex compOpt execOpt [Char]) In the expression: makeRegex ".*" :: Regex In the definition of `it': it = makeRegex ".*" :: Regex And I really don't understand why. EDIT Haskell Platform 2009.02.02 (GHC 6.10.4) on Windows EDIT2 Prelude Text.Regex.Base.RegexLike Text.Regex.Posix.String> :i RegexMaker class (RegexOptions regex compOpt execOpt) => RegexMaker regex compOpt execOpt source | regex -> compOpt execOpt, compOpt -> regex execOpt, execOpt -> regex compOpt where makeRegex :: source -> regex makeRegexOpts :: compOpt -> execOpt -> source -> regex makeRegexM :: (Monad m) => source -> m regex makeRegexOptsM :: (Monad m) => compOpt -> execOpt -> source -> m regex -- Defined in Text.Regex.Base.RegexLike

    Read the article

  • ImportError: No module named QtWebKit

    - by Hallik
    I am on centos5. I installed python26 source with a make altinstall. Then I did a: yum install qt4 yum install qt4-devel yum install qt4-doc From riverbankcomputing.co.uk I downloaded the source for sip 4.10.2, compiled and installed fine. Then from the same site I downloaded and compiled from source PyQt-x11-4.7.3 Both installs were using the python26 version (/usr/local/bin/python2.6). So configure.py, make, and make install worked with no errors. Finally, I tried to run this script, but got the error in the subject of this post: import sys import signal from PyQt4.QtCore import * from PyQt4.QtGui import * from PyQt4.QtWebKit import QWebPage def onLoadFinished(result): if not result: print "Request failed" sys.exit(1) #screen = QtGui.QDesktopWidget().screenGeometry() size = webpage.mainFrame().contentsSize() # Set the size of the (virtual) browser window webpage.setViewportSize(webpage.mainFrame().contentsSize()) # Paint this frame into an image image = QImage(webpage.viewportSize(), QImage.Format_ARGB32) painter = QPainter(image) webpage.mainFrame().render(painter) painter.end() image.save("output2.png") sys.exit(0) app = QApplication(sys.argv) signal.signal(signal.SIGINT, signal.SIG_DFL) webpage = QWebPage() webpage.connect(webpage, SIGNAL("loadFinished(bool)"), onLoadFinished) webpage.mainFrame().load(QUrl("http://www.google.com")) sys.exit(app.exec_()) Even in the beginning of the configure for pyqt4, I saw it say QtWebKit should be installed, but apparently it's not? What's going on? I just did a find, and it looks like it wasn't installed. What are my options? [root@localhost ~]# find / -name '*QtWebKit*' /root/PyQt-x11-gpl-4.7.3/sip/QtWebKit /root/PyQt-x11-gpl-4.7.3/sip/QtWebKit/QtWebKitmod.sip /root/PyQt-x11-gpl-4.7.3/cfgtest_QtWebKit.cpp
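    A couple of quick checks can confirm whether the Qt build itself is the gap, since QtWebKit first shipped with Qt 4.4 and PyQt's configure silently skips modules the installed Qt does not provide. These are only diagnostic sketches; the site-packages location depends on how python2.6 was installed:

        # Which Qt did PyQt's configure see? QtWebKit needs Qt 4.4 or newer.
        qmake -query QT_VERSION

        # Where did PyQt4 install for python2.6?
        python2.6 -c "import PyQt4, os; print(os.path.dirname(PyQt4.__file__))"
        # then list that directory and look for a QtWebKit module (QtWebKit.so);
        # if it is missing, PyQt was built against a Qt without WebKit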

    Read the article

  • System.Net.WebClient doesn't work with Windows Authentication

    - by Peter Hahndorf
    I am trying to use System.Net.WebClient in a WinForms application to upload a file to an IIS6 server which has Windows Authentication as it only 'Authentication' method. WebClient myWebClient = new WebClient(); myWebClient.Credentials = new System.Net.NetworkCredential(@"boxname\peter", "mypassword"); byte[] responseArray = myWebClient.UploadFile("http://localhost/upload.aspx", fileName); I get a 'The remote server returned an error: (401) Unauthorized', actually it is a 401.2 Both client and IIS are on the same Windows Server 2003 Dev machine. When I try to open the page in Firefox and enter the same correct credentials as in the code, the page comes up. However when using IE8, I get the same 401.2 error. Tried Chrome and Opera and they both work. I have 'Enable Integrated Windows Authentication' enabled in the IE Internet options. The Security Event Log has a Failure Audit: Logon Failure: Reason: An error occurred during logon User Name: peter Domain: boxname Logon Type: 3 Logon Process: ÈùÄ Authentication Package: NTLM Workstation Name: boxname Status code: 0xC000006D Substatus code: 0x0 Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: 127.0.0.1 Source Port: 1476 I used Process Monitor and Fiddler to investigate but to no avail. Why would this work for 3rd party browsers but not with IE or System.Net.WebClient?
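    One thing worth trying before digging further into IIS: pin the credentials to the NTLM scheme with a CredentialCache, since WebClient.Credentials with a bare NetworkCredential leaves scheme selection to the stack. A sketch against the code above, using the same URL and account from the question:

        CredentialCache cache = new CredentialCache();
        cache.Add(new Uri("http://localhost/"), "NTLM",
                  new NetworkCredential("peter", "mypassword", "boxname"));

        WebClient myWebClient = new WebClient();
        myWebClient.Credentials = cache;
        byte[] responseArray = myWebClient.UploadFile("http://localhost/upload.aspx", fileName);

    If that still returns 401.2, the LSA loopback check that Server 2003 SP1 introduced is another known cause of local-only NTLM failures and may be worth ruling out.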

    Read the article

  • How to setup Mercurial central repository on shared hosting

    - by Metropolis
    Hey Everyone, I am trying to setup a central repository with shared hosting. I read all the way through this tutorial http://mercurial.selenic.com/wiki/PublishingRepositories to no avail. Here are the steps I took. 1. Copy hgwebdir.cgi file to directory at http://url.com/central_repository/hgwebdir.cgi 2. Added the following information to the hgweb.config file and copied it to same place. [paths] projectname = /home/username/central_repository/projectname [web] baseurl = /hg 3. Added the following to an htaccess file and copied it to the same place # Taken from http://www.pmwiki.org/wiki/Cookbook/CleanUrls#samedir # Used at http://ggap.sf.net/hg/ Options +ExecCGI RewriteEngine On #write base depending on where the base url lives RewriteBase /hg RewriteRule ^$ hgwebdir.cgi [L] # Send requests for files that exist to those files. RewriteCond %{REQUEST_FILENAME} !-f # Send requests for directories that exist to those directories. RewriteCond %{REQUEST_FILENAME} !-d # Send requests to hgwebdir.cgi, appending the rest of url. RewriteRule (.*) hgwebdir.cgi/$1 [QSA,L] 4. Uploaded the repository without the working directory to /home/user/central_repository/projectname 5. Tried to clone the repository to my computer using the folloing destination path: http://url.com/hg/projectname After going through these steps I get a 404: Not Found error. However if I change the destination path to http://url.com/central_repository/projectname It acts like it found the repository, It tells me it found the changesets, and it was adding the changesets and manifests, but then it says "transaction abort! HTTP Error 500: Internal Server Error. Thanks for any help! Metropolis
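    Two details in the setup above are worth double-checking: the RewriteBase (/hg) does not match the directory the CGI actually lives in (/central_repository), which would explain the 404 on the /hg/projectname URL, and the 500 during the transfer is the kind of failure hgwebdir reports to Apache's error log, so that log is the next place to look. A sketch of a hgweb.config matching the layout described (username is the placeholder from the question; allow_push/push_ssl only matter once pushes are attempted):

        [paths]
        projectname = /home/username/central_repository/projectname

        [web]
        baseurl = /central_repository
        allow_push = *
        push_ssl = false

    with RewriteBase /central_repository in the .htaccess so the rewrite rules and the real path agree.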

    Read the article

  • How to remove FacesMessages from the FacesContext?

    - by gurupriyan.e
    In my screen I have a drop-down (select box); on selection of any of the options in that drop-down, I display one or more text boxes beside the select box using JavaScript/CSS - display:none and display:block. All these input controls are in the same JSF form. Each of the input controls has its own validator. The problem: suppose the user selects from the selection box and doesn't input a value, or inputs a wrong value for , I add a custom FacesMessage in the validator and it is shown appropriately; and suppose the user selects the second time and inputs the wrong value for the , then another FacesMessage is added in the validator. But now both the messages are shown - meaning the message for and - which is wrong. My assumption is that this happens because they exist in the same form and their instances are not yet destroyed in the FacesContext and in the UIView. I decided to delete the messages this way: Iterator<FacesMessage> msgIterator = FacesContext.getCurrentInstance().getMessages(); while(msgIterator.hasNext()) { msgIterator.next(); msgIterator.remove(); } But this sometimes gives java.util.NoSuchElementException at org.apache.myfaces.shared_impl.renderkit.html.HtmlMessagesRendererBase$MessagesIterator.next. So 2 questions: 1) What is the problem in deleting the FacesMessages this way? I am using myfaces-api-1.2.3.jar and myfaces-impl-1.2.3.jar. 2) Is there a better approach to handle my scenario? I only want to show relevant messages every time a JSF request is processed. Thanks
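    On the second question: rather than purging FacesMessages after the fact, scoping each message to its own component usually avoids the stale-message problem, because an <h:message for="..."> only renders messages queued for that one input. A sketch with made-up ids and bean properties (the real component ids, values, and validator method depend on the page):

        <h:selectOneMenu id="optionSelect" value="#{bean.selectedOption}">
            <f:selectItems value="#{bean.options}" />
        </h:selectOneMenu>
        <h:message for="optionSelect" />

        <h:inputText id="optionValue" value="#{bean.optionValue}"
                     validator="#{bean.validateOptionValue}" />
        <h:message for="optionValue" />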

    Read the article
