Search Results

Search found 10349 results on 414 pages for 'frank developer'.


  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 sessionStorage and localStorage are very useful and super easy to use for managing client-side state. For building rich client-side or SPA-style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server-rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

    SessionStorage and LocalStorage are easy
    The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface: a simple, string-based key/value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo-array objects and so can be iterated like an array: there's a length property and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");
        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is browser-session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and goes a long way.

    The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the localStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
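    Since the storage objects expose a length property and a key method, as noted above, you can also enumerate everything your app has stored. Here's a minimal sketch (mine, not from the original sample) that dumps all sessionStorage entries, assuming nothing more than the standard Web Storage API:

        // enumerate all key/value pairs currently in sessionStorage
        if (window.sessionStorage) {
            for (var i = 0; i < sessionStorage.length; i++) {
                var key = sessionStorage.key(i);         // key name at index i
                var value = sessionStorage.getItem(key); // always a string
                console.log(key + " = " + value);
            }
        }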
    The code to deal with the sessionStorage (and localStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

        function getSessionCount() {
            var count = 0;
            if (sessionStorage) {
                count = sessionStorage.getItem("ss_count");
                count = !count ? 0 : count * 1;
            }
            $("#txtSession").val(count);
            return count;
        }

        function setSessionCount(count) {
            if (sessionStorage)
                sessionStorage.setItem("ss_count", count.toString());
        }

    These two functions essentially load and store a session counter value. The two key methods used here are:

        sessionStorage.getItem(key);
        sessionStorage.setItem(key, stringVal);

    Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them in a single sessionStorage value:

        var user = { name: "Rick", id: "ricks", level: 8 };
        sessionStorage.setItem("app_user", JSON.stringify(user));

    To retrieve it:

        var user = sessionStorage.getItem("app_user");
        if (user)
            user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage.

    What we just looked at is a purely client-side implementation where a couple of counters are stored. For rich, client-centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server-centric HTML applications when you combine server rendering with some JavaScript to perform client-side data caching. You can either store some state information and data on the client (i.e. store a JSON object and carry it forth between server-rendered HTML requests) or you can use it for good old HTTP-based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real-life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?
    I don't know about you, but in a lot of my existing server-driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application-level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server-rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverflow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

    Content Caching
    If you're building client-centric SPA-style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server-rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A real World Use Case
    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames-based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames-based desktop site here: http://classifieds.gorge.net/

    However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile-capable operation: http://classifieds.gorge.net/mobile (or browse the base URL with your browser width under 800px).

    As you can see, this means that the list of classifieds posts now is a plain list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
    Using Client SessionStorage to cache Server Rendered Content
    To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client-side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and the overall process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

    Let's start with the server side here, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this:

        https://classifieds.gorge.net/list?back=True

    The query string value is a flag that indicates whether the server should render the HTML. Here's what the top-level MVC Razor view for the list page looks like:

        @model MessageListViewModel
        @{
            ViewBag.Title = "Classified Listing";
            bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
        }
        <form method="post" action="@Url.Action("list")">
            <div id="SizingContainer">
                @if (!isBack)
                {
                    @Html.Partial("List_CommandBar_Partial", Model)
                    <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                        @Html.Partial("List_Items_Partial", Model)
                        @if (Model.RequireLoadEntry)
                        {
                            <div class="postitem loadpostitems" style="padding: 15px;">
                                <div id="LoadProgress" class="smallprogressright"></div>
                                <div class="control-progress">
                                    Load additional listings...
                                </div>
                            </div>
                        }
                    </div>
                }
            </div>
        </form>

    As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

    Ok, let's move on to the client. On the client there are two page-level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions because they are almost always used in several places.

        page.saveData = function(id) {
            if (!sessionStorage)
                return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            sessionStorage.setItem("list_html", JSON.stringify(data));
        };

        page.restoreData = function() {
            if (!sessionStorage)
                return;
            var data = sessionStorage.getItem("list_html");
            if (!data)
                return null;
            return JSON.parse(data);
        };

    The saved data is an object containing the id of the element selected when the user clicks, plus the scroll position. These two values are used to reset the scroll position and selection when the data is restored from the cache.
    Finally the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server-side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK; since I'm using sessionStorage, the storage space poses no immediate limits here.

    Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

        $(function() {
            …
            var isBack = getUrlEncodedKey("back", location.href);
            if (isBack) {
                // remove the back key from URL
                setUrlEncodedKey("back", "", location.href);

                var data = page.restoreData(); // restore from sessionStorage
                if (!data) {
                    // no data - force redisplay of the server side default list
                    window.location = "list";
                    return;
                }
                $("#SizingContainer").html(data.html);
                var el = $(".postitem[data-id=" + data.id + "]");
                $(".postitem").removeClass("highlight");
                el.addClass("highlight");
                $("#PostItemContainer").scrollTop(data.scroll);
                setTimeout(function() {
                    el.removeClass("highlight");
                }, 2500);
            }
            else if (window.noFrames)
                page.saveData(null); // save when page loads

            $("#SizingContainer").on("click", ".postitem", function() {
                var id = $(this).attr("data-id");
                if (!id)
                    return true;

                if (window.noFrames)
                    page.saveData(id);

                var contentFrame = window.parent.frames["Content"];
                if (contentFrame)
                    contentFrame.location.href = "show/" + id;
                else
                    window.location.href = "show/" + id;
                return false;
            });
            …

    The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If present, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html value is restored by simply injecting the HTML back into the document's #SizingContainer element:

        $("#SizingContainer").html(data.html);

    It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with information about the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
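    The getUrlEncodedKey and setUrlEncodedKey helpers used above aren't shown in the post; here's a minimal sketch of what they might look like (my own hypothetical implementations, not code from the original application) - getUrlEncodedKey reads a query string value, and setUrlEncodedKey rewrites the URL in the address bar without it:

        // read a query string value from a URL (hypothetical helper, assumed behavior)
        function getUrlEncodedKey(key, url) {
            var match = new RegExp("[?&]" + key + "=([^&#]*)").exec(url);
            return match ? decodeURIComponent(match[1]) : null;
        }

        // remove a query string key from the address bar without reloading
        // (simplified: assumes the key is the only query parameter, as in the
        // back=true URLs above; uses history.replaceState where available)
        function setUrlEncodedKey(key, value, url) {
            var cleaned = url.replace(new RegExp("[?&]" + key + "=[^&#]*"), "");
            if (window.history && history.replaceState)
                history.replaceState(null, document.title, cleaned);
            return cleaned;
        }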
    The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for
    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

      - Event handling logic
      - Timing of manipulating DOM elements
      - Inline script code
      - Bookmarking the cache URL when no cache exists

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it normally would in the DOM rendering cycle.

    JavaScript Event Hookups
    The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

        $("#btnSearch").click(function() {…});

    that works fine when the page loads with server-rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

        $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

    Timing of manipulating DOM Elements
    Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line (see the small ordering sketch below).

    Inline Script Code
    Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).
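    To make the timing concern concrete, here's a minimal sketch (my own, not from the original application) of a page-init sequence that restores cached HTML first and only then touches the DOM, with event handlers bound via delegation so they survive both the server-rendered and cached paths:

        $(function() {
            // 1. restore cached content FIRST, so the DOM is complete
            var data = page.restoreData && page.restoreData();
            if (data)
                $("#SizingContainer").html(data.html);

            // 2. only now manipulate the DOM (widgets, measurements, scroll position)
            if (data)
                $("#PostItemContainer").scrollTop(data.scroll);

            // 3. bind events through a container that is never replaced,
            //    so the bindings work no matter where the HTML came from
            $("#SizingContainer").on("click", ".postitem", function() {
                /* navigate to detail */
            });
        });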
    Bookmarking Cached Urls
    When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. In the classifieds app I didn't render the server-side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

    Don't use <body> to replace Content
    Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top-level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

    Other Approaches
    The approach I've used for this application is kind of specific to the existing server-rendered application we're running, so it's just one approach you can take with caching. However, for server-rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

      - Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a Partial View and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching is handled on the client in this case (see the sketch below).
      - Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA-style and pull data from the server, then use client-side rendering or databinding to pull the data down and render using templates or client-side databinding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.
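    Here's a minimal sketch of the partial-rendering variation mentioned in the first bullet above (my own illustration - the /list/partial URL and the list_partial cache key are hypothetical, not from the classifieds app): the client asks for the list partial only when there's nothing cached, so the render and restore paths converge on the same code:

        function showList() {
            var cached = window.sessionStorage && sessionStorage.getItem("list_partial");
            if (cached) {
                // restore path: inject the cached partial
                $("#SizingContainer").html(cached);
                return;
            }
            // render path: fetch the partial view from the server, then cache it
            $.get("/list/partial", function(html) {
                $("#SizingContainer").html(html);
                if (window.sessionStorage)
                    sessionStorage.setItem("list_partial", html);
            });
        }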
    State of the Session
    SessionStorage and LocalStorage are easy to use in client code and can be integrated even with server-centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

    Resources
      - Overview of localStorage (also applies to sessionStorage)
      - Web Storage Compatibility
      - Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty.

    I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this:

        public class ConsoleAutomator {
            public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead;

            public void StartSendingEvents();
            public void StopSendingEvents();
        }

    StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent, without loss of generality. The class uses these two fields internally:

        protected readonly StringBuilder inputAccumulator = new StringBuilder();
        protected readonly byte[] buffer = new byte[256];

    The functionality of the class is implemented in the methods below. To get the ball rolling:

        public void StartSendingEvents()
        {
            this.stopAutomation = false;
            this.BeginReadAsync();
        }

    To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called:

        protected void BeginReadAsync()
        {
            if (!this.stopAutomation) {
                this.StandardOutput.BaseStream.BeginRead(
                    this.buffer, 0, this.buffer.Length, this.ReadHappened, null);
            }
        }

    The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read (the "incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream.
        private void ReadHappened(IAsyncResult asyncResult)
        {
            var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
            if (bytesRead == 0) {
                this.OnAutomationStopped();
                return;
            }

            var input = this.StandardOutput.CurrentEncoding.GetString(
                this.buffer, 0, bytesRead);
            this.inputAccumulator.Append(input);

            if (bytesRead < this.buffer.Length) {
                this.OnInputRead(); // only send back if we're sure we got it all
            }

            this.BeginReadAsync(); // continue "looping" with BeginRead
        }

    After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers:

        private void OnInputRead()
        {
            var handler = this.StandardOutputRead;
            if (handler == null) {
                return;
            }

            handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString()));
            this.inputAccumulator.Clear();
        }

    (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision.)

    The good
    This scheme works almost perfectly:

      - Async functionality without spawning any threads
      - Very convenient to the calling code (just subscribe to an event)
      - Never more than one event for each time data is available to be read
      - Is almost agnostic to the buffer size

    The bad
    That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there.

    Solution?
    Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem?

    Thanks for being patient enough to read all of this. Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: the question is about how to implement an async "read all the data available this moment" operation.
My apologies to the people who took the time to read and answer without me making my intent clear enough.

    Read the article

  • Something is making my page perform an Ajax call multiple times... [read: I've never been more frust

    - by Jack Webb-Heller
    NOTE: This is a long question. I've explained all the 'basics' at the top and then there's some further (optional) information if you need it. Hi folks. Basically last night this started happening at about 9PM whilst I was trying to restructure my code to make it a bit nicer for the designer to add a few bits to. I tried to fix it until 2AM, at which point I gave up. Came back to it this morning, still baffled. I'll be honest with you, I'm a pretty bad JavaScript developer. Since starting this project JavaScript has been completely new to me and I've just learned as I went along. So please forgive me if my code structure is really bad (perhaps give a couple of pointers on how to improve it?).

    So, to the problem: to reproduce it, visit http://furnace.howcode.com (it's far from complete). This problem is a little confusing but I'd really appreciate the help. In the second column you'll see three tabs. The 'Newest' tab is selected by default. Scroll to the bottom, and 3 further results should be dynamically fetched via Ajax. Now click on the 'Top Rated' tab. You'll see all the results, but ordered by rating. Scroll to the bottom of 'Top Rated'. You'll see SIX results returned. This is where it goes wrong. Only a further three should be returned (there are 18 entries in total). If you're observant you'll notice two 'blocks' of 3 returned. The first 'block' is the second page of results from the 'Newest' tab. The second block is what I actually want returned. Did that make any sense? Never mind!

    So basically I checked this out in Firebug. What happens is, from a 'clean' page (first load, nothing done) it makes ONE POST request to http://furnace.howcode.com/code/loadmore. But every time you load one of the tabs, it makes an ADDITIONAL POST request each time, where there should normally only be ONE. So, can you help me? I'd really appreciate it! At this point you could start independent investigation or read on for a little further (optional) information. Thanks! Jack

    Further Info (may be irrelevant but here for reference):
    It's almost like there's some JavaScript code or something being left behind that duplicates it each time. I thought it might be this code that I use to detect when the browser is scrolled to the bottom:

        var col = $('#col2');
        col.scroll(function(){
            if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop()))
                loadMore(1);
        });

    So what I thought was that this code was left behind, and so every time you scroll #col2 (which contains different data for each tab) it detected that and fired it for #newest as well. So, I made each tab click give #col2 a dynamic class - either .newestcol, .featuredcol, or .topratedcol. And then I changed the var col = $('.newestcol'); dynamically so it would only detect it individually for each tab (makin' any sense?!). But hey, that didn't do anything.
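    For what it's worth, the duplicated-request symptom described above is what you typically get when the same scroll handler is bound once per tab click without the previous binding ever being removed. A minimal sketch of one common fix (my own hypothetical code, not from the site - the .loadmore namespace is made up for illustration, and .off()/.on() require jQuery 1.7+; older versions would use unbind/bind) is to unbind before rebinding using a jQuery event namespace:

        // rebind the scroll handler on every tab switch without stacking duplicates
        function bindScrollLoader(kind) {
            var col = $('#col2');
            col.off('scroll.loadmore')               // remove any handler left by a previous tab
               .on('scroll.loadmore', function () {  // bind exactly one namespaced handler
                   if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop()))
                       loadMore(kind);
               });
        }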
    Another useful tidbit: here's the PHP for http://furnace.howcode.com/code/loadmore:

        $kind = $this->input->post('kind');
        if ($kind == 1){ // kind is 1 - newest
            $start = $this->input->post('currentpage');
            $data['query'] = "SELECT code.id AS codeid, code.title AS codetitle,
                code.summary AS codesummary, code.author AS codeauthor,
                code.rating AS rating, code.date, code_tags.*, tags.*,
                users.firstname AS authorname, users.id AS authorid,
                GROUP_CONCAT(tags.tag SEPARATOR ', ') AS taggroup
                FROM code, code_tags, tags, users
                WHERE users.id = code.author
                    AND code_tags.code_id = code.id
                    AND tags.id = code_tags.tag_id
                GROUP BY code_id
                ORDER BY date DESC
                LIMIT $start, 15";
            $this->load->view('code/ajaxlist', $data);
        } elseif ($kind == 2) { // kind is 2 - featured

    So my jQuery code sends a variable 'kind'. If it's 1, it runs the query for Newest, etc. The PHP code for furnace.howcode.com/code/ajaxlist is:

        <?php
        // Our query base
        // SELECT * FROM code ORDER BY date DESC
        $query = $this->db->query($query);
        foreach($query->result() as $row) { ?>
            <script type="text/javascript">
                $('#title-<?php echo $row->codeid;?>').click(function() {
                    var form_data = { id: <?php echo $row->codeid; ?> };
                    $('#col3').fadeOut('slow', function() {
                        $.ajax({
                            url: "<?php echo site_url('code/viewajax');?>",
                            type: 'POST',
                            data: form_data,
                            success: function(msg) {
                                $('#col3').html(msg);
                                $('#col3').fadeIn('fast');
                            }
                        });
                    });
                });
            </script>
            <div class="result">
                <div class="resulttext">
                    <div id="title-<?php echo $row->codeid; ?>" class="title">
                        <?php echo anchor('#', $row->codetitle); ?>
                    </div>
                    <div class="summary">
                        <?php echo $row->codesummary; ?>
                    </div>
                    <!-- Now insert the 5-star rating system -->
                    <?php include($_SERVER['DOCUMENT_ROOT']."/fivestars/5star.php");?>
                    <div class="bottom">
                        <div class="author">
                            Submitted by <?php echo anchor('auth/profile/'.$row->authorid, ''.$row->authorname);?>
                        </div>
                        <?php
                        // Now we need to take the GROUP_CONCATted tags and split them
                        // using the magic of PHP into separate tags
                        $tagarray = explode(", ", $row->taggroup);
                        foreach ($tagarray as $tag) { ?>
                            <div class="tagbutton" href="#">
                                <span><?php echo $tag; ?></span>
                            </div>
                        <?php } ?>
                    </div>
                </div>
            </div>
        <?php }
        echo "&nbsp;";?>
        <script type="text/javascript">
            var newpage = <?php echo $this->input->post('currentpage') + 15;?>;
        </script>

    So that's everything in PHP. The rest you should be able to view with Firebug or by viewing the source code. I've put all the tab/clicking/Ajax-loading bits in the tags at the very bottom. There's a comment before it all kicks off. Thanks so much for your help!
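    As an aside on the ajaxlist view above: it emits one inline <script> block per result row, so every reload of the list adds another set of click handlers. A hedged sketch of an alternative (my own illustration with hypothetical selectors, not code from the site) is a single delegated handler bound once at page load, which keeps working no matter how many rows are appended later:

        // bind once; works for rows added by any later Ajax load (jQuery 1.7+)
        $('#col2').on('click', '.result .title', function () {
            var id = $(this).attr('id').replace('title-', '');
            $('#col3').fadeOut('slow', function () {
                $.ajax({
                    url: '/code/viewajax',   // hypothetical path; site_url() resolves the real one
                    type: 'POST',
                    data: { id: id },
                    success: function (msg) {
                        $('#col3').html(msg).fadeIn('fast');
                    }
                });
            });
        });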

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage
    Let's first have a look at how to consume WAS through Node.js. As we saw in the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application.

    We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back to Visual Studio: open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot insert the record.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:
      1. Delete the table named "resource".
      2. Create a new table named "resource". These two steps ensure that we have an empty table.
      3. Load all records from the "resource" table in WASD.
      4. For each record loaded from WASD, insert it into the table one by one.
      5. Prompt the user when finished.

    In order to use the table service we need the storage account and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key:

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion operation in its POSIX async thread pool asynchronously, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback. If we placed the table creation code after the deletion code, the two operations would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no errors, then I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error occurred. Here we can send our response to the browser:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and now we can see that the response is sent after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method from the table service client, providing the partition key and row key. We can also provide more complex query criteria (see the sketch at the end of this section). In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response:

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    Then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.
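    As promised, here's a hedged sketch of a richer query. In the version of the "azure" module from this era a TableQuery could be composed and passed to queryEntities; the API surface has changed in later SDK versions, so treat the method names below as an assumption based on the early SDK rather than a current reference:

        // query all entities in a partition (i.e. all keys for one culture)
        // NOTE: azure.TableQuery / client.queryEntities are from the early
        // Windows Azure SDK for Node.js; later versions renamed these APIs
        var query = azure.TableQuery
            .select()
            .from(tableName)
            .where("PartitionKey eq ?", "en-US");

        client.queryEntities(query, function (error, entities) {
            if (error) {
                console.log(error);
            }
            else {
                entities.forEach(function (entity) {
                    console.log(entity.RowKey + " = " + entity.Value);
                });
            }
        });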
    Consume Service Runtime
    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code. If you have played with WACS, you know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local use. And we can also use an endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the values and initialize the table client:

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well:

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and be able to read the configurations. Starting the Node.js application in the local emulator, we can see that it retrieves the local connection string and storage account.
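    For reference, the CSDEF change described above might look something like this (a sketch based on the Windows Azure service definition schema of that time - the role name and command line are placeholders, so verify against your own project):

        <!-- ServiceDefinition.csdef: make node.exe the role's entry point -->
        <WorkerRole name="WorkerRole1">
          <Runtime>
            <EntryPoint>
              <ProgramEntryPoint commandLine="node.exe index.js"
                                 setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
        </WorkerRole>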
Starting the Node.js application in the local emulator, we can see that it retrieves the connection string and storage account for the local environment. And if we publish the application to Azure, it works with WASD and the storage service through the configurations for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and endpoint information; in order to make the service runtime available to my Node.js application, I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Sites and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But because it is simple, easy to learn and deploy, and built on a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new as well. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Implementing a robust async stream reader

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output, but that is a detail and does not play an important role. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.
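For reference, a minimal sketch of the timer-based variant of the workaround described above (flush if no further read completes within a short grace period after an exactly-full buffer) might look like this; all type and member names are assumed, and the grace period is chosen arbitrarily:

using System;
using System.Text;
using System.Threading;

// a sketch of the timeout idea from the question: if a read came back with the
// buffer exactly full and no further read completes within a short grace
// period, flush what has accumulated anyway; all names here are assumed
public class ChunkAccumulator
{
    private readonly StringBuilder accumulator = new StringBuilder();
    private readonly Timer flushTimer;
    private readonly object gate = new object();

    public event Action<string> ChunkReady;

    public ChunkAccumulator()
    {
        // created disarmed; armed only after a "buffer exactly full" read
        this.flushTimer = new Timer(_ => this.Flush(), null, Timeout.Infinite, Timeout.Infinite);
    }

    // called from the EndRead callback with the decoded text
    public void Append(string input, bool bufferWasFull)
    {
        lock (this.gate)
        {
            this.accumulator.Append(input);
            if (bufferWasFull)
            {
                this.flushTimer.Change(50, Timeout.Infinite); // grace period (ms) is a guess
            }
            else
            {
                this.flushTimer.Change(Timeout.Infinite, Timeout.Infinite); // disarm
                this.FlushLocked();
            }
        }
    }

    private void Flush()
    {
        lock (this.gate) { this.FlushLocked(); }
    }

    private void FlushLocked()
    {
        if (this.accumulator.Length == 0) return;
        var chunk = this.accumulator.ToString();
        this.accumulator.Clear();
        var handler = this.ChunkReady;
        if (handler != null) handler(chunk); // raised under the lock for brevity
    }
}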

    Read the article

  • Trying to randomise a jQuery content slider

    - by alecrust
    Hi everyone, I'm using a very nice jQuery content slider called Easy Slider on my site that I downloaded from Css Globe. The script is excellent and does just what I want - except I can't make it randomise the list, it always scrolls from left to right or right to left! I'm far from good with JavaScript, so my attempts at solving this have been feeble. Although I'm sure it must be an easy fix! If anyone wouldn't mind taking a glance over the script to see if they can spot what I need to change to make it random it would be greatly appreciated! I've tried contacting the original plugin developer but have had no response yet. The comments on the Easy Slider page didn't bear much fruit either unfortunately. I've pasted the script I'm using on my site below: /* * Easy Slider 1.7 - jQuery plugin * written by Alen Grakalic * http://cssglobe.com/post/4004/easy-slider-15-the-easiest-jquery-plugin-for-sliding * * Copyright (c) 2009 Alen Grakalic (http://cssglobe.com) * Dual licensed under the MIT (MIT-LICENSE.txt) * and GPL (GPL-LICENSE.txt) licenses. * * Built for jQuery library * http://jquery.com * */ (function($) { $.fn.easySlider = function(options){ // default configuration properties var defaults = { prevId: 'prevBtn', prevText: 'Previous', nextId: 'nextBtn', nextText: 'Next', controlsShow: true, controlsBefore: '', controlsAfter: '', controlsFade: true, firstId: 'firstBtn', firstText: 'First', firstShow: false, lastId: 'lastBtn', lastText: 'Last', lastShow: false, vertical: false, speed: 800, auto: false, pause: 7000, continuous: false, numeric: false, numericId: 'controls' }; var options = $.extend(defaults, options); this.each(function() { var obj = $(this); var s = $("li", obj).length; var w = $("li", obj).width(); var h = $("li", obj).height(); var clickable = true; obj.width(w); obj.height(h); obj.css("overflow","hidden"); var ts = s-1; var t = 0; $("ul", obj).css('width',s*w); if(options.continuous){ $("ul", obj).prepend($("ul li:last-child", obj).clone().css("margin-left","-"+ w +"px")); $("ul", obj).append($("ul li:nth-child(2)", obj).clone()); $("ul", obj).css('width',(s+1)*w); }; if(!options.vertical) $("li", obj).css('float','left'); if(options.controlsShow){ var html = options.controlsBefore; if(options.numeric){ html += '<ol id="'+ options.numericId +'"></ol>'; } else { if(options.firstShow) html += '<span id="'+ options.firstId +'"><a href=\"javascript:void(0);\">'+ options.firstText +'</a></span>'; html += ' <span id="'+ options.prevId +'"><a href=\"javascript:void(0);\">'+ options.prevText +'</a></span>'; html += ' <span id="'+ options.nextId +'"><a href=\"javascript:void(0);\">'+ options.nextText +'</a></span>'; if(options.lastShow) html += ' <span id="'+ options.lastId +'"><a href=\"javascript:void(0);\">'+ options.lastText +'</a></span>'; }; html += options.controlsAfter; $(obj).after(html); }; if(options.numeric){ for(var i=0;i<s;i++){ $(document.createElement("li")) .attr('id',options.numericId + (i+1)) .html('<a rel='+ i +' href=\"javascript:void(0);\">'+ (i+1) +'</a>') .appendTo($("#"+ options.numericId)) .click(function(){ animate($("a",$(this)).attr('rel'),true); }); }; } else { $("a","#"+options.nextId).click(function(){ animate("next",true); }); $("a","#"+options.prevId).click(function(){ animate("prev",true); }); $("a","#"+options.firstId).click(function(){ animate("first",true); }); $("a","#"+options.lastId).click(function(){ animate("last",true); }); }; function setCurrent(i){ i = parseInt(i)+1; $("li", "#" + options.numericId).removeClass("current"); 
$("li#" + options.numericId + i).addClass("current"); }; function adjust(){ if(t>ts) t=0; if(t<0) t=ts; if(!options.vertical) { $("ul",obj).css("margin-left",(t*w*-1)); } else { $("ul",obj).css("margin-left",(t*h*-1)); } clickable = true; if(options.numeric) setCurrent(t); }; function animate(dir,clicked){ if (clickable){ clickable = false; var ot = t; switch(dir){ case "next": t = (ot>=ts) ? (options.continuous ? t+1 : ts) : t+1; break; case "prev": t = (t<=0) ? (options.continuous ? t-1 : 0) : t-1; break; case "first": t = 0; break; case "last": t = ts; break; default: t = dir; break; }; var diff = Math.abs(ot-t); var speed = diff*options.speed; if(!options.vertical) { p = (t*w*-1); $("ul",obj).animate( { marginLeft: p }, { queue:false, duration:speed, complete:adjust } ); } else { p = (t*h*-1); $("ul",obj).animate( { marginTop: p }, { queue:false, duration:speed, complete:adjust } ); }; if(!options.continuous && options.controlsFade){ if(t==ts){ $("a","#"+options.nextId).hide(); $("a","#"+options.lastId).hide(); } else { $("a","#"+options.nextId).show(); $("a","#"+options.lastId).show(); }; if(t==0){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); } else { $("a","#"+options.prevId).show(); $("a","#"+options.firstId).show(); }; }; if(clicked) clearTimeout(timeout); if(options.auto && dir=="next" && !clicked){; timeout = setTimeout(function(){ animate("next",false); },diff*options.speed+options.pause); }; }; }; // init var timeout; if(options.auto){; timeout = setTimeout(function(){ animate("next",false); },options.pause); }; if(options.numeric) setCurrent(0); if(!options.continuous && options.controlsFade){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); }; }); }; })(jQuery); Many thanks again! Alec

    Read the article

  • Android: restful API service

    - by Martyn
    Hey, I'm looking to make a service which I can use to make calls to a web based rest api. I've spent a couple of days looking through stackoverflow.com, reading books and looking at articles whilst playing about with some code and I can't get anything which I'm happy with. Basically I want to start a service on app init then I want to be able to ask that service to request a url and return the results. In the meantime I want to be able to display a progress window or something similar. I've created a service currently which uses IDL, I've read somewhere that you only really need this for cross app communication, so think these needs stripping out but unsure how to do callbacks without it. Also when I hit the post(Config.getURL("login"), values) the app seems to pause for a while (seems weird - thought the idea behind a service was that it runs on a different thread!) Currently I have a service with post and get http methods inside, a couple of AIDL files (for two way communication), a ServiceManager which deals with starting, stopping, binding etc to the service and I'm dynamically creating a Handler with specific code for the callbacks as needed. I don't want anyone to give me a complete code base to work on, but some pointers would be greatly appreciated; even if it's to say I'm doing it completely wrong. I'm pretty new to Android and Java dev so if there are any blindingly obvious mistakes here - please don't think I'm a rubbish developer, I'm just wet behind the ears and would appreciate being told exactly where I'm going wrong. Anyway, code in (mostly) full (really didn't want to put this much code here, but I don't know where I'm going wrong - apologies in advance): public class RestfulAPIService extends Service { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public void onStart(Intent intent, int startId) { super.onStart(intent, startId); } public IBinder onBind(Intent intent) { return binder; } public void onCreate() { super.onCreate(); } public void onDestroy() { super.onDestroy(); mCallbacks.kill(); } private final IRestfulService.Stub binder = new IRestfulService.Stub() { public void doLogin(String username, String password) { Message msg = new Message(); Bundle data = new Bundle(); HashMap<String, String> values = new HashMap<String, String>(); values.put("username", username); values.put("password", password); String result = post(Config.getURL("login"), values); data.putString("response", result); msg.setData(data); msg.what = Config.ACTION_LOGIN; mHandler.sendMessage(msg); } public void registerCallback(IRemoteServiceCallback cb) { if (cb != null) mCallbacks.register(cb); } }; private final Handler mHandler = new Handler() { public void handleMessage(Message msg) { // Broadcast to all clients the new value. 
final int N = mCallbacks.beginBroadcast(); for (int i = 0; i < N; i++) { try { switch (msg.what) { case Config.ACTION_LOGIN: mCallbacks.getBroadcastItem(i).userLogIn( msg.getData().getString("response")); break; default: super.handleMessage(msg); return; } } catch (RemoteException e) { } } mCallbacks.finishBroadcast(); } public String post(String url, HashMap<String, String> namePairs) {...} public String get(String url) {...} }; A couple of AIDL files: package com.something.android oneway interface IRemoteServiceCallback { void userLogIn(String result); } and package com.something.android import com.something.android.IRemoteServiceCallback; interface IRestfulService { void doLogin(in String username, in String password); void registerCallback(IRemoteServiceCallback cb); } and the service manager: public class ServiceManager { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public IRestfulService restfulService; private RestfulServiceConnection conn; private boolean started = false; private Context context; public ServiceManager(Context context) { this.context = context; } public void startService() { if (started) { Toast.makeText(context, "Service already started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.startService(i); started = true; } } public void stopService() { if (!started) { Toast.makeText(context, "Service not yet started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.stopService(i); started = false; } } public void bindService() { if (conn == null) { conn = new RestfulServiceConnection(); Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.bindService(i, conn, Context.BIND_AUTO_CREATE); } else { Toast.makeText(context, "Cannot bind - service already bound", Toast.LENGTH_SHORT).show(); } } protected void destroy() { releaseService(); } private void releaseService() { if (conn != null) { context.unbindService(conn); conn = null; Log.d(LOG_TAG, "unbindService()"); } else { Toast.makeText(context, "Cannot unbind - service not bound", Toast.LENGTH_SHORT).show(); } } class RestfulServiceConnection implements ServiceConnection { public void onServiceConnected(ComponentName className, IBinder boundService) { restfulService = IRestfulService.Stub.asInterface((IBinder) boundService); try { restfulService.registerCallback(mCallback); } catch (RemoteException e) {} } public void onServiceDisconnected(ComponentName className) { restfulService = null; } }; private IRemoteServiceCallback mCallback = new IRemoteServiceCallback.Stub() { public void userLogIn(String result) throws RemoteException { mHandler.sendMessage(mHandler.obtainMessage(Config.ACTION_LOGIN, result)); } }; private Handler mHandler; public void setHandler(Handler handler) { mHandler = handler; } } Service init and bind: // this I'm calling on app onCreate servicemanager = new ServiceManager(this); servicemanager.startService(); servicemanager.bindService(); application = (ApplicationState)this.getApplication(); application.setServiceManager(servicemanager); service function call: // this lot i'm calling as required - in this example for login progressDialog = new ProgressDialog(Login.this); progressDialog.setMessage("Logging you in..."); progressDialog.show(); application = 
(ApplicationState) getApplication(); servicemanager = application.getServiceManager(); servicemanager.setHandler(mHandler); try { servicemanager.restfulService.doLogin(args[0], args[1]); } catch (RemoteException e) { e.printStackTrace(); } ...later in the same file... Handler mHandler = new Handler() { public void handleMessage(Message msg) { switch (msg.what) { case Config.ACTION_LOGIN: if (progressDialog.isShowing()) { progressDialog.dismiss(); } try { ...process login results... } } catch (JSONException e) { Log.e("JSON", "There was an error parsing the JSON", e); } break; default: super.handleMessage(msg); } } }; Any and all help is greatly appreciated and I'll even buy you a coffee or a beer if you fancy :D Martyn
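For reference, one pointer on the pause you noticed: a Service does not get its own thread, and an in-process bound service call like doLogin runs on the caller's (UI) thread. A minimal sketch of pushing the blocking post() onto a worker thread, reusing the names from the code above, might look like this:

public void doLogin(final String username, final String password) {
    // post() blocks, and an in-process binder call executes on the caller's
    // thread, so move the network work onto a background thread
    new Thread(new Runnable() {
        public void run() {
            HashMap<String, String> values = new HashMap<String, String>();
            values.put("username", username);
            values.put("password", password);
            String result = post(Config.getURL("login"), values);
            Message msg = mHandler.obtainMessage(Config.ACTION_LOGIN);
            Bundle data = new Bundle();
            data.putString("response", result);
            msg.setData(data);
            mHandler.sendMessage(msg);
        }
    }).start();
}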

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • getting Null pointer exception

    - by Abhijeet
    Hi I am getting this message on emulator when I run my android project: The application MediaPlayerDemo_Video.java (process com.android.MediaPlayerDemo_Video) has stopped unexpectedly. Please try again I am trying to run the MediaPlayerDemo_Video.java given in ApiDemos in the Samples given on developer.android.com. The code is : package com.android.MediaPlayerDemo_Video; import android.app.Activity; import android.media.AudioManager; import android.media.MediaPlayer; import android.media.MediaPlayer.OnBufferingUpdateListener; import android.media.MediaPlayer.OnCompletionListener; import android.media.MediaPlayer.OnPreparedListener; import android.media.MediaPlayer.OnVideoSizeChangedListener; import android.os.Bundle; import android.util.Log; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.widget.Toast; public class MediaPlayerDemo_Video extends Activity implements OnBufferingUpdateListener, OnCompletionListener, OnPreparedListener, OnVideoSizeChangedListener, SurfaceHolder.Callback { private static final String TAG = "MediaPlayerDemo"; private int mVideoWidth; private int mVideoHeight; private MediaPlayer mMediaPlayer; private SurfaceView mPreview; private SurfaceHolder holder; private String path; private Bundle extras; private static final String MEDIA = "media"; // private static final int LOCAL_AUDIO = 1; // private static final int STREAM_AUDIO = 2; // private static final int RESOURCES_AUDIO = 3; private static final int LOCAL_VIDEO = 4; private static final int STREAM_VIDEO = 5; private boolean mIsVideoSizeKnown = false; private boolean mIsVideoReadyToBePlayed = false; /** * * Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.mediaplayer_2); mPreview = (SurfaceView) findViewById(R.id.surface); holder = mPreview.getHolder(); holder.addCallback(this); holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); extras = getIntent().getExtras(); } private void playVideo(Integer Media) { doCleanUp(); try { switch (Media) { case LOCAL_VIDEO: // Set the path variable to a local media file path. path = ""; if (path == "") { // Tell the user to provide a media file URL. Toast .makeText( MediaPlayerDemo_Video.this, "Please edit MediaPlayerDemo_Video Activity, " + "and set the path variable to your media file path." + " Your media file must be stored on sdcard.", Toast.LENGTH_LONG).show(); } break; case STREAM_VIDEO: /* * Set path variable to progressive streamable mp4 or * 3gpp format URL. Http protocol should be used. * Mediaplayer can only play "progressive streamable * contents" which basically means: 1. the movie atom has to * precede all the media data atoms. 2. The clip has to be * reasonably interleaved. * */ path = ""; if (path == "") { // Tell the user to provide a media file URL. 
Toast .makeText( MediaPlayerDemo_Video.this, "Please edit MediaPlayerDemo_Video Activity," + " and set the path variable to your media file URL.", Toast.LENGTH_LONG).show(); } break; } // Create a new media player and set the listeners mMediaPlayer = new MediaPlayer(); mMediaPlayer.setDataSource(path); mMediaPlayer.setDisplay(holder); mMediaPlayer.prepare(); mMediaPlayer.setOnBufferingUpdateListener(this); mMediaPlayer.setOnCompletionListener(this); mMediaPlayer.setOnPreparedListener(this); mMediaPlayer.setOnVideoSizeChangedListener(this); mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); } catch (Exception e) { Log.e(TAG, "error: " + e.getMessage(), e); } } public void onBufferingUpdate(MediaPlayer arg0, int percent) { Log.d(TAG, "onBufferingUpdate percent:" + percent); } public void onCompletion(MediaPlayer arg0) { Log.d(TAG, "onCompletion called"); } public void onVideoSizeChanged(MediaPlayer mp, int width, int height) { Log.v(TAG, "onVideoSizeChanged called"); if (width == 0 || height == 0) { Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")"); return; } mIsVideoSizeKnown = true; mVideoWidth = width; mVideoHeight = height; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void onPrepared(MediaPlayer mediaplayer) { Log.d(TAG, "onPrepared called"); mIsVideoReadyToBePlayed = true; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) { Log.d(TAG, "surfaceChanged called"); } public void surfaceDestroyed(SurfaceHolder surfaceholder) { Log.d(TAG, "surfaceDestroyed called"); } public void surfaceCreated(SurfaceHolder holder) { Log.d(TAG, "surfaceCreated called"); playVideo(extras.getInt(MEDIA)); } @Override protected void onPause() { super.onPause(); releaseMediaPlayer(); doCleanUp(); } @Override protected void onDestroy() { super.onDestroy(); releaseMediaPlayer(); doCleanUp(); } private void releaseMediaPlayer() { if (mMediaPlayer != null) { mMediaPlayer.release(); mMediaPlayer = null; } } private void doCleanUp() { mVideoWidth = 0; mVideoHeight = 0; mIsVideoReadyToBePlayed = false; mIsVideoSizeKnown = false; } private void startVideoPlayback() { Log.v(TAG, "startVideoPlayback"); holder.setFixedSize(mVideoWidth, mVideoHeight); mMediaPlayer.start(); } } I think the above message is due to Null pointer exception , however I may be false. I am unable to find where the error is . So , Please someone help me out .
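For reference, one likely culprit visible in the posted code: extras = getIntent().getExtras() returns null when the activity is launched without any extras, so extras.getInt(MEDIA) in surfaceCreated throws a NullPointerException. A guarded sketch (the fallback value is an assumption) might look like:

public void surfaceCreated(SurfaceHolder holder) {
    Log.d(TAG, "surfaceCreated called");
    // getIntent().getExtras() is null when no extras were supplied,
    // so guard before dereferencing it
    if (extras != null) {
        playVideo(extras.getInt(MEDIA));
    } else {
        playVideo(LOCAL_VIDEO); // assumed fallback; still requires the path variable to be set
    }
}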

    Read the article

  • RemoteViewsService never gets called whenever we update app widget

    - by user3689160
    I am making one task widget application. This is a simple widget application which can be used to add new tasks by clicking on "+New Task". So ideally what I have done is, I've added a widget first, then when we click on "+New Task" a new activity opens up from where we can type in a new task and it should get updated in the ListView inside the home screen widget. Problem: Now whenever I add a new task, the ListView does not get updated. The data gets inside the database a new is created as well. My onUpdate(...) method gets called as well. There is a WidgetService.java which is a RemoteViewService that is being used to call the RemoteViewFactory but WidgetService.java gets called only when we create the widget, never after that. WidgetService.java never gets called again. I have the following setup MyWidgetProvider.java AppWidgetProvider class for updating the widget and filling its remote views. public class MyWidgetProvider extends AppWidgetProvider { public static int randomNumber=132; static RemoteViews remoteViews = null; @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { updateList(context,appWidgetManager,appWidgetIds); } public static void updateList(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds){ final int N = appWidgetIds.length; Log.d("MyWidgetProvider", "length="+appWidgetIds.length+""); for (int i = 0; i<N; ++i) { Log.d("MyWidgetProvider", "value"+ i + "="+appWidgetIds[i]); // Calling updateWidgetListView(context, appWidgetIds[i]); to load the listview with data remoteViews = updateWidgetListView(context, appWidgetIds[i]); // Updating the Widget with the new data filled remoteviews appWidgetManager.updateAppWidget(appWidgetIds[i], remoteViews); appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetIds[i], R.id.lv_tasks); } } private static RemoteViews updateWidgetListView(Context context, int appWidgetId) { Intent svcIntent = null; remoteViews = new RemoteViews(context.getPackageName(),R.layout.widget_demo); //RemoteViews Service needed to provide adapter for ListView svcIntent = new Intent(context, WidgetService.class); //passing app widget id to that RemoteViews Service svcIntent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, appWidgetId); //setting a unique Uri to the intent //binding the data from the WidgetService to the svcIntent svcIntent.setData(Uri.parse(svcIntent.toUri(Intent.URI_INTENT_SCHEME))); //setting adapter to listview of the widget remoteViews.setRemoteAdapter(R.id.lv_tasks, svcIntent); //setting click listner on the "New Task" (TextView) of the widget remoteViews.setOnClickPendingIntent(R.id.tv_add_task, buildButtonPendingIntent(context)); return remoteViews; } // Just to create PendingIntent, this will be broadcasted everytime textview containing "New Task" is clicked and OnReceive method of MyWidgetIntentReceiver.java will run public static PendingIntent buildButtonPendingIntent(Context context) { Intent intent = new Intent(); intent.putExtra("TASK_DONE_VALUE_VALID", false); intent.setAction("com.ommzi.intent.action.CLEARTASK"); return PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT); } // This method is defined here so that we can update the remoteviews from other classes as well, like we'll do it in MyWidgetIntentReceiver.java public static void pushWidgetUpdate(Context context, RemoteViews remoteViews) { ComponentName myWidget = new ComponentName(context, MyWidgetProvider.class); AppWidgetManager manager = 
AppWidgetManager.getInstance(context); manager.updateAppWidget(myWidget, remoteViews); } } WidgetService.java A RemoteViewsService class for creating a RemoteViewsFactory. public class WidgetService extends RemoteViewsService { @Override public RemoteViewsFactory onGetViewFactory(Intent intent) { int appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); return (new ListProvider(this.getApplicationContext(), intent)); } } ListProvider.java A RemoteViewsFactory class for filling the ListView inside the widget. public class ListProvider implements RemoteViewsFactory { private List<TemplateTaskData> tasks; private Context context = null; private int appWidgetId; int increment=0; public ListProvider(Context context, Intent intent) { this.context = context; appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); //appWidgetId = Integer.valueOf(intent.getData().getSchemeSpecificPart())- MyWidgetProvider.randomNumber; populateListItem(); } private void populateListItem() { tasks = new ArrayList<TemplateTaskData>(); com.ommzi.database.sqliteDataBase letStartDB = new com.ommzi.database.sqliteDataBase(context); letStartDB.open(); tasks=letStartDB.getData(); letStartDB.close(); } public int getCount() { return tasks.size(); } public long getItemId(int position) { return position; } public RemoteViews getViewAt(int position) { // We are only showing the text view field here as there is limitation on using Checkbox within the widget // We prefer to on an activity for the user to make comprehensive changes as changes inside the widget is always limited. // To view the supported list of views for the widget you can visit http://developer.android.com/guide/topics/appwidgets/index.html final RemoteViews remoteView = new RemoteViews(context.getPackageName(), R.layout.screen_wcitem); TemplateTaskData taskDTO = tasks.get(position); Log.d("My ListProvider", "1"); remoteView.setTextViewText(R.id.wc_add_task, taskDTO.getTaskName()); if(taskDTO.getTaskDone()){ remoteView.setImageViewResource(R.id.imageView1, R.drawable.checked); }else{ remoteView.setImageViewResource(R.id.imageView1, R.drawable.unchecked); } Intent intent = new Intent(); intent.setAction("com.ommzi.intent.action.GETTASK"); intent.putExtra("TASK_DONE_VALUE_VALID", true); intent.putExtra("ROWID", taskDTO.getRowId()); intent.putExtra("TASK_NAME", taskDTO.getTaskName()); intent.putExtra("TASK_DONE_VALUE", taskDTO.getTaskDone()); remoteView.setOnClickPendingIntent(R.id.imageView1, PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT)); Log.d("My ListProvider, Inside getView", "Value of Incrementor=" + increment + "\n ROWID=" + taskDTO.getRowId() + "\n TASK_NAME=" + taskDTO.getTaskName() + "\n TASK_DONE_VALUE=" +taskDTO.getTaskDone() + "\n Total Database Size=" + tasks.size()); return remoteView; } public RemoteViews getLoadingView() { // TODO Auto-generated method stub return null; } public int getViewTypeCount() { // TODO Auto-generated method stub return 1; } public boolean hasStableIds() { // TODO Auto-generated method stub return false; } public void onCreate() { // TODO Auto-generated method stub } public void onDataSetChanged() { // TODO Auto-generated method stub } public void onDestroy() { // TODO Auto-generated method stub } } GetTaskActivity.java This is another activity that is fired from a BroadcastReceiver(MyWidgetIntentReceiver.java). Purpose of this is to get a new task added to the list of task. 
MyWidgetIntentReceiver.java BroadcastReceiver fired using a pending intent that starts an activity GetTaskActivity.java. Please help me out with this problem. I am seen all the posts here and on other websites no body is having any solution. But I am sure a solution exist to this. Thanks for your help!
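For reference, one thing that stands out in the posted code: notifyAppWidgetViewDataChanged() causes the factory's onDataSetChanged() to run, but that method is an empty stub in ListProvider, so the task list is never re-read. A minimal sketch of reloading there might look like this:

public void onDataSetChanged() {
    // invoked by notifyAppWidgetViewDataChanged(); re-read the tasks so
    // getViewAt() serves fresh data on the next widget update
    populateListItem();
}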

    Read the article

  • Where should I put zoomIn in my MapActivity?

    - by Johny
    I'm writing an Android app, and I'd like to zoomIn as soon as the map has been loaded. I get the following error: java.lang.IllegalArgumentException: width and height must be > 0 This MapActivity - width and height must be > 0 question suggests the problem is the zoomIn() method is in the onCreate() method. But I get same error when I put it in the onResume() method. I've been searching for hours and I can't find anything about it at http://developer.android.com or anywhere else... Also I can't find a way to get the time point the map has been loaded. A "MapLoadedListener" or something like that... EDIT Here is my code: public class AMap extends MapActivity{ private final String LOG_TAG = this.getClass().getSimpleName(); private Context mContext; private Chronometer timer; private TextView tvCountdown; private RelativeLayout rl; private MapView mapView; private MapController mapController; private List<Overlay> mapOverlays; private PlayersOverlay playersOverlay; private Drawable drawable; private Builder endDialog; private ContextThemeWrapper ctw; private Handler mHandler = new Handler(); private Player player = new Player(); private StartTask startTask; private EndTask endTask; private MyDBAdapter mdba; private Cursor playersCursor; private UpdateBroadcastReceiver r; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.map_view); mContext = AMap.this; // set map mapView = (MapView) findViewById(R.id.mapview); mapView.setBuiltInZoomControls(true); mapView.setFocusable(true); // find the relative layout rl = (RelativeLayout) findViewById(R.id.rl); // set the chronometer timer = (Chronometer) findViewById(R.id.tv_timer); timer.setBackgroundColor(Color.DKGRAY); // set the countdown textview tvCountdown = (TextView) findViewById(R.id.tv_countdown); // Open DB connection and get players Cursor mdba = new MyDBAdapter(mContext); mdba.open(); playersCursor = mdba.getGame(); // Get this player's id and location Intent starter = this.getIntent(); player.setId(starter.getIntExtra("id", 0)); player.setLatitude(starter.getDoubleExtra("lat", 0)); player.setLongitude(starter.getDoubleExtra("lon", 0)); // Set this player's location as map's center GeoPoint geoPoint = new GeoPoint((int) (player.getLatitude()*1E6), (int) (player.getLongitude()*1E6)); mapController = mapView.getController(); mapController.setCenter(geoPoint); mapController.setZoom(15); Log.d(LOG_TAG, "My playersCursor has "+playersCursor.getCount()+" rows"); // drawable is needed but not used drawable = this.getResources().getDrawable(R.drawable.ic_launcher); // set PlayersOverlay (locations and statuses) playersOverlay = new PlayersOverlay(player.getId(), playersCursor, drawable, this); mapOverlays = mapView.getOverlays(); mapOverlays.add(playersOverlay); mHandler.postDelayed(mUpdateTimeTask, 100); } private Runnable mUpdateTimeTask = new Runnable() { public void run() { int h = mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, "w = "+w+" , h = "+h); mHandler.postAtTime(this, System.currentTimeMillis() + 1000); } }; @Override public void onAttachedToWindow(){ Log.d(LOG_TAG, "Attached to Window"); int h = mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, " Attached to window: w = "+w+" , h = "+h); //mapController.zoomInFixing(screenPoint.x, screenPoint.y); } public void onWindowFocusChanged(boolean hasFocus){ Log.d(LOG_TAG, "Focus changed to: "+hasFocus); int h = 
mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, " Window focus changed: w = "+w+" , h = "+h); //mapController.zoomInFixing(screenPoint.x, screenPoint.y); } @Override protected void onStart(){ super.onStart(); // Create and register the broadcast receiver for messages from service IntentFilter filter = new IntentFilter(AppConstants.iGAME_UPDATE); r = new UpdateBroadcastReceiver(); registerReceiver(r, filter); // Create the dialog for end of game ctw = new ContextThemeWrapper(mContext, android.R.style.Theme_Translucent_NoTitleBar_Fullscreen); endDialog = new AlertDialog.Builder(ctw); endDialog.setMessage("End of Game"); endDialog.setCancelable(false); endDialog.setNeutralButton("OK", new OnClickListener(){ @Override public void onClick(DialogInterface dialog, int which) { Intent highScores = new Intent(AMap.this, HighScores.class); startActivity(highScores); playersCursor.close(); finish(); } }); } @Override protected void onStop() { if(!playersCursor.isClosed()) playersCursor.close(); unregisterReceiver(r); mdba.close(); super.onStop(); } @Override protected boolean isRouteDisplayed() { return false; } // Receives signal from NetworkService that DB has been updated public class UpdateBroadcastReceiver extends BroadcastReceiver { boolean startSignal, update, endSignal; @Override public void onReceive(Context context, Intent intent) { endSignal = intent.getBooleanExtra("endSignal", false); if(endSignal){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received End Signal"); endTask = new EndTask(); endTask.execute(); return; } update = intent.getBooleanExtra("update", false); if(update){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received game update"); playersCursor.requery(); mapView.invalidate(); return; } startSignal = intent.getBooleanExtra("startSignal", false); if(startSignal){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received Start Signal"); startTask = new StartTask(); startTask.execute(); return; } } } class StartTask extends AsyncTask<Void,Integer,Void>{ private final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 100); private final long DELAY = 1200; @Override protected Void doInBackground(Void... params) { int i = 3; while(i>=0){ publishProgress(i); try { Thread.sleep(DELAY); } catch (InterruptedException e) { e.printStackTrace(); } i--; } return null; } @Override protected void onProgressUpdate(Integer... progress){ tg.startTone(ToneGenerator.TONE_PROP_PROMPT); tvCountdown.setText(""+progress[0]); } @Override protected void onPostExecute(Void result) { rl.removeView(tvCountdown); timer.setBase(SystemClock.elapsedRealtime()); timer.start(); //enable screen touches playersOverlay.setGameStarted(true); } } class EndTask extends AsyncTask<Void,Void,Void>{ @Override protected void onPreExecute(){ //disable screen touches playersOverlay.setEndOfGame(true); timer.stop(); } @Override protected Void doInBackground(Void... params) { return null; } @Override protected void onPostExecute(Void result) { try{ endDialog.show(); }catch(Exception e){ Toast.makeText(mContext, "End of game", Toast.LENGTH_LONG); Intent highScores = new Intent(AMap.this, HighScores.class); startActivity(highScores); playersCursor.close(); finish(); } mHandler.removeCallbacks(mUpdateTimeTask); } } }
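For reference, a minimal sketch of deferring the zoom until the MapView actually has a size, using the onWindowFocusChanged hook already present in the code (the center point chosen here is an assumption):

@Override
public void onWindowFocusChanged(boolean hasFocus) {
    super.onWindowFocusChanged(hasFocus);
    // by the time the window gains focus the MapView has been measured,
    // so width/height are non-zero and zooming no longer throws
    if (hasFocus && mapView.getWidth() > 0 && mapView.getHeight() > 0) {
        mapController.zoomInFixing(mapView.getWidth() / 2, mapView.getHeight() / 2);
    }
}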

    Read the article

  • Problem with jquery ajax form on Codeigniter

    - by Code Burn
    Everytime I test the email is send correctly. (I have tested in PC: IE6, IE7, IE8, Safari, Firefox, Chrome. MAC: Safari, Firefox, Chrome.) The _POST done in jquery (javascript). Then when I turn off javascript in my browser nothing happens, because nothing is _POSTed. Nome: Jon Doe Empresa: Star Cargo: Developer Email: [email protected] Telefone: 090909222988 Assunto: Subject here.. But I keep recieving emails like this from costumers: Nome: Empresa: Cargo: Email: Telefone: Assunto: CONTACT_FORM.PHP <form name="frm" id="frm"> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Nome<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="Cnome" id="Cnome" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Empresa<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CEmpresa" id="CEmpresa" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Cargo</div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CCargo" id="CCargo" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Email<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CEmail" id="CEmail" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Telefone</div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CTelefone" id="CTelefone" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Assunto<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><textarea class="texto textocinzaescuro" name="CAssunto" id="CAssunto" rows="2" cols="28"></textarea></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >&nbsp;</div> <div class="campoFormulario inputDeCampo" style="text-align:right;" ><input id="Cbutton" class="texto textocinzaescuro" type="submit" name="submit" value="Enviar" /></div> </form> <script type="text/javascript"> $(function() { $("#Cbutton").click(function() { if(validarForm()){ var Cnome = $("input#Cnome").val(); var CEmpresa = $("input#CEmpresa").val(); var CEmail = $("input#CEmail").val(); var CCargo = $("input#CCargo").val(); var CTelefone = $("input#CTelefone").val(); var CAssunto = $("textarea#CAssunto").val(); var dataString = 'nome='+ Cnome + '&Empresa=' + CEmpresa + '&Email=' + CEmail + '&Cargo=' + CCargo + '&Telefone=' + CTelefone + '&Assunto=' + CAssunto; //alert (dataString);return false; $.ajax({ type: "POST", url: "http://www.myserver.com/index.php/pt/envia", data: dataString, success: function() { $('#frm').remove(); $('#blocoform').append("<br />Obrigado. 
<img id='checkmark' src='http://www.myserver.com/public/images/estrutura/ok.gif' /><br />Será contactado brevemente.<br /><br /><br /><br /><br /><br />") .hide() .fadeIn(1500); } }); } return false; }); }); function validarForm(){ var error = 0; if(!validateNome(document.getElementById("Cnome"))){ error = 1 ;} if(!validateNome(document.getElementById("CEmpresa"))){ error = 1 ;} if(!validateEmail(document.getElementById("CEmail"))){ error = 1 ;} if(!validateNome(document.getElementById("CAssunto"))){ error = 1 ;} if(error == 0){ //frm.submit(); return true; }else{ alert('Preencha os campos correctamente.'); return false; } } function validateNome(fld){ if( fld.value.length == 0 ){ fld.style.backgroundColor = '#FFFFCC'; //alert('Descrição é um campo obrigatório.'); return false; }else { fld.style.background = 'White'; return true; } } function trim(s) { return s.replace(/^\s+|\s+$/, ''); } function validateEmail(fld) { var tfld = trim(fld.value); var emailFilter = /^[^@]+@[^@.]+\.[^@]*\w\w$/ ; var illegalChars= /[\(\)\<\>\,\;\:\\\"\[\]]/ ; if (fld.value == "") { fld.style.background = '#FFFFCC'; //alert('Email é um campo obrigatório.'); return false; } else if (!emailFilter.test(tfld)) { //alert('Email inválido.'); fld.style.background = '#FFFFCC'; return false; } else if (fld.value.match(illegalChars)) { fld.style.background = '#FFFFCC'; //alert('Email inválido.'); return false; } else { fld.style.background = 'White'; return true; } } </script> FUNCTION ENVIA (email sender): function envia() { $this->load->helper(array('form', 'url')); $nome = $_POST['nome']; $empresa = $_POST['Empresa']; $cargo = $_POST['Cargo']; $email = $_POST['Email']; $telefone = $_POST['Telefone']; $assunto = $_POST['Assunto']; $mensagem = " Nome:".$nome." Empresa:".$empresa." Cargo:".$cargo." Email:".$email." Telefone:".$telefone." Assunto:".$assunto.""; $headers = 'From: [email protected]' . "\r\n" . 'Reply-To: no-reply' . "\r\n" . 'X-Mailer: PHP/' . phpversion(); mail('[email protected]', $mensagem, $headers); }
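For reference: if the form is ever submitted natively (for example with JavaScript disabled), the POST body the script normally builds never reaches the controller, which would explain emails with blank fields. A minimal sketch of a server-side guard for the envia() action, using the field names from the form above, might look like this:

function envia()
{
    // a sketch: reject submissions that arrive without the required fields
    // (e.g. a native, non-JavaScript submit that carries no POST body)
    $required = array('nome', 'Empresa', 'Email', 'Assunto');
    foreach ($required as $field) {
        if (empty($_POST[$field])) {
            show_error('Preencha os campos correctamente.', 400);
            return;
        }
    }
    // ... build $mensagem and call mail() exactly as in the original code
}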

    Read the article

  • TCP RST Reset Every 5 Minutes on Windows 2003 sp2

    - by Dan
    Hey, Recently I had a web developer come to me and ask why he was receiving connection errors in his app that was accessing a sql database. So, I went through my normal trouble shooting steps to isolate or reproduce the issue. I discovered that if I connected to the database using Query Analyzer and let the connection idle for 5 minutes it would disconnect. Meaning... I would no longer be able to refresh my tables or any other object/node within the object browser in Query Analyzer. I would have to right click on the instance and refresh for it to re-establish the connection. Next I went to wireshark and ran a capture on the client pc's nic card. Sure enough it was receiving a TCP RST reset every 5 min if the connection idled longer than 5 min. I also ran a capture on the SQL Server and noticed the TCP RST reset command as well. Attached below is the capture from the client Machine. If someone could please assist... That would be great. -I checked all settings within SQL Server 2000 against another server and they all seem to be the same. -Issue does not occur if I connect to any other SQL server 2000 server. -Issue does not occur if connecting to SQL on the server itself... so only over the network. -I consulted with network team and this is the response back: There are no firewalls or proxies in between SQL Server and your desktop. The traffic flows like this: Desktop-Access Switch-Distro Switch-Core Switch-Datacenter Switch-SQL Server None of the switches have security ACL’s configured on them. Also they stated that NAT was not turned on. -Issue does not occur with SQL server Enterprise Manager. -Ran SQL Profiler at the same time and did not see anything out of the ordinary during the RST I HAVE SEARCHED HIGH AND LOW ON GOOGLE FOR A RESOLUTION FOR THIS ISSUE. NO LUCK! My questions are: What could be causing this? Wrong Sequence number? setting in a router or switch the network team may have over looked? Setting within Windows? Setting within SQL Server 2000 that I have over looked? Better way to utilize Wireshark to find more answers? RST is about 10 from the bottom. No. 
No. Time Source Destination Protocol Info
258 24.390708 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [SYN] Seq=0 Len=0 MSS=1260
259 24.401679 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
260 24.401729 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
261 24.402212 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
262 24.413335 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
285 24.466512 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
286 24.466536 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=437
289 24.478168 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=38 Ack=1740 Win=64240 Len=0
290 24.480078 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=38 Ack=1740 Win=64240 Len=385
293 24.493629 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1740 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60
294 24.504637 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=423 Ack=1800 Win=64180 Len=17
295 24.533197 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1800 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44
296 24.544098 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=440 Ack=1844 Win=64136 Len=17
297 24.544524 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1844 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58
298 24.558033 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=457 Ack=1902 Win=64078 Len=31
299 24.558493 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1902 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92
300 24.569984 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=488 Ack=1994 Win=63986 Len=70
301 24.577395 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1994 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=448
303 24.589834 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=558 Ack=2442 Win=63538 Len=64
304 24.590122 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [FIN, ACK] Seq=2442 Ack=622 Win=64914 [TCP CHECKSUM INCORRECT] Len=0
305 24.601094 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=622 Ack=2443 Win=63538 Len=0
306 24.601659 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [FIN, ACK] Seq=622 Ack=2443 Win=63538 Len=0
307 24.601686 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=2443 Ack=623 Win=64914 [TCP CHECKSUM INCORRECT] Len=0
321 25.839371 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [SYN] Seq=0 Len=0 MSS=1260
322 25.850291 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
323 25.850321 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
324 25.850660 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
325 25.861573 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
326 25.863103 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
327 25.863130 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=463
328 25.874417 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [ACK] Seq=38 Ack=1766 Win=64240 Len=0
329 25.876315 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=38 Ack=1766 Win=64240 Len=385
330 25.876905 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1766 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60
331 25.887773 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=423 Ack=1826 Win=64180 Len=17
332 25.888299 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1826 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44
333 25.899169 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=440 Ack=1870 Win=64136 Len=17
334 25.899574 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1870 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58
335 25.910618 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=457 Ack=1928 Win=64078 Len=31
336 25.911051 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1928 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92
337 25.922068 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=488 Ack=2020 Win=63986 Len=70
338 25.922500 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2020 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=34
339 25.933621 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=558 Ack=2054 Win=63952 Len=29
340 25.941165 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2054 Ack=587 Win=64949 [TCP CHECKSUM INCORRECT] Len=54
341 25.952164 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=587 Ack=2108 Win=63898 Len=17
342 25.952993 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2108 Ack=604 Win=64932 [TCP CHECKSUM INCORRECT] Len=72
343 25.963889 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=604 Ack=2180 Win=63826 Len=17
344 25.964366 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2180 Ack=621 Win=64915 [TCP CHECKSUM INCORRECT] Len=52
345 25.975253 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=621 Ack=2232 Win=63774 Len=17
346 25.975590 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2232 Ack=638 Win=64898 [TCP CHECKSUM INCORRECT] Len=32
347 25.986588 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=638 Ack=2264 Win=63742 Len=167
348 25.987262 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2264 Ack=805 Win=64731 [TCP CHECKSUM INCORRECT] Len=512
349 25.998464 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=805 Ack=2776 Win=63230 Len=89
350 25.998861 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2776 Ack=894 Win=64642 [TCP CHECKSUM INCORRECT] Len=46
351 26.009849 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=894 Ack=2822 Win=63184 Len=17
352 26.010175 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2822 Ack=911 Win=64625 [TCP CHECKSUM INCORRECT] Len=80
353 26.021220 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=911 Ack=2902 Win=63104 Len=33
354 26.022613 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2902 Ack=944 Win=64592 [TCP CHECKSUM INCORRECT] Len=498
355 26.034018 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=944 Ack=3400 Win=64240 Len=89
356 26.046501 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [SYN] Seq=0 Len=0 MSS=1260
357 26.057323 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
358 26.057355 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
359 26.057661 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
361 26.068606 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
362 26.070087 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
363 26.070113 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=485
364 26.081336 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [ACK] Seq=38 Ack=1788 Win=64240 Len=0
365 26.083330 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=38 Ack=1788 Win=64240 Len=385
366 26.083943 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1788 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=46
368 26.094921 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=423 Ack=1834 Win=64194 Len=17
369 26.095317 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1834 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=48
370 26.107553 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=440 Ack=1882 Win=64146 Len=877
371 26.241285 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
372 26.241307 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1882 Ack=1317 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
653 55.913838 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
654 55.924547 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
910 85.887176 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
911 85.898010 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1155 115.859520 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1156 115.870285 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1395 145.934403 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1396 145.945938 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1649 175.906767 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1650 175.917741 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1887 205.881080 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1888 205.891818 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2112 235.854408 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2113 235.865482 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2398 265.928342 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2399 265.939242 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2671 295.900714 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2672 295.911590 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2880 315.705029 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [RST] Seq=1317 Len=0
2973 325.975607 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2974 325.986337 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2975 326.154327 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive] 2226 > 14492 [ACK] Seq=1032 Ack=3400 Win=64240 Len=1
2976 326.154350 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive ACK] 14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
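    One detail worth noting in the trace: the session on port 14492 exchanges TCP keep-alives roughly every 30 seconds and survives, while the session on port 14493 goes quiet after t=26s and is the one that receives the RST at t=315s, about five minutes after its last data. That pattern is consistent with something (host or device) timing out idle flows. A quick way to test that theory is to hold a connection open with OS-level keep-alives enabled and see whether it still gets reset. The following is a hypothetical probe script, not from the original post: the host name is a placeholder, port 2226 is taken from the capture above, SIO_KEEPALIVE_VALS is Windows-only, and since the probe never performs a SQL login, treat the result as a rough signal rather than proof.

    # Hypothetical probe: connect to the SQL port, enable aggressive TCP
    # keep-alives, idle past the observed 5-minute window, then try a write.
    import socket
    import time

    SQL_HOST = "x.x.x.10"   # placeholder; substitute the real server address
    SQL_PORT = 2226         # port the capture shows SQL Server listening on

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((SQL_HOST, SQL_PORT))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

    # On Windows, SIO_KEEPALIVE_VALS takes (onoff, idle ms before the first
    # probe, ms between probes) and overrides the registry-wide defaults.
    if hasattr(socket, "SIO_KEEPALIVE_VALS"):
        sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 30000, 1000))

    print("Connected; idling for 6 minutes...")
    time.sleep(6 * 60)  # longer than the observed 5-minute reset window

    try:
        sock.send(b"\x00")  # a write on a reset connection raises an error
        print("Connection survived the idle period.")
    except socket.error as e:
        print("Connection was torn down while idle: %s" % e)
    finally:
        sock.close()

    Running this once with keep-alives and once without (comment out the two keep-alive lines) while capturing on both ends should show whether keep-alive traffic alone is what keeps a session from being reset.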

    Read the article

  • Sendmail Failing to Forward Locally Addressed Mail to Exchange Server

    - by DomainSoil
    I've recently gained employment as a web developer with a small company. What they neglected to tell me upon hire was that I would be administering the server along with my other daily duties. Now, truth be told, I'm not clueless when it comes to these things, but this is my first rodeo working with a rack server/console. However, I'm confident that I will be able to work through any solutions you provide.
    Short description: When a customer places an order via our (Magento CE 1.8.1.0) website, a copy of said order is supposed to be BCC'd to our sales manager. I say "supposed" because this was a working feature before the old administrator left.
    Long description: Shortly after I started, we had a server crash which required a server restart. After the restart, we noticed a few features on our site weren't working, but all of those have been cleaned up except this one. I had to create an account on our server for root access. When a customer places an order, our site's software (Magento CE 1.8.1.0) is configured to BCC the customer's order email to our sales manager. We use a Microsoft Exchange 2007 server for our mail, which is hosted on a different (in-house) machine that I don't have access to at the moment, but I'm sure I could get it if needed. As far as I can tell, all other external emails work; only INTERNAL email addresses fail to deliver. I know this because I've also tested my own internal address via our website: I set up an account with an internal email, made a test order, and never received the email. I then changed the account's email to an external Gmail address and received emails as expected.
    Let's dive into the logs and configs. For privacy/security reasons, names have been changed to the following:
    domain.com = Our top-level domain.
    email.local = Our Exchange server.
    example.com = Any other TLD.
    OLDadmin = Our previous server administrator.
    NEWadmin = Me.
    SALES@ = Our sales manager.
    Customer# = A customer.
    Here's a list of the programs and config files that are relevant to this issue:
    Server:
    > [root@www ~]# cat /etc/centos-release
    CentOS release 6.3 (final)
    Sendmail:
    > [root@www ~]# sendmail -d0.1 -bt < /dev/null
    Version 8.14.4
    ========SYSTEM IDENTITY (after readcf)========
    (short domain name) $w = domain
    (canonical domain name) $j = domain.com
    (subdomain name) $m = com
    (node name) $k = www.domain.com
    > [root@www ~]# rpm -qa | grep -i sendmail
    sendmail-cf-8.14.4-8.e16.noarch
    sendmail-8.14-4-8.e16.x86_64
    nslookup:
    > [root@www ~]# nslookup email.local
    Name: email.local
    Address: 192.168.1.50
    hostname:
    > [root@www ~]# hostname
    www.domain.com
    /etc/mail/access:
    > [root@www ~]# vi /etc/mail/access
    Connect:localhost.localdomain RELAY
    Connect:localhost RELAY
    Connect:127.0.0.1 RELAY
    /etc/mail/domaintable:
    > [root@www ~]# vi /etc/mail/domaintable
    #
    /etc/mail/local-host-names:
    > [root@www ~]# vi /etc/mail/local-host-names
    #
    /etc/mail/mailertable:
    > [root@www ~]# vi /etc/mail/mailertable
    #
    /etc/mail/sendmail.cf:
    > [root@www ~]# vi /etc/mail/sendmail.cf
    ######################################################################
    #####
    ##### DO NOT EDIT THIS FILE! Only edit the source .mc file.
    #####
    ######################################################################
    ######################################################################
    ##### $Id: cfhead.m4,v 8.120 2009/01/23 22:39:21 ca Exp $ #####
    ##### $Id: cf.m4,v 8.32 1999/02/07 07:26:14 gshapiro Exp $ #####
    ##### setup for linux #####
    ##### $Id: linux.m4,v 8.13 2000/09/17 17:30:00 gshapiro Exp $ #####
    ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ #####
    ##### $Id: no_default_msa.m4,v 8.2 2001/02/14 05:03:22 gshapiro Exp $ #####
    ##### $Id: smrsh.m4,v 8.14 1999/11/18 05:06:23 ca Exp $ #####
    ##### $Id: mailertable.m4,v 8.25 2002/06/27 23:23:57 gshapiro Exp $ #####
    ##### $Id: virtusertable.m4,v 8.23 2002/06/27 23:23:57 gshapiro Exp $ #####
    ##### $Id: redirect.m4,v 8.15 1999/08/06 01:47:36 gshapiro Exp $ #####
    ##### $Id: always_add_domain.m4,v 8.11 2000/09/12 22:00:53 ca Exp $ #####
    ##### $Id: use_cw_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ #####
    ##### $Id: use_ct_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ #####
    ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ #####
    ##### $Id: access_db.m4,v 8.27 2006/07/06 21:10:10 ca Exp $ #####
    ##### $Id: blacklist_recipients.m4,v 8.13 1999/04/02 02:25:13 gshapiro Exp $ #####
    ##### $Id: accept_unresolvable_domains.m4,v 8.10 1999/02/07 07:26:07 gshapiro Exp $ #####
    ##### $Id: masquerade_envelope.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ #####
    ##### $Id: masquerade_entire_domain.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ #####
    ##### $Id: proto.m4,v 8.741 2009/12/11 00:04:53 ca Exp $ #####
    # level 10 config file format
    V10/Berkeley
    # override file safeties - setting this option compromises system security,
    # addressing the actual file configuration problem is preferred
    # need to set this before any file actions are encountered in the cf file
    #O DontBlameSendmail=safe
    # default LDAP map specification
    # need to set this now before any LDAP maps are defined
    #O LDAPDefaultSpec=-h localhost
    ##################
    #   local info   #
    ##################
    # my LDAP cluster
    # need to set this before any LDAP lookups are done (including classes)
    #D{sendmailMTACluster}$m
    Cwlocalhost
    # file containing names of hosts for which we receive email
    Fw/etc/mail/local-host-names
    # my official domain name
    # ... define this only if sendmail cannot automatically determine your domain
    #Dj$w.Foo.COM
    # host/domain names ending with a token in class P are canonical
    CP.
    # "Smart" relay host (may be null)
    DSemail.local
    # operators that cannot be in local usernames (i.e., network indicators)
    CO @ % !
    # a class with just dot (for identifying canonical names)
    C..
    # a class with just a left bracket (for identifying domain literals)
    C[[
    # access_db acceptance class
    C{Accept}OK RELAY
    C{ResOk}OKR
    # Hosts for which relaying is permitted ($=R)
    FR-o /etc/mail/relay-domains
    # arithmetic map
    Karith arith
    # macro storage map
    Kmacro macro
    # possible values for TLS_connection in access map
    C{Tls}VERIFY ENCR
    # who I send unqualified names to if FEATURE(stickyhost) is used
    # (null means deliver locally)
    DRemail.local.
    # who gets all local email traffic
    # ($R has precedence for unqualified names if FEATURE(stickyhost) is used)
    DHemail.local.
    # dequoting map
    Kdequote dequote
    # class E: names that should be exposed as from this host, even if we masquerade
    # class L: names that should be delivered locally, even if we have a relay
    # class M: domains that should be converted to $M
    # class N: domains that should not be converted to $M
    #CL root
    C{E}root
    C{w}localhost.localdomain
    C{M}domain.com
    # who I masquerade as (null for no masquerading) (see also $=M)
    DMdomain.com
    # my name for error messages
    DnMAILER-DAEMON
    # Mailer table (overriding domains)
    Kmailertable hash -o /etc/mail/mailertable.db
    # Virtual user table (maps incoming users)
    Kvirtuser hash -o /etc/mail/virtusertable.db
    CPREDIRECT
    # Access list database (for spam stomping)
    Kaccess hash -T<TMPF> -o /etc/mail/access.db
    # Configuration version number
    DZ8.14.4
    /etc/mail/sendmail.mc:
    > [root@www ~]# vi /etc/mail/sendmail.mc
    divert(-1)dnl
    dnl #
    dnl # This is the sendmail macro config file for m4. If you make changes to
    dnl # /etc/mail/sendmail.mc, you will need to regenerate the
    dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
    dnl # installed and then performing a
    dnl #
    dnl # /etc/mail/make
    dnl #
    include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
    VERSIONID(`setup for linux')dnl
    OSTYPE(`linux')dnl
    dnl #
    dnl # Do not advertize sendmail version.
    dnl #
    dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
    dnl #
    dnl # default logging level is 9, you might want to set it higher to
    dnl # debug the configuration
    dnl #
    dnl define(`confLOG_LEVEL', `9')dnl
    dnl #
    dnl # Uncomment and edit the following line if your outgoing mail needs to
    dnl # be sent out through an external mail server:
    dnl #
    define(`SMART_HOST', `email.local')dnl
    dnl #
    define(`confDEF_USER_ID', ``8:12'')dnl
    dnl define(`confAUTO_REBUILD')dnl
    define(`confTO_CONNECT', `1m')dnl
    define(`confTRY_NULL_MX_LIST', `True')dnl
    define(`confDONT_PROBE_INTERFACES', `True')dnl
    define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
    define(`ALIAS_FILE', `/etc/aliases')dnl
    define(`STATUS_FILE', `/var/log/mail/statistics')dnl
    define(`UUCP_MAILER_MAX', `2000000')dnl
    define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
    define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
    define(`confAUTH_OPTIONS', `A')dnl
    dnl #
    dnl # The following allows relaying if the user authenticates, and disallows
    dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
    dnl #
    dnl define(`confAUTH_OPTIONS', `A p')dnl
    dnl #
    dnl # PLAIN is the preferred plaintext authentication method and used by
    dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
    dnl # use LOGIN. Other mechanisms should be used if the connection is not
    dnl # guaranteed secure.
    dnl # Please remember that saslauthd needs to be running for AUTH.
    dnl #
    dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
    dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
    dnl #
    dnl # Rudimentary information on creating certificates for sendmail TLS:
    dnl # cd /etc/pki/tls/certs; make sendmail.pem
    dnl # Complete usage:
    dnl # make -C /etc/pki/tls/certs usage
    dnl #
    dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
    dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
    dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
    dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
    dnl #
    dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
    dnl # slapd, which requires the file to be readble by group ldap
    dnl #
    dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
    dnl #
    dnl define(`confTO_QUEUEWARN', `4h')dnl
    dnl define(`confTO_QUEUERETURN', `5d')dnl
    dnl define(`confQUEUE_LA', `12')dnl
    dnl define(`confREFUSE_LA', `18')dnl
    define(`confTO_IDENT', `0')dnl
    dnl FEATURE(delay_checks)dnl
    FEATURE(`no_default_msa', `dnl')dnl
    FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
    FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
    FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
    FEATURE(redirect)dnl
    FEATURE(always_add_domain)dnl
    FEATURE(use_cw_file)dnl
    FEATURE(use_ct_file)dnl
    dnl #
    dnl # The following limits the number of processes sendmail can fork to accept
    dnl # incoming messages or process its message queues to 20.) sendmail refuses
    dnl # to accept connections once it has reached its quota of child processes.
    dnl #
    dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
    dnl #
    dnl # Limits the number of new connections per second. This caps the overhead
    dnl # incurred due to forking new sendmail processes. May be useful against
    dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
    dnl # limit would be useful but is not available as an option at this writing.)
    dnl #
    dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
    dnl #
    dnl # The -t option will retry delivery if e.g. the user runs over his quota.
    dnl #
    FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
    FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
    FEATURE(`blacklist_recipients')dnl
    EXPOSED_USER(`root')dnl
    dnl #
    dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
    dnl # the following 2 definitions and activate below in the MAILER section the
    dnl # cyrusv2 mailer.
    dnl #
    dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
    dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
    dnl #
    dnl # The following causes sendmail to only listen on the IPv4 loopback address
    dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
    dnl # address restriction to accept email from the internet or intranet.
    dnl #
    DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
    dnl #
    dnl # The following causes sendmail to additionally listen to port 587 for
    dnl # mail from MUAs that authenticate. Roaming users who can't reach their
    dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
    dnl # this useful.
    dnl #
    dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
    dnl #
    dnl # The following causes sendmail to additionally listen to port 465, but
    dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
    dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
    dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS
    dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
    dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
    dnl #
    dnl # For this to work your OpenSSL certificates must be configured.
    dnl #
    dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
    dnl #
    dnl # The following causes sendmail to additionally listen on the IPv6 loopback
    dnl # device. Remove the loopback address restriction listen to the network.
    dnl #
    dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
    dnl #
    dnl # enable both ipv6 and ipv4 in sendmail:
    dnl #
    dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
    dnl #
    dnl # We strongly recommend not accepting unresolvable domains if you want to
    dnl # protect yourself from spam. However, the laptop and users on computers
    dnl # that do not have 24x7 DNS do need this.
    dnl #
    FEATURE(`accept_unresolvable_domains')dnl
    dnl #
    dnl FEATURE(`relay_based_on_MX')dnl
    dnl #
    dnl # Also accept email sent to "localhost.localdomain" as local email.
    dnl #
    LOCAL_DOMAIN(`localhost.localdomain')dnl
    dnl #
    dnl # The following example makes mail from this host and any additional
    dnl # specified domains appear to be sent from mydomain.com
    dnl #
    MASQUERADE_AS(`domain.com')dnl
    dnl #
    dnl # masquerade not just the headers, but the envelope as well
    dnl FEATURE(masquerade_envelope)dnl
    dnl #
    dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
    dnl #
    FEATURE(masquerade_entire_domain)dnl
    dnl #
    MASQUERADE_DOMAIN(domain.com)dnl
    dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
    dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
    dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
    MAILER(smtp)dnl
    MAILER(procmail)dnl
    dnl MAILER(cyrusv2)dnl
    /etc/mail/trusted-users:
    > [root@www ~]# vi /etc/mail/trusted-users
    #
    /etc/mail/virtusertable:
    > [root@www ~]# vi /etc/mail/virtusertable
    [email protected] [email protected]
    [email protected] [email protected]
    /etc/hosts:
    > [root@www ~]# vi /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    192.168.1.50 email.local
    I've only included the "local info" part of sendmail.cf to save space. If there are any files that I've missed, please advise so I may produce them. Now that that's out of the way, let's look at some entries from /var/log/maillog. The first entry is from an order BEFORE the crash, when the site was working as expected.
    ##Order 200005374 Aug 5, 2014 7:06:38 AM##
    Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: from=OLDadmin, size=11091, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
    Aug 5 07:06:39 www sendmail[26150]: s75C6dXe026150: from=<[email protected]>, size=11257, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: [email protected],=?utf-8?B?dGhvbWFzICBHaWxsZXNwaWU=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71091, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75C6dXe026150 Message accepted for delivery)
    Aug 5 07:06:40 www sendmail[26152]: s75C6dXe026150: to=<[email protected]>,<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=161257, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)
    This next entry from maillog is from an order AFTER the crash.
    ##Order 200005375 Aug 5, 2014 9:45:25 AM##
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: from=OLDadmin, size=11344, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
    Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: <[email protected]>... User unknown
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: [email protected], ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
    Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: from=<[email protected]>, size=11500, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: to==?utf-8?B?S2VubmV0aCBCaWViZXI=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm1030022 Message accepted for delivery)
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: s75EjQ4P030021: DSN: User unknown
    Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: <[email protected]>... User unknown
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: to=OLDadmin, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42368, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
    Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: from=<>, size=12368, class=0, nrcpts=0, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: s75EjQ4Q030021: return to sender: User unknown
    Aug 5 09:45:26 www sendmail[30022]: s75EjQm5030022: from=<>, size=14845, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 09:45:26 www sendmail[30021]: s75EjQ4Q030021: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=43392, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm5030022 Message accepted for delivery)
    Aug 5 09:45:26 www sendmail[30025]: s75EjQm5030022: to=root, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=45053, dsn=2.0.0, stat=Sent
    Aug 5 09:45:27 www sendmail[30024]: s75EjQm1030022: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=131500, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)
    To add a little more, I think I've pinpointed the actual crash event.
    ##THE CRASH##
    Aug 5 09:39:46 www sendmail[3251]: restarting /usr/sbin/sendmail due to signal
    Aug 5 09:39:46 www sm-msp-queue[3260]: restarting /usr/sbin/sendmail due to signal
    Aug 5 09:39:46 www sm-msp-queue[29370]: starting daemon (8.14.4): queueing@01:00:00
    Aug 5 09:39:47 www sendmail[29372]: starting daemon (8.14.4): SMTP+queueing@01:00:00
    Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
    Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
    Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
    Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
    Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
    Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
    Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
    Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee23t029466 Message accepted for delivery)
    Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee2wh029467 Message accepted for delivery)
    Aug 5 09:40:06 www sm-msp-queue[29370]: restarting /usr/sbin/sendmail due to signal
    Aug 5 09:40:06 www sendmail[29372]: restarting /usr/sbin/sendmail due to signal
    Aug 5 09:40:06 www sm-msp-queue[29888]: starting daemon (8.14.4): queueing@01:00:00
    Aug 5 09:40:06 www sendmail[29890]: starting daemon (8.14.4): SMTP+queueing@01:00:00
    Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
    Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: s75Ee6xY029891: DSN: User unknown
    Aug 5 09:40:06 www sendmail[29891]: s75Ee6xY029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent
    Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
    Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: s75Ee6xZ029891: DSN: User unknown
    Aug 5 09:40:06 www sendmail[29891]: s75Ee6xZ029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent
    Something to note about the maillogs: before the crash, the msgid included localhost.localdomain; after the crash it has been domain.com. Thanks to all who take the time to read and look into this issue. I appreciate it and look forward to tackling this issue together.
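    An addendum that may help isolate the bounce outside of Magento: speak SMTP to the local sendmail directly and compare how it disposes of an internal versus an external recipient, while tailing /var/log/maillog. The following is a hypothetical test script, not part of the original post; the addresses are placeholders built from the name mapping above, and smtplib ships with Python.

    # Hypothetical reproduction: hand the local sendmail one internal and one
    # external recipient and print any recipients the server refuses.
    import smtplib

    sender = "test@domain.com"        # placeholder internal sender
    recipients = [
        "SALES@domain.com",           # internal address that bounces
        "someone@example.com",        # external address that works
    ]

    msg = "\r\n".join([
        "From: %s" % sender,
        "To: %s" % ", ".join(recipients),
        "Subject: sendmail relay test",
        "",
        "Testing internal vs. external recipient handling.",
    ])

    server = smtplib.SMTP("localhost", 25)
    server.set_debuglevel(1)  # prints the full SMTP dialogue to stderr
    try:
        # sendmail() returns a dict of recipients the server refused;
        # it raises SMTPRecipientsRefused only if every recipient fails
        refused = server.sendmail(sender, recipients, msg)
        for rcpt, (code, resp) in refused.items():
            print("REFUSED %s: %s %s" % (rcpt, code, resp))
    finally:
        server.quit()

    If the internal address is accepted at RCPT time and only bounces afterwards (as the DSN entries in the logs suggest), sendmail's verify mode, run on the web server as sendmail -bv SALES@domain.com, should show whether the address is being resolved as a local mailbox instead of being handed off to the smart host email.local.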

    Read the article

  • Help getting frame rate (fps) up in Python + Pygame

    - by Jordan Magnuson
    I am working on a little card-swapping world-travel game that I sort of envision as a cross between Bejeweled and the 10 Days geography board games. So far the coding has been going okay, but the frame rate is pretty bad... currently I'm getting low 20s on my Core 2 Duo. This is a problem since I'm creating the game for Intel's March developer competition, which is squarely aimed at netbooks packing underpowered Atom processors. Here's a screen from the game: www.necessarygames.com/my_games/betraveled/betraveled-fps.png
    I am very new to Python and Pygame (this is the first thing I've used them for), and am sadly lacking in formal CS training... which is to say that I think there are probably A LOT of bad practices going on in my code, and A LOT that could be optimized. If some of you older Python hands wouldn't mind taking a look at my code and seeing if you can't find any obvious areas for optimization, I would be extremely grateful. You can download the full source code here: http://www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip Compiled exe here: www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip
    One thing I am concerned about is my event manager, which I feel may have some performance holes in it, and another thing is my rendering... I'm pretty much just blitting everything to the screen all the time (see the render routines in my game_components.py below). I recently found out that you should only update the areas of the screen that have changed, but I'm still foggy on how that's accomplished, exactly... could this be a huge performance issue? (There's a dirty-rect sketch after the source listings below.) Any thoughts are much appreciated! As usual, I'm happy to "tip" you for your time and energy via PayPal. Jordan
    Here are some bits of the source:

    Main.py

    #Remote imports
    import pygame
    from pygame.locals import *

    #Local imports
    import config
    import rooms
    from event_manager import *
    from events import *

    class RoomController(object):
        """Controls which room is currently active (eg Title Screen)"""
        def __init__(self, screen, ev_manager):
            self.room = None
            self.screen = screen
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)
            self.room = self.set_room(config.room)

        def set_room(self, room_const):
            #Unregister old room from ev_manager
            if self.room:
                self.room.ev_manager.unregister_listener(self.room)
                self.room = None
            #Set new room based on const
            if room_const == config.TITLE_SCREEN:
                return rooms.TitleScreen(self.screen, self.ev_manager)
            elif room_const == config.GAME_MODE_ROOM:
                return rooms.GameModeRoom(self.screen, self.ev_manager)
            elif room_const == config.GAME_ROOM:
                return rooms.GameRoom(self.screen, self.ev_manager)
            elif room_const == config.HIGH_SCORES_ROOM:
                return rooms.HighScoresRoom(self.screen, self.ev_manager)

        def notify(self, event):
            if isinstance(event, ChangeRoomRequest):
                if event.game_mode:
                    config.game_mode = event.game_mode
                self.room = self.set_room(event.new_room)

        def render(self, surface):
            self.room.render(surface)

    #Run game
    def main():
        pygame.init()
        screen = pygame.display.set_mode(config.screen_size)
        ev_manager = EventManager()
        spinner = CPUSpinnerController(ev_manager)
        room_controller = RoomController(screen, ev_manager)
        pygame_event_controller = PyGameEventController(ev_manager)
        spinner.run()

    # this runs the main function if this script is called to run.
    # If it is imported as a module, we don't run the main function.
    if __name__ == "__main__":
        main()

    event_manager.py

    #Remote imports
    import pygame
    from pygame.locals import *

    #Local imports
    import config
    from events import *

    def debug(msg):
        print "Debug Message: " + str(msg)

    class EventManager:
        #This object is responsible for coordinating most communication
        #between the Model, View, and Controller.
        def __init__(self):
            from weakref import WeakKeyDictionary
            self.listeners = WeakKeyDictionary()
            self.eventQueue = []
            self.gui_app = None

        #----------------------------------------------------------------------
        def register_listener(self, listener):
            self.listeners[listener] = 1

        #----------------------------------------------------------------------
        def unregister_listener(self, listener):
            if listener in self.listeners:
                del self.listeners[listener]

        #----------------------------------------------------------------------
        def post(self, event):
            if isinstance(event, MouseButtonLeftEvent):
                debug(event.name)
            #NOTE: copying the list like this before iterating over it, EVERY tick, is highly inefficient,
            #but currently has to be done because of how new listeners are added to the queue while it is running
            #(eg when popping cards from a deck). Should be changed. See: http://dr0id.homepage.bluewin.ch/pygame_tutorial08.html
            #and search for "Watch the iteration"
            for listener in list(self.listeners):
                #NOTE: If the weakref has died, it will be
                #automatically removed, so we don't have
                #to worry about it.
                listener.notify(event)

    #------------------------------------------------------------------------------
    class PyGameEventController:
        """..."""
        def __init__(self, ev_manager):
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)
            self.input_freeze = False

        #----------------------------------------------------------------------
        def notify(self, incoming_event):
            if isinstance(incoming_event, UserInputFreeze):
                self.input_freeze = True
            elif isinstance(incoming_event, UserInputUnFreeze):
                self.input_freeze = False
            elif isinstance(incoming_event, TickEvent):
                #Share some time with other processes, so we don't hog the cpu
                pygame.time.wait(5)
                #Handle Pygame Events
                for event in pygame.event.get():
                    #If this event manager has an associated PGU GUI app, notify it of the event
                    if self.ev_manager.gui_app:
                        self.ev_manager.gui_app.event(event)
                    #Standard event handling for everything else
                    ev = None
                    if event.type == QUIT:
                        ev = QuitEvent()
                    elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze:
                        if event.button == 1: #Button 1
                            pos = pygame.mouse.get_pos()
                            ev = MouseButtonLeftEvent(pos)
                    elif event.type == pygame.MOUSEMOTION:
                        pos = pygame.mouse.get_pos()
                        ev = MouseMoveEvent(pos)
                    #Post event to event manager
                    if ev:
                        self.ev_manager.post(ev)

    #------------------------------------------------------------------------------
    class CPUSpinnerController:
        def __init__(self, ev_manager):
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)
            self.clock = pygame.time.Clock()
            self.cumu_time = 0
            self.keep_going = True

        #----------------------------------------------------------------------
        def run(self):
            if not self.keep_going:
                raise Exception('dead spinner')
            while self.keep_going:
                time_passed = self.clock.tick()
                fps = self.clock.get_fps()
                self.cumu_time += time_passed
                self.ev_manager.post(TickEvent(time_passed, fps))
                if self.cumu_time >= 1000:
                    self.cumu_time = 0
                    self.ev_manager.post(SecondEvent())
            pygame.quit()

        #----------------------------------------------------------------------
        def notify(self, event):
            if isinstance(event, QuitEvent):
                #this will stop the while loop from running
                self.keep_going = False

    rooms.py

    #Remote imports
    import pygame

    #Local imports
    import config
    import continents
    from game_components import *
    from my_gui import *
    from pgu import high

    class Room(object):
        def __init__(self, screen, ev_manager):
            self.screen = screen
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)

        def notify(self, event):
            if isinstance(event, TickEvent):
                pygame.display.set_caption('FPS: ' + str(int(event.fps)))
                self.render(self.screen)
                pygame.display.update()

        def get_highs_table(self):
            fname = 'high_scores.txt'
            highs_table = None
            config.all_highs = high.Highs(fname)
            if config.game_mode == config.TIME_CHALLENGE:
                if config.difficulty == config.EASY:
                    highs_table = config.all_highs['time_challenge_easy']
                if config.difficulty == config.MED_DIF:
                    highs_table = config.all_highs['time_challenge_med']
                if config.difficulty == config.HARD:
                    highs_table = config.all_highs['time_challenge_hard']
                if config.difficulty == config.SUPER:
                    highs_table = config.all_highs['time_challenge_super']
            elif config.game_mode == config.PLAN_AHEAD:
                pass
            return highs_table

    class TitleScreen(Room):
        def __init__(self, screen, ev_manager):
            Room.__init__(self, screen, ev_manager)
            self.background = pygame.image.load('assets/images/interface/background.jpg').convert()

            #Initialize
            #---------------------------------------
            self.gui_form = gui.Form()
            self.gui_app = gui.App(config.gui_theme)
            self.ev_manager.gui_app = self.gui_app
            c = gui.Container(align=0,valign=0)

            #Quit Button
            #---------------------------------------
            b = StartGameButton(ev_manager=self.ev_manager)
            c.add(b, 0, 0)
            self.gui_app.init(c)

        def render(self, surface):
            surface.blit(self.background, (0, 0))
            #GUI
            self.gui_app.paint(surface)

    class GameModeRoom(Room):
        def __init__(self, screen, ev_manager):
            Room.__init__(self, screen, ev_manager)
            self.background = pygame.image.load('assets/images/interface/background.jpg').convert()
            self.create_gui()

        #Create pgu gui elements
        def create_gui(self):
            #Setup
            #---------------------------------------
            self.gui_form = gui.Form()
            self.gui_app = gui.App(config.gui_theme)
            self.ev_manager.gui_app = self.gui_app
            c = gui.Container(align=0,valign=-1)

            #Mode Relaxed Button
            #---------------------------------------
            b = GameModeRelaxedButton(ev_manager=self.ev_manager)
            self.b = b
            print b.rect
            c.add(b, 0, 200)

            #Mode Time Challenge Button
            #---------------------------------------
            b = TimeChallengeButton(ev_manager=self.ev_manager)
            self.b = b
            print b.rect
            c.add(b, 0, 250)

            #Mode Think Ahead Button
            #---------------------------------------
            # b = PlanAheadButton(ev_manager=self.ev_manager)
            # self.b = b
            # print b.rect
            # c.add(b, 0, 300)

            #Initialize
            #---------------------------------------
            self.gui_app.init(c)

        def render(self, surface):
            surface.blit(self.background, (0, 0))
            #GUI
            self.gui_app.paint(surface)

    class GameRoom(Room):
        def __init__(self, screen, ev_manager):
            Room.__init__(self, screen, ev_manager)

            #Game mode
            #---------------------------------------
            self.new_board_timer = None
            self.game_mode = config.game_mode
            config.current_highs = self.get_highs_table()
            self.highs_dialog = None
            self.game_over = False

            #Images
            #---------------------------------------
            self.background = pygame.image.load('assets/images/interface/game screen2-1.jpg').convert()
            self.logo = pygame.image.load('assets/images/interface/logo_small.png').convert_alpha()
            self.game_over_text = pygame.image.load('assets/images/interface/text_game_over.png').convert_alpha()
            self.trip_complete_text = pygame.image.load('assets/images/interface/text_trip_complete.png').convert_alpha()
            self.zoom_game_over = None
            self.zoom_trip_complete = None
            self.fade_out = None

            #Text
            #---------------------------------------
            self.font = pygame.font.Font(config.font_sans, config.interface_font_size)

            #Create game components
            #---------------------------------------
            self.continent = self.set_continent(config.continent)
            self.board = Board(config.board_size, self.ev_manager)
            self.deck = Deck(self.ev_manager, self.continent)
            self.map = Map(self.continent)
            self.longest_trip = 0

            #Set pos of game components
            #---------------------------------------
            board_pos = (SCREEN_MARGIN[0], 109)
            self.board.set_pos(board_pos)
            map_pos = (config.screen_size[0] - self.map.size[0] - SCREEN_MARGIN[0], 57)
            self.map.set_pos(map_pos)

            #Trackers
            #---------------------------------------
            self.game_clock = Chrono(self.ev_manager)
            self.swap_counter = 0
            self.level = 0

            #Create gui
            #---------------------------------------
            self.create_gui()

            #Create initial board
            #---------------------------------------
            self.new_board = self.deck.deal_new_board(self.board)
            self.ev_manager.post(NewBoardComplete(self.new_board))

        def set_continent(self, continent_const):
            #Set continent based on const
            if continent_const == config.EUROPE:
                return continents.Europe()
            if continent_const == config.AFRICA:
                return continents.Africa()
            else:
                raise Exception('Continent constant not recognized')

        #Create pgu gui elements
        def create_gui(self):
            #Setup
            #---------------------------------------
            self.gui_form = gui.Form()
            self.gui_app = gui.App(config.gui_theme)
            self.ev_manager.gui_app = self.gui_app
            c = gui.Container(align=-1,valign=-1)

            #Timer Progress bar
            #---------------------------------------
            self.timer_bar = None
            self.time_increase = None
            self.minutes_left = None
            self.seconds_left = None
            self.timer_text = None
            if self.game_mode == config.TIME_CHALLENGE:
                self.time_increase = config.time_challenge_start_time
                self.timer_bar = gui.ProgressBar(config.time_challenge_start_time,0,config.max_time_bank,width=306)
                c.add(self.timer_bar, 172, 57)

            #Connections Progress bar
            #---------------------------------------
            self.connections_bar = None
            self.connections_bar = gui.ProgressBar(0,0,config.longest_trip_needed,width=306)
            c.add(self.connections_bar, 172, 83)

            #Quit Button
            #---------------------------------------
            b = QuitButton(ev_manager=self.ev_manager)
            c.add(b, 950, 20)

            #Generate Board Button
            #---------------------------------------
            b = GenerateBoardButton(ev_manager=self.ev_manager, room=self)
            c.add(b, 500, 20)

            #Board Size?
            #---------------------------------------
            bs = SetBoardSizeContainer(config.BOARD_LARGE, ev_manager=self.ev_manager, board=self.board)
            c.add(bs, 640, 20)

            #Fill Board?
            #---------------------------------------
            t = FillBoardCheckbox(config.fill_board, ev_manager=self.ev_manager)
            c.add(t, 740, 20)

            #Darkness?
            #---------------------------------------
            t = UseDarknessCheckbox(config.use_darkness, ev_manager=self.ev_manager)
            c.add(t, 840, 20)

            #Initialize
            #---------------------------------------
            self.gui_app.init(c)

        def advance_level(self):
            self.level += 1
            print 'Advancing to next level'
            print 'New level: ' + str(self.level)
            if self.timer_bar:
                print 'Time increase: ' + str(self.time_increase)
                self.timer_bar.value += self.time_increase
                self.time_increase = max(config.min_advance_time, int(self.time_increase * 0.9))
            self.board = self.new_board
            self.new_board = None
            self.zoom_trip_complete = None
            self.game_clock.unpause()

        def notify(self, event):
            #Tick event
            if isinstance(event, TickEvent):
                pygame.display.set_caption('FPS: ' + str(int(event.fps)))
                self.render(self.screen)
                pygame.display.update()
                #Wait to deal new board when advancing levels
                if self.zoom_trip_complete and self.zoom_trip_complete.finished:
                    self.zoom_trip_complete = None
                    self.ev_manager.post(UnfreezeCards())
                    self.new_board = self.deck.deal_new_board(self.board)
                    self.ev_manager.post(NewBoardComplete(self.new_board))
                #New high score?
                if self.zoom_game_over and self.zoom_game_over.finished and not self.highs_dialog:
                    if config.current_highs.check(self.level) != None:
                        self.zoom_game_over.visible = False
                        data = 'time:' + str(self.game_clock.time) + ',swaps:' + str(self.swap_counter)
                        self.highs_dialog = HighScoreDialog(score=self.level, data=data, ev_manager=self.ev_manager)
                        self.highs_dialog.open()
                    elif not self.fade_out:
                        self.fade_out = FadeOut(self.ev_manager, config.TITLE_SCREEN)

            #Second event
            elif isinstance(event, SecondEvent):
                if self.timer_bar:
                    if not self.game_clock.paused:
                        self.timer_bar.value -= 1
                    if self.timer_bar.value <= 0 and not self.game_over:
                        self.ev_manager.post(GameOver())
                    self.minutes_left = self.timer_bar.value / 60
                    self.seconds_left = self.timer_bar.value % 60
                    if self.seconds_left < 10:
                        leading_zero = '0'
                    else:
                        leading_zero = ''
                    self.timer_text = ''.join(['Time Left: ', str(self.minutes_left), ':', leading_zero, str(self.seconds_left)])

            #Game over
            elif isinstance(event, GameOver):
                self.game_over = True
                self.zoom_game_over = ZoomImage(self.ev_manager, self.game_over_text)

            #Trip complete event
            elif isinstance(event, TripComplete):
                print 'You did it!'
                self.game_clock.pause()
                self.zoom_trip_complete = ZoomImage(self.ev_manager, self.trip_complete_text)
                self.new_board_timer = Timer(self.ev_manager, 2)
                self.ev_manager.post(FreezeCards())
                print 'Room posted newboardcomplete'

            #Board Refresh Complete
            elif isinstance(event, BoardRefreshComplete):
                if event.board == self.board:
                    print 'Longest trip needed: ' + str(config.longest_trip_needed)
                    print 'Your longest trip: ' + str(self.board.longest_trip)
                    if self.board.longest_trip >= config.longest_trip_needed:
                        self.ev_manager.post(TripComplete())
                elif event.board == self.new_board:
                    self.advance_level()
                self.connections_bar.value = self.board.longest_trip
                self.connection_text = ' '.join(['Connections:', str(self.board.longest_trip), '/', str(config.longest_trip_needed)])

            #CardSwapComplete
            elif isinstance(event, CardSwapComplete):
                self.swap_counter += 1
            elif isinstance(event, ConfigChangeBoardSize):
                config.board_size = event.new_size
            elif isinstance(event, ConfigChangeCardSize):
                config.card_size = event.new_size
            elif isinstance(event, ConfigChangeFillBoard):
                config.fill_board = event.new_value
            elif isinstance(event, ConfigChangeDarkness):
                config.use_darkness = event.new_value

        def render(self, surface):
            #Background
            surface.blit(self.background, (0, 0))
            #Map
            self.map.render(surface)
            #Board
            self.board.render(surface)
            #Logo
            surface.blit(self.logo, (10,10))
            #Text
            connection_text = self.font.render(self.connection_text, True, BLACK)
            surface.blit(connection_text, (25, 84))
            if self.timer_text:
                timer_text = self.font.render(self.timer_text, True, BLACK)
                surface.blit(timer_text, (25, 64))
            #GUI
            self.gui_app.paint(surface)
            if self.zoom_trip_complete:
                self.zoom_trip_complete.render(surface)
            if self.zoom_game_over:
                self.zoom_game_over.render(surface)
            if self.fade_out:
                self.fade_out.render(surface)

    class HighScoresRoom(Room):
        def __init__(self, screen, ev_manager):
            Room.__init__(self, screen, ev_manager)
            self.background = pygame.image.load('assets/images/interface/background.jpg').convert()

            #Initialize
            #---------------------------------------
            self.gui_app = gui.App(config.gui_theme)
            self.ev_manager.gui_app = self.gui_app
            c = gui.Container(align=0,valign=0)

            #High Scores Table
            #---------------------------------------
            hst = HighScoresTable()
            c.add(hst, 0, 0)
            self.gui_app.init(c)

        def render(self, surface):
            surface.blit(self.background, (0, 0))
            #GUI
            self.gui_app.paint(surface)

    game_components.py

    #Remote imports
    import pygame
    from pygame.locals import *
    import random
    import operator
    from copy import copy
    from math import sqrt, floor

    #Local imports
    import config
    from events import *
    from matrix import Matrix
    from textrect import render_textrect, TextRectException
    from hyphen import hyphenator
    from textwrap2 import TextWrapper

    ##############################
    #CONSTANTS
    ##############################
    SCREEN_MARGIN = (10, 10)

    #Colors
    BLACK = (0, 0, 0)
    WHITE = (255, 255, 255)
    RED = (255, 0, 0)
    YELLOW = (255, 200, 0)

    #Directions
    LEFT = -1
    RIGHT = 1
    UP = 2
    DOWN = -2

    #Cards
    CARD_MARGIN = (10, 10)
    CARD_PADDING = (2, 2)

    #Card types
    BLANK = 0
    COUNTRY = 1
    TRANSPORT = 2

    #Transport types
    PLANE = 0
    TRAIN = 1
    CAR = 2
    SHIP = 3

    class Timer(object):
        def __init__(self, ev_manager, time_left):
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)
            self.time_left = time_left
            self.paused = False

        def __repr__(self):
            return str(self.time_left)

        def pause(self):
            self.paused = True

        def unpause(self):
            self.paused = False

        def notify(self, event):
            #Pause Event
            if isinstance(event, Pause):
                self.pause()
            #Unpause Event
            elif isinstance(event, Unpause):
                self.unpause()
            #Second Event
            elif isinstance(event, SecondEvent):
                if not self.paused:
                    self.time_left -= 1

    class Chrono(object):
        def __init__(self, ev_manager, start_time=0):
            self.ev_manager = ev_manager
            self.ev_manager.register_listener(self)
            self.time = start_time
            self.paused = False

        def __repr__(self):
            return str(self.time_left)

        def pause(self):
            self.paused = True

        def unpause(self):
            self.paused = False

        def notify(self, event):
            #Pause Event
            if isinstance(event, Pause):
                self.pause()
            #Unpause Event
            elif isinstance(event, Unpause):
                self.unpause()
            #Second Event
            elif isinstance(event, SecondEvent):
                if not self.paused:
                    self.time += 1

    class Map(object):
        def __init__(self, continent):
            self.map_image = pygame.image.load(continent.map).convert_alpha()
            self.map_text = pygame.image.load(continent.map_text).convert_alpha()
            self.pos = (0, 0)
            self.set_color()
            self.map_image = pygame.transform.smoothscale(self.map_image, config.map_size)
            self.size = self.map_image.get_size()

        def set_pos(self, pos):
            self.pos = pos

        def set_color(self):
            image_pixel_array = pygame.PixelArray(self.map_image)
            image_pixel_array.replace(config.GRAY1, config.COLOR1)
            image_pixel_array.replace(config.GRAY2, config.COLOR2)
            image_pixel_array.replace(config.GRAY3, config.COLOR3)
            image_pixel_array.replace(config.GRAY4, config.COLOR4)
            image_pixel_array.replace(config.GRAY5, config.COLOR5)
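    On the dirty-rect question raised above: the idea is to repaint only the rectangles that changed since the last frame and hand that list to pygame.display.update(), instead of re-blitting every surface and pushing the whole screen each tick. Below is a minimal, hypothetical sketch; none of it is taken from the game's source, and the surfaces and names are purely illustrative.

    # Minimal dirty-rect loop: erase the sprite's old position from a cached
    # background, draw it at the new position, and update only those rects.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    background = pygame.Surface(screen.get_size())
    background.fill((30, 30, 30))
    screen.blit(background, (0, 0))
    pygame.display.flip()  # one full-screen update up front

    ball = pygame.Surface((32, 32))
    ball.fill((200, 50, 50))
    pos = pygame.Rect(0, 224, 32, 32)

    clock = pygame.time.Clock()
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        dirty = []
        # erase: restore the background over the sprite's old position
        dirty.append(screen.blit(background, pos, pos))
        pos.x = (pos.x + 4) % 640
        # draw: blit the sprite at its new position
        dirty.append(screen.blit(ball, pos))

        pygame.display.update(dirty)  # only these rects reach the display
        clock.tick(60)  # cap the frame rate instead of spinning flat out

    pygame.quit()

    Two notes on how this maps back to the code above: pygame.sprite.RenderUpdates does the same erase/draw/collect-rects bookkeeping automatically for sprite groups, and CPUSpinnerController currently calls self.clock.tick() with no argument, so the loop runs as fast as it can; passing a target such as clock.tick(60) would bound the CPU cost the same way the sketch does.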

    Read the article

  • java.lang.IllegalAccessException during Ant jwsc webservice build

    - by KevB
    Hi. I have a large application, part of which relies on a set of 3 webservices. I'm currently in the process of writing an Ant build script to build and package the application into an EAR file. When building the web sub-project for this application I use the <jwsc> task in Ant to compile the webservices. This causes an IllegalAccessException, as outlined in the stack trace below: [jwsc] warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [jwsc] JWS: processing module weboutput [jwsc] Parsing source files [jwsc] Parsing source files [jwsc] 3 JWS files being processed for module weboutput [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\DailyRun.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\PendingRegistrationsSweep.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\RegistrationsGoLive.java Validated. [jwsc] Compiling 6 source files to C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp\_5l950r [jwsc] An exception has occurred in the compiler (1.6.0_23). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. [jwsc] java.lang.IllegalAccessError: tried to access class com.sun.tools.javac.jvm.ClassReader$AnnotationDefaultCompleter from class com.sun.tools.javac.jvm.ClassReader [jwsc] at com.sun.tools.javac.jvm.ClassReader.attachAnnotationDefault(ClassReader.java:1128) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttr(ClassReader.java:906) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttrs(ClassReader.java:1027) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMethod(ClassReader.java:1490) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClass(ClassReader.java:1586) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClassFile(ClassReader.java:1658) [jwsc] at com.sun.tools.javac.jvm.ClassReader.fillIn(ClassReader.java:1845) [jwsc] at com.sun.tools.javac.jvm.ClassReader.complete(ClassReader.java:1777) [jwsc] at com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.jvm.ClassReader.loadClass(ClassReader.java:1951) [jwsc] at com.sun.tools.javac.comp.Resolve.loadClass(Resolve.java:842) [jwsc] at com.sun.tools.javac.comp.Resolve.findIdentInPackage(Resolve.java:1011) [jwsc] at com.sun.tools.javac.comp.Attr.selectSym(Attr.java:1921) [jwsc] at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:1835) [jwsc] at com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1522) [jwsc] at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:360) [jwsc] at com.sun.tools.javac.comp.Attr.attribType(Attr.java:390) [jwsc] at com.sun.tools.javac.comp.MemberEnter.attribImportType(MemberEnter.java:681) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitImport(MemberEnter.java:545) [jwsc] at com.sun.tools.javac.tree.JCTree$JCImport.accept(JCTree.java:495) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:399) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitTopLevel(MemberEnter.java:512) [jwsc] at com.sun.tools.javac.tree.JCTree$JCCompilationUnit.accept(JCTree.java:446) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.complete(MemberEnter.java:819) [jwsc] at 
com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.comp.Enter.complete(Enter.java:464) [jwsc] at com.sun.tools.javac.comp.Enter.main(Enter.java:442) [jwsc] at com.sun.tools.javac.main.JavaCompiler.enterTrees(JavaCompiler.java:819) [jwsc] at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:727) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:353) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:279) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:270) [jwsc] at com.sun.tools.javac.Main.compile(Main.java:69) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:56) [jwsc] at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1097) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask$ExposingJavac.compile(DelegatingJavacTask.java:343) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask.compile(DelegatingJavacTask.java:286) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.javac(JwscTask.java:335) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.compile(JwsModule.java:390) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.build(JwsModule.java:262) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.execute(JwscTask.java:227) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at com.bea.workshop.cmdline.antlib.AntCallExTask.execute(AntCallExTask.java:118) [jwsc] at 
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:217) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at 
java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259) [jwsc] at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268) [jwsc] at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:299) [jwsc] at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at 
org.apache.tools.ant.Main.runBuild(Main.java:801) [jwsc] at org.apache.tools.ant.Main.startAnt(Main.java:218) [jwsc] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280) [jwsc] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109) [AntUtil.deleteDir] Deleting directory C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp_5l950r
The Ant target that uses the <jwsc> task is this:
<target name="webservice.build" depends="init,generated.root.init">
  <path id="jwsc.srcpath">
    <path path="${java.sourcepath}" />
    <pathelement path="build/assembly/.src" />
  </path>
  <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" >
    <classpath>
      <path refid="weblogic.jar.classpath" />
    </classpath>
  </taskdef>
  <property name="jwsc.module.root" value="${project.dir}/build/weboutput"/>
  <property name="jwsc.contextpath" value="irWeb"/>
  <property name="jwsc.srcpath.prop" refid="jwsc.srcpath"/>
  <path id="jwsc.classpath">
    <path refid="weblogic.jar.classpath" />
    <path refid="java.classpath" />
    <pathelement path="${java.outpath}" />
  </path>
  <jwsc destdir="${project.dir}/build" classpathref="jwsc.classpath">
    <module name="weboutput" explode="true" contextPath="${jwsc.contextpath}" >
      <jwsFileSet srcdir="${webservices.dir}" type="JAXRPC">
        <include name="**/*.java"/>
      </jwsFileSet>
      <descriptor file="${jwsc.module.root}/WEB-INF/web.xml" />
      <descriptor file="${jwsc.module.root}/WEB-INF/weblogic.xml" />
    </module>
  </jwsc>
</target>
I have no idea what could be causing the compiler to throw this error at build time. A day of Google searching turned up other instances of this error with different triggers, and the solutions for those problems didn't work for me. I also found a single report on the Oracle forums that seemed to be a carbon copy of this issue, but there were no replies. The application is written in WebLogic Workshop 10, runs on WebLogic Server 10.3, and uses Beehive / NetUI. Not sure if that would make a difference or not, though. The build scripts were automatically generated by WebLogic Workshop, with some tweaks and fixes made to other aspects of the files by myself to fix other compatibility issues. I am using Java 1.6.0_23 from Sun, and Ant 1.8.1. Any help or advice would be greatly appreciated.

    Read the article

  • Can't build pyxpcom on OS X 10.6

    - by Gj
    I've been following these instructions at https://developer.mozilla.org/en/Building_PyXPCOM but getting this: $ make make export make[2]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../src/config/nsinstall.py -L /usr/local/pyxpcom/build/xpcom/src -m 644 ../../../src/xpcom/src/PyXPCOM.h ../../dist/include make[3]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl make[4]: *** No rule to make target `_xpidlgen/py_test_component.h', needed by `export'. Stop. make[3]: *** [export] Error 2 make[2]: *** [export] Error 2 make[1]: *** [export] Error 2 make: *** [default] Error 2 Any ideas? An interesting anomaly is that despite me setting the PYTHON env variable to Python 2.6, the configure and make both seem to go after the 2.5... Thanks for any advice! PS here's the configure output: $ ../src/configure --with-libxul-sdk=/Users/me/xulrunner-sdk/ loading cache ./config.cache checking host system type... i386-apple-darwin10.3.0 checking target system type... i386-apple-darwin10.3.0 checking build system type... i386-apple-darwin10.3.0 checking for mawk... (cached) gawk checking for perl5... (cached) /opt/local/bin/perl5 checking for gcc... (cached) gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... (cached) yes checking whether gcc accepts -g... (cached) yes checking for c++... (cached) c++ checking whether the C++ compiler (c++ ) works... yes checking whether the C++ compiler (c++ ) is a cross-compiler... no checking whether we are using GNU C++... (cached) yes checking whether c++ accepts -g... (cached) yes checking for ranlib... (cached) ranlib checking for as... (cached) /usr/bin/as checking for ar... (cached) ar checking for ld... (cached) ld checking for strip... (cached) strip checking for windres... no checking whether gcc and cc understand -c and -o together... (cached) yes checking how to run the C preprocessor... (cached) gcc -E checking how to run the C++ preprocessor... (cached) c++ -E checking for a BSD compatible install... (cached) /usr/bin/install -c checking whether ln -s works... (cached) yes checking for minimum required perl version >= 5.006... 5.008009 checking for full perl installation... yes checking for /opt/local/bin/python... (cached) /opt/local/bin/python2.5 checking for doxygen... (cached) : checking for whoami... (cached) /usr/bin/whoami checking for autoconf... (cached) /opt/local/bin/autoconf checking for unzip... (cached) /usr/bin/unzip checking for zip... (cached) /usr/bin/zip checking for makedepend... (cached) /opt/local/bin/makedepend checking for xargs... (cached) /usr/bin/xargs checking for pbbuild... (cached) /usr/bin/xcodebuild checking for sdp... (cached) /usr/bin/sdp checking for gmake... (cached) /opt/local/bin/gmake checking for X... (cached) no checking whether the compiler supports -Wno-invalid-offsetof... yes checking whether ld has archive extraction flags... (cached) no checking that static assertion macros used in autoconf tests work... (cached) yes checking for 64-bit OS... yes checking for minimum required Python version >= 2.4... yes checking for -dead_strip option to ld... yes checking for ANSI C header files... (cached) yes checking for working const... 
(cached) yes checking for mode_t... (cached) yes checking for off_t... (cached) yes checking for pid_t... (cached) yes checking for size_t... (cached) yes checking for st_blksize in struct stat... (cached) yes checking for siginfo_t... (cached) yes checking for int16_t... (cached) yes checking for int32_t... (cached) yes checking for int64_t... (cached) yes checking for int64... (cached) no checking for uint... (cached) yes checking for uint_t... (cached) no checking for uint16_t... (cached) no checking for uname.domainname... (cached) no checking for uname.__domainname... (cached) no checking for usable char16_t (2 bytes, unsigned)... (cached) no checking for usable wchar_t (2 bytes, unsigned)... (cached) no checking for compiler -fshort-wchar option... (cached) yes checking for visibility(hidden) attribute... (cached) yes checking for visibility(default) attribute... (cached) yes checking for visibility pragma support... (cached) yes checking For gcc visibility bug with class-level attributes (GCC bug 26905)... (cached) yes checking For x86_64 gcc visibility bug with builtins (GCC bug 20297)... (cached) no checking for dirent.h that defines DIR... (cached) yes checking for opendir in -ldir... (cached) no checking for sys/byteorder.h... (cached) no checking for compat.h... (cached) no checking for getopt.h... (cached) yes checking for sys/bitypes.h... (cached) no checking for memory.h... (cached) yes checking for unistd.h... (cached) yes checking for gnu/libc-version.h... (cached) no checking for nl_types.h... (cached) yes checking for malloc.h... (cached) no checking for X11/XKBlib.h... (cached) yes checking for io.h... (cached) no checking for sys/statvfs.h... (cached) yes checking for sys/statfs.h... (cached) no checking for sys/vfs.h... (cached) no checking for sys/mount.h... (cached) yes checking for sys/quota.h... (cached) yes checking for mmintrin.h... (cached) yes checking for new... (cached) yes checking for sys/cdefs.h... (cached) yes checking for gethostbyname_r in -lc_r... (cached) no checking for dladdr... (cached) yes checking for socket in -lsocket... (cached) no checking whether mmap() sees write()s... yes checking whether gcc needs -traditional... (cached) no checking for 8-bit clean memcmp... (cached) yes checking for random... (cached) yes checking for strerror... (cached) yes checking for lchown... (cached) yes checking for fchmod... (cached) yes checking for snprintf... (cached) yes checking for statvfs... (cached) yes checking for memmove... (cached) yes checking for rint... (cached) yes checking for stat64... (cached) yes checking for lstat64... (cached) yes checking for truncate64... (cached) no checking for statvfs64... (cached) no checking for setbuf... (cached) yes checking for isatty... (cached) yes checking for flockfile... (cached) yes checking for getpagesize... (cached) yes checking for localtime_r... (cached) yes checking for strtok_r... (cached) yes checking for wcrtomb... (cached) yes checking for mbrtowc... (cached) yes checking for res_ninit()... (cached) no checking for gnu_get_libc_version()... (cached) no ../src/configure: line 9881: AM_LANGINFO_CODESET: command not found checking for an implementation of va_copy()... (cached) yes checking for an implementation of __va_copy()... (cached) yes checking whether va_lists can be copied by value... (cached) no checking for C++ exceptions flag... (cached) -fno-exceptions checking for gcc 3.0 ABI... (cached) yes checking for C++ "explicit" keyword... (cached) yes checking for C++ "typename" keyword... 
(cached) yes checking for modern C++ template specialization syntax support... (cached) yes checking whether partial template specialization works... (cached) yes checking whether operators must be re-defined for templates derived from templates... (cached) no checking whether we need to cast a derived template to pass as its base class... (cached) no checking whether the compiler can resolve const ambiguities for templates... (cached) yes checking whether the C++ "using" keyword can change access... (cached) yes checking whether the C++ "using" keyword resolves ambiguity... (cached) yes checking for "std::" namespace... (cached) yes checking whether standard template operator!=() is ambiguous... (cached) unambiguous checking for C++ reinterpret_cast... (cached) yes checking for C++ dynamic_cast to void*... (cached) yes checking whether C++ requires implementation of unused virtual methods... (cached) yes checking for trouble comparing to zero near std::operator!=()... (cached) no checking for LC_MESSAGES... (cached) yes checking for tar archiver... checking for gnutar... (cached) gnutar gnutar checking for wget... checking for wget... (cached) wget wget checking for valid optimization flags... yes checking for gcc -pipe support... yes checking whether compiler supports -Wno-long-long... yes checking whether C compiler supports -fprofile-generate... yes checking for correct temporary object destruction order... yes checking for correct overload resolution with const and templates... no Building Python extensions using python-2.5 from /opt/local/Library/Frameworks/Python.framework/Versions/2.5 creating ./config.status creating config/autoconf.mk creating Makefile creating xpcom/Makefile creating xpcom/src/Makefile creating xpcom/src/loader/Makefile creating xpcom/src/module/Makefile creating xpcom/components/Makefile creating xpcom/test/Makefile creating xpcom/test/test_component/Makefile creating dom/Makefile creating dom/src/Makefile creating dom/test/Makefile creating dom/test/pyxultest/Makefile creating dom/nsdom/Makefile creating dom/nsdom/test/Makefile
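One detail in the transcript above: nearly every probe, including the Python check, comes back "(cached)", so the PYTHON environment variable may be losing to the stale ./config.cache that configure loads at the start. A minimal sketch of clearing the cache and pinning the interpreter when re-running configure, in Python (the python2.6 path is an assumption, and whether this particular configure script honors the PYTHON variable is also an assumption):

import os
import subprocess

# Remove the cached probe results so configure re-detects the interpreter;
# the transcript shows "loading cache ./config.cache" at the top of the run.
if os.path.exists("config.cache"):
    os.remove("config.cache")

# Pin the interpreter explicitly (hypothetical path to a 2.6 install).
env = dict(os.environ, PYTHON="/opt/local/bin/python2.6")
subprocess.check_call(
    ["../src/configure", "--with-libxul-sdk=/Users/me/xulrunner-sdk/"],
    env=env,
)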

    Read the article

  • Trying to find USB device on iPhone with IOKit.framework

    - by HuGeek
Hi all, I'm working on a project where I need the USB port to communicate with an external device. I have been looking for examples on the net (Apple's and the /developer/IOKit/usb examples) and trying some others, but I can't even find the device. My code blocks at the place where the function looks for the next iterator (a pointer, in fact) with the function getNextIterator, which never returns a good value, so the code hangs. By the way, I am using the toolchain and added IOKit.framework to my project. All I want right now is to communicate, or do something like a ping, to someone on the USB bus!! I'm blocking in 'FindDevice'....I can't manage to enter the while loop because the variable usbDevice is always equal to 0....I have tested my code in a small Mac program and it works... Thanks. Here is my code:
#import <Foundation/Foundation.h>
#import <CoreFoundation/CoreFoundation.h>
#import <IOKit/IOKitLib.h>
#import <IOKit/IOCFPlugIn.h>
#import <IOKit/usb/IOUSBLib.h>

// Globals assumed by this snippet; their declarations were not shown in the
// post. LegoUSBVendorID and LegoUSBProductID are defined elsewhere in the
// poster's project.
static mach_port_t gMasterPort;
static io_iterator_t gRawAddedIter;
static IOUSBDeviceInterface **dev;

IOReturn ConfigureDevice(IOUSBDeviceInterface **dev)
{
    UInt8 numConfig;
    IOReturn result;
    IOUSBConfigurationDescriptorPtr configDesc;
    // Get the number of configurations
    result = (*dev)->GetNumberOfConfigurations(dev, &numConfig);
    if (!numConfig) {
        return -1;
    }
    // Get the configuration descriptor
    result = (*dev)->GetConfigurationDescriptorPtr(dev, 0, &configDesc);
    if (result) {
        NSLog(@"Couldn't get configuration descriptor for index %d (err=%08x)\n", 0, result);
        return -1;
    }
#ifdef OSX_DEBUG
    NSLog(@"Number of Configurations: %d\n", numConfig);
#endif
    // Configure the device
    result = (*dev)->SetConfiguration(dev, configDesc->bConfigurationValue);
    if (result) {
        NSLog(@"Unable to set configuration to value %d (err=%08x)\n", 0, result);
        return -1;
    }
    return kIOReturnSuccess;
}

IOReturn FindInterfaces(IOUSBDeviceInterface **dev, IOUSBInterfaceInterface **itf)
{
    IOReturn kr;
    IOUSBFindInterfaceRequest request;
    io_iterator_t iterator;
    io_service_t usbInterface;
    IOUSBInterfaceInterface **intf = NULL;
    IOCFPlugInInterface **plugInInterface = NULL;
    HRESULT res;
    SInt32 score;
    UInt8 intfClass;
    UInt8 intfSubClass;
    UInt8 intfNumEndpoints;
    int pipeRef;
    CFRunLoopSourceRef runLoopSource;

    NSLog(@"Start of FindInterfaces \n");
    request.bInterfaceClass = kIOUSBFindInterfaceDontCare;
    request.bInterfaceSubClass = kIOUSBFindInterfaceDontCare;
    request.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
    request.bAlternateSetting = kIOUSBFindInterfaceDontCare;
    kr = (*dev)->CreateInterfaceIterator(dev, &request, &iterator);
    usbInterface = IOIteratorNext(iterator);
    IOObjectRelease(iterator);
    NSLog(@"Interface found.\n");
    kr = IOCreatePlugInInterfaceForService(usbInterface, kIOUSBInterfaceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score);
    kr = IOObjectRelease(usbInterface); // done with the usbInterface object now that I have the plugin
    if ((kIOReturnSuccess != kr) || !plugInInterface) {
        NSLog(@"unable to create a plugin (%08x)\n", kr);
        return -1;
    }
    // I have the interface plugin. I need the interface interface
    res = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBInterfaceInterfaceID), (LPVOID*) &intf);
    (*plugInInterface)->Release(plugInInterface); // done with this
    if (res || !intf) {
        NSLog(@"couldn't create an IOUSBInterfaceInterface (%08x)\n", (int) res);
        return -1;
    }
    // Now open the interface. This will cause the pipes to be instantiated that are
    // associated with the endpoints defined in the interface descriptor.
    kr = (*intf)->USBInterfaceOpen(intf);
    if (kIOReturnSuccess != kr) {
        NSLog(@"unable to open interface (%08x)\n", kr);
        (void) (*intf)->Release(intf);
        return -1;
    }
    kr = (*intf)->CreateInterfaceAsyncEventSource(intf, &runLoopSource);
    if (kIOReturnSuccess != kr) {
        NSLog(@"unable to create async event source (%08x)\n", kr);
        (void) (*intf)->USBInterfaceClose(intf);
        (void) (*intf)->Release(intf);
        return -1;
    }
    CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopDefaultMode);
    if (!intf) {
        NSLog(@"Interface is NULL!\n");
    } else {
        *itf = intf;
    }
    NSLog(@"End of FindInterface \n \n");
    return kr;
}

unsigned int FindDevice(void *refCon, io_iterator_t iterator)
{
    kern_return_t kr;
    io_service_t usbDevice;
    IOCFPlugInInterface **plugInInterface = NULL;
    HRESULT result;
    SInt32 score;
    UInt16 vendor;
    UInt16 product;
    UInt16 release;
    unsigned int count = 0;

    NSLog(@"Searching Device....\n");
    while ((usbDevice = IOIteratorNext(iterator))) {
        // create intermediate plug-in
        NSLog(@"Found a device!\n");
        kr = IOCreatePlugInInterfaceForService(usbDevice, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score);
        kr = IOObjectRelease(usbDevice);
        if ((kIOReturnSuccess != kr) || !plugInInterface) {
            NSLog(@"Unable to create a plug-in (%08x)\n", kr);
            continue;
        }
        // Now create the device interface
        result = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID), (LPVOID)&dev);
        // Don't need intermediate Plug-In Interface
        (*plugInInterface)->Release(plugInInterface);
        if (result || !dev) {
            NSLog(@"Couldn't create a device interface (%08x)\n", (int)result);
            continue;
        }
        // check these values for confirmation
        kr = (*dev)->GetDeviceVendor(dev, &vendor);
        kr = (*dev)->GetDeviceProduct(dev, &product);
        //kr = (*dev)->GetDeviceReleaseNumber(dev, &release);
        //if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID) || (release != LegoUSBRelease)) {
        if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID)) {
            NSLog(@"Found unwanted device (vendor = %d != %d, product = %d != %d, release = %d)\n", vendor, kUSBVendorID, product, LegoUSBProductID, release);
            (void) (*dev)->Release(dev);
            continue;
        }
        // Open the device to change its state
        kr = (*dev)->USBDeviceOpen(dev);
        if (kr == kIOReturnSuccess) {
            count++;
        } else {
            NSLog(@"Unable to open device: %08x\n", kr);
            (void) (*dev)->Release(dev);
            continue;
        }
        // Configure device
        kr = ConfigureDevice(dev);
        if (kr != kIOReturnSuccess) {
            NSLog(@"Unable to configure device: %08x\n", kr);
            (void) (*dev)->USBDeviceClose(dev);
            (void) (*dev)->Release(dev);
            continue;
        }
        break;
    }
    return count;
}

// USB rcx Init
IOUSBInterfaceInterface** osx_usb_rcx_init (void)
{
    CFMutableDictionaryRef matchingDict;
    kern_return_t result;
    IOUSBInterfaceInterface **intf = NULL;
    unsigned int device_count = 0;

    // Create master handler
    result = IOMasterPort(MACH_PORT_NULL, &gMasterPort);
    if (result || !gMasterPort) {
        NSLog(@"ERR: Couldn't create master I/O Kit port(%08x)\n", result);
        return NULL;
    } else {
        NSLog(@"Created Master Port.\n");
        NSLog(@"Master port 0x%08X \n \n", gMasterPort);
    }
    // Set up the matching dictionary for class IOUSBDevice and its subclasses
    matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        NSLog(@"Couldn't create a USB matching dictionary \n");
        mach_port_deallocate(mach_task_self(), gMasterPort);
        return NULL;
    } else {
        NSLog(@"USB matching dictionary : %08X \n", matchingDict);
    }
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBVendorID));
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBProductID));
    result = IOServiceGetMatchingServices(gMasterPort, matchingDict, &gRawAddedIter);
    matchingDict = 0; // this was consumed by the above call
    // Iterate over matching devices to access already present devices
    NSLog(@"RawAddedIter : 0x%08X \n", &gRawAddedIter);
    device_count = FindDevice(NULL, gRawAddedIter);
    if (device_count == 1) {
        result = FindInterfaces(dev, &intf);
        if (kIOReturnSuccess != result) {
            NSLog(@"unable to find interfaces on device: %08x\n", result);
            (*dev)->USBDeviceClose(dev);
            (*dev)->Release(dev);
            return NULL;
        }
        // osx_usb_rcx_wakeup(intf);
        return intf;
    } else if (device_count > 1) {
        NSLog(@"too many matching devices (%d) !\n", device_count);
    } else {
        NSLog(@"no matching devices found\n");
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    int returnCode;
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Start of program \n \n");
    osx_usb_rcx_init();
    NSLog(@"End of program \n \n");
    return 0;
    // returnCode = UIApplicationMain(argc, argv, @"Untitled1App", @"Untitled1App");
    // [pool release];
    // return returnCode;
}
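Before debugging the IOKit version, it can help to confirm on a desktop machine that the device enumerates at all and that the vendor/product IDs are right. The sketch below shows the same match, open, and configure flow in Python using the third-party pyusb library; the ID values are placeholders, not the poster's real LEGO IDs:

import usb.core

# Placeholder IDs -- substitute the device's real vendor/product IDs.
VENDOR_ID = 0x1234
PRODUCT_ID = 0x5678

# Equivalent of the IOServiceMatching + kUSBVendorID/kUSBProductID dictionary:
# find the first device whose descriptor matches both IDs.
dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise SystemExit("no matching device found")

# Equivalent of USBDeviceOpen + SetConfiguration: activate the first
# (usually only) configuration so the interfaces and pipes become usable.
dev.set_configuration()
print("opened device: vendor=0x%04x product=0x%04x" % (dev.idVendor, dev.idProduct))

If this also reports no matching device, the problem is likely the IDs or the cable/bus rather than the IOKit code itself.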

    Read the article

  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want.
Signs that Your Computer is Poorly Organized
If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs:
- Your Desktop has over 40 icons on it
- “My Documents” contains over 300 files and 60 folders, including MP3s and digital photos
- You use Windows’ built-in search facility whenever you need to find a file
- You can’t find programs in the out-of-control list of programs in your Start Menu
- You save all your Word documents in one folder, all your spreadsheets in a second folder, etc
- Any given file that you’re looking for may be in any one of four different sets of folders
But before we start, here are some quick notes:
- We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them
- The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized.
- Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years’ experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it. At the end of the article we’ll be asking you, the reader, for your own organization tips.
Why Bother Organizing At All?
For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software:
- Find files manually.  Often it’s not convenient, speedy or even possible to utilize your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes it’s just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results.
- Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all your letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t.
- Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer?
- Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything?
- Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder.
Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable.
Tips on Getting Organized
Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized.
Tip #1.  Choose Your Organization System Carefully
The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions:
- You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, “My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files.
- You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:
- Mark is for files created by me
- VC is for files created by my company (Virtual Creations)
- Others is for files created by my friends and family
- Data is the rest of the world
Also, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below). Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder! Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points). Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system.
Tip #2.  When You Decide on Your System, Stick to It!
There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which!
Tip #3.  Choose the Root Folder of Your Structure Carefully
Every data file (document, photo, music file, etc) that you create, own or is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about! Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes to migrating your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks). Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed on – on all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below). If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this:
- When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder!
- If you ever decide to trade in your computer for a new one, you know exactly which files to migrate
- You will always know where to begin a search for any file
- If you synchronize files with other computers, it makes your synchronization routines very simple.   It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below).
Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, then the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:
Tip #4.  Use Sub-Folders
This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis.
Tip #5.  Don’t be Shy About Depth
Create as many levels of sub-folders as you need.  Don’t be scared to do so.
Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it.
Tip #6.  Move the Standard User Folders into Your Own Folder Structure
Most operating systems, including Windows, create a set of standard folders for each of their users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark):
Tip #7.  Name Files and Folders Intelligently
This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name.
Tip #8.  Watch Out for Long Filenames
Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls and live in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename!
Tip #9.  Use Shortcuts!  Everywhere!
This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users:
- The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file it back to its “correct” location every time you’d finished working on it.  Most unsatisfactory.
- A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr!
Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons:
- The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway)
- The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case)
You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location. So your collection of most often-opened files can – and should – become a collection of shortcuts! If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer:
- The Start Menu (and all the programs that live within it)
- The Quick Launch bar (or the Superbar in Windows 7)
- The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7)
- Your Internet Explorer Favorites or Firefox Bookmarks
Each item in each of these areas is a shortcut!  Each of those areas exists for one purpose only:  For convenience – to provide you with a collection of the files and folders you access most often. It should be easy to see by now that shortcuts are designed for one single purpose:  To make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents. Shortcuts allow us to invent a golden rule of file and folder organization: “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!) There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.” So how do we create these massively useful shortcuts?  There are two main ways:
- “Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy):  Then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut:
- Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up: Select Create shortcuts here.
Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (Windows XP) or Budget Detail – Shortcut.doc (Windows 7).   If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this). And of course, you can create shortcuts to folders too, not just to files! Bottom line: Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location.
Tip #10.  Separate Application Files from Data Files
Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word Documents, digital photos, emails or playlists). Software gets installed, uninstalled and upgraded all the time.
Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance.  Whereas the files you have created with that software are, by definition, important.  It’s a good rule to always separate unimportant files from important files. So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it.
Tip #11.  Organize Files Based on Purpose, Not on File Type
If you have, for example, a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file-type, but this should be more of a coincidence and less of a design feature of your organization system.
Tip #12.  Maintain the Same Folder Structure on All Your Computers
In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this:
- There’s less to remember.  No matter where you are, you always know where to look for your files
- If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple
- Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer
- Ditto for linked files (e.g. Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact file locations of other files.
This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working!
Tip #13.  Create an “Inbox” Folder
Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one! Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure. You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful:  If you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there. So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop:  the folder then becomes supremely easy to find in Windows Explorer: You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc).
Tip #14.  Ensure You have Only One “Inbox” Folder
Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder. Here are some tips to help ensure you only have one Inbox:
- Set the default “save” location of all your programs to this folder.
- Set the default “download” location for your browser to this folder.
- If this folder is not your desktop (recommended) then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like: (the Inbox folder is in the bottom-right corner)
Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder
This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling like you’re overwhelmed:  You’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), then you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel. So, here’s what you can do:
- Visit your Inbox/to-do folder regularly (at least five times per day).
- Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately.
- Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities
- If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place.
Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects
As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut. Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts.
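The thresholds in tip #15 are easy to automate as a quick check. Below is a minimal sketch in Python (the Inbox path is hypothetical; substitute your own) that flags an overflowing Inbox and lists files that have sat there longer than six weeks:

import os
import time

INBOX = r"C:\Files\Inbox"   # hypothetical path -- use your own Inbox folder
MAX_ITEMS = 30              # tip #15's "around 30 files" threshold
MAX_AGE_DAYS = 42           # six weeks

entries = [os.path.join(INBOX, name) for name in os.listdir(INBOX)]
if len(entries) > MAX_ITEMS:
    print("Inbox is overflowing: %d items (threshold %d)" % (len(entries), MAX_ITEMS))

now = time.time()
for path in entries:
    age_days = (now - os.path.getmtime(path)) / 86400
    if age_days > MAX_AGE_DAYS:
        # Per tip #15: admit you won't process it, and move it to its final resting place.
        print("stale for %.0f days: %s" % (age_days, path))

Run it on a schedule, or whenever the folder starts to feel out of control.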
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure
Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa). Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare Gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar). By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this program’s capabilities, see our dedicated article).  These aliases are called directory symbolic links (and used to be known as junctions).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else. For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that was the case you could create C:\Files as a directory symbolic link – a link to D:, as follows:
mklink /d c:\files d:\
Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows:
mklink /d c:\files\media\movies d:\
(Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter)
Tip #18. Customize Your Folder Icons
This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet): To learn how to change your folder icons, please refer to our dedicated article on the subject.
Tip #19.  Tidy Your Start Menu
The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run.  Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned start menu items. Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.
Below is an example of one such structure:

In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc.

In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc).

With the Windows Start Menu (all versions of Windows), Microsoft decided that there should be two parallel folder structures to store your Start Menu shortcuts: one for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures is often redundant: if you are the only user of the computer, the duplication serves no purpose at all.  Even if several users regularly log into the computer, most of your installed software will need to be made available to all of them, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area.

To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself).  The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  Click on the one you want to organize.

A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it.

Note:  When you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows.

Tip #20.  Keep Your Start Menu Tidy

Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure.

So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software:

- Check whether the software was installed into the “just you” area of the Start Menu or the “all users” area, and then move it to the correct area.
- Remove all the unnecessary icons – the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc.
- Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word.
- Move the icon(s) into the correct folder based on your Start Menu organizational structure.

And don’t forget: when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.
Tip #21.  Tidy C:\

The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess.

There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else.

In an ideal world, your C:\ folder should look like this (on Windows 7):

Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and almost never need to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder.

Tip #22.  Tidy Your Desktop

The Desktop is probably the most abused part of a Windows computer (from an organization point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons to oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why…

Application icons (Word, Internet Explorer, etc) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop…

You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder.  In an ideal world, it might look like this:

Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner

When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones the transients are.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items.

Tip #24.  Synchronize

If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.
However, if the computers are not always on the same network, then you will at some point need to copy files between them.  For files that need to permanently live on both computers, the ideal way to do this is to synchronize the files, as opposed to simply copying them.

We only have room here to write a brief summary of synchronization, not a full article.  In short, there are several different types of synchronization:

- Where the contents of one folder are accessible anywhere, such as with Dropbox
- Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh
- Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks.  This only works when both computers are on the same local network, at least temporarily.

A great advantage of synchronization solutions is that once you’ve got one configured the way you want it, the sync process happens automatically, every time.  Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be.

If you maintain the same file and folder structure on both computers, then you can also sync files that depend upon the correct location of other files – shortcuts, playlists, and office documents that link to other office documents – and the synchronized files will still work on the other computer!

Tip #25.  Hide Files You Never Need to See

If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there).  This is a good thing – it allows you to determine if there are files out of place with a quick glance.  Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive.  If such files never need to be opened by you, then a good idea is to simply hide them.  Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all!

To hide a file, simply right-click on it, choose Properties, and tick the Hidden tick-box.

Tip #26.  Keep Every Setup File

These days most software is downloaded from the Internet.  Whenever you download a piece of software, keep it.  You never know when you’ll need to reinstall it.  Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates.  See tip #33 below for a full description of the excellence of organizing your setup files.

Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders

Some of the folders in your organizational structure will contain only files.  Others will contain only sub-folders.  And you will also have some folders that contain both files and sub-folders.  You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder.  It’s not always possible, of course – you’ll always have some of these folders, but see if you can avoid it.
One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them.

Tip #28.  Starting a Filename with an Underscore Brings it to the Top of a List

Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore “_”, then it will appear at the top of the list of files/folders.  The screenshot below is an example of this.  Each folder in the list contains a set of digital photos.  The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder:

Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks

Have you got a pile of CD-ROMs stacked on a shelf in your office?  Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time?  In the meantime, have you upgraded your computer and now have 500 gigabytes of space you don’t know what to do with?  If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure?

So what are you waiting for?  Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000-gigabyte external hard drive!

Useful Folders to Create

This next section suggests some useful folders that you might want to create within your folder structure.  I’ve personally found them to be indispensable.

The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly.  For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder.  You might want to locate the shortcuts:

- On your Desktop
- In your “Quick Launch” area (or pinned to your Windows 7 Superbar)
- In your Windows Explorer “Favorite Links” area

Tip #30.  Create an “Inbox” (“To-Do”) Folder

This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here.  This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it also may contain files that you have yet to process.  In effect, it becomes a sort of “to-do list”.  It doesn’t have to be called “Inbox” – you can call it whatever you want.

Tip #31.  Create a Folder where Your Current Projects are Collected

Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on.  You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area:

Tip #32.  Create a Folder for Files and Folders that You Regularly Open

You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts, or a favorite playlist.  These are not necessarily “current projects”; rather, they’re simply files that you always find yourself opening.  Typically such files would be located on your desktop (or even better, shortcuts to those files).  Why not collect all such shortcuts together and put them in their own special folder?
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient.  Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar.  See tip #37 below for a full explanation of the power of the Quick Launch bar.

Tip #33.  Create a “Set-ups” Folder

A typical computer has dozens of applications installed on it.  For each piece of software, there are often many different pieces of information you need to keep track of, including:

- The original installation setup file(s).  This can be anything from a simple 100KB setup.exe file you downloaded from a website, all the way up to a 4GB ISO file that you copied from a DVD-ROM that you purchased.
- The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help)
- The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version)
- The serial number
- Your proof-of-purchase documentation
- Any other template files, plug-ins, themes, etc that also need to get installed

For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder.  The folder can be the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name).  Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”.

If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to further categorize them.  My own categorization structure is based on “platform” (operating system).  Each of the last seven folders represents one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves.  _Hardware contains ROMs for hardware I own, such as routers.

Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years.  An example of a typical application folder looks like this:

Tip #34.  Have a “Settings” Folder

We all know that our documents are important.  So are our photos and music files.  We save all of these files into folders, and then locate them afterwards and double-click on them to open them.  But there are many files that are important to us that can’t be saved into folders, and then searched for and double-clicked later on.  These files certainly contain important information that we need, but are often created internally by an application, and saved wherever that application feels is appropriate.

A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth.  Another example would be the collection of Bookmarks that Firefox stores on your behalf.  And yet another example would be the customized settings and configuration files of all our software.  Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings.

Imagine if you lost all of the above files!  And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure or their computer was lost or stolen, their backup files would not include some of the most vital files they owned.  Also, when migrating to a new computer, it’s vital to ensure that these files make the journey.

It can be a very useful idea to create yourself a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open.  Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer!

So how do we get our important files into this folder?  Well, we have a few options:

- Some programs (such as Outlook and its PST files) allow you to place these files wherever you want.  If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook).
- Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s).  Simply move the file into your Settings folder and adjust the registry key to refer to the new location.
- Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate.  When faced with programs like these, you have three choices: (1) you can ignore those files, (2) you can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder.  All you then have to do is to remember to run your sync software periodically (perhaps just before you run your backup software!).

There are some other things you may decide to locate inside this new “Settings” folder:

- Exports of registry keys (from the many applications that store their configurations in the Registry).  This is useful for backup purposes or for migrating to a new computer.
- Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer).
- Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts).  In other words, you’d want to create shortcuts to half the pages on the How-To Geek website!

Here’s an example of a “Settings” folder:

Windows Features that Help with Organization

This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized.

Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders

Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in the “Favorite Links” area on the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7).  Some ideas for folders you might want to add there include:

- Your “Inbox” folder (or whatever you’ve called it) – most important!
- The base of your filing structure (e.g. C:\Files)
- A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders)
- A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder)

Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there!

Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes

Consider the screenshot below: the highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder location you want, allowing instant access to any part of your organizational structure.

Note:  These File/Open and File/Save boxes have been superseded by new versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications.

The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program.  If you do, open it up and navigate to:

User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog

If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry.  Navigate to:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar

It should then be easy to make the desired changes.  Log off and log on again to allow the changes to take effect.

Tip #37.  Use the Quick Launch Bar as an Application and File Launcher

That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for.  Most people simply have half a dozen icons in it, and use it to start just those programs.  But it can actually be used to instantly access just about anything in your filing system.  For complete instructions on how to set this up, visit our dedicated article on this topic.

Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar

This is only necessary in Windows Vista and Windows XP.  The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default.

Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows.  Anyone who considers themselves serious about being organized needs instant access to this program at any time.  A great place to create a shortcut to this program is in the Windows XP and Windows Vista “Quick Launch” bar.  To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy).

Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon

If you’re on Windows 7, your Superbar will include a Windows Explorer icon.  Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder.  Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in this folder every time you launch Windows Explorer.

To change this default/starting folder location, first right-click the Explorer icon in the Superbar, then right-click the Windows Explorer item that pops up and choose Properties.  Then, in the Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in.
For example:

%windir%\explorer.exe C:\Files

If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness:

%windir%\explorer.exe shell:desktop\Inbox

Then click OK and test it out.

Tip #40.  Ummmmm…. No, that’s it.  I can’t think of another one.  That’s all of the tips I can come up with.  I only created this one because 40 is such a nice round number…

Case Study – An Organized PC

To finish off the article, I have included a few screenshots of my (main) computer (running Vista).  The aim here is twofold:

- To give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and
- To offer some ideas about folders and structure that you may want to steal to use on your own PC.

Let’s start with the C: drive itself.  Very minimal.  All my files are contained within C:\Files.  I’ll confine the rest of the case study to this folder, which contains the following:

- Mark:  My personal files
- VC:  My business (Virtual Creations, Australia)
- Others:  Files created by friends and family
- Data:  Files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net)
- Settings:  Described above in tip #34

The Data folder contains the following sub-folders:

- Audio:  Radio plays, audio books, podcasts, etc
- Development:  Programmer and developer resources, sample source code, etc (see below)
- Humour:  Jokes, funnies (those emails that we all receive)
- Movies:  Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc
- Music:  (see below)
- Setups:  Installation files for software (explained in full in tip #33)
- System:  (see below)
- TV:  Downloaded TV shows
- Writings:  Books, instruction manuals, etc (see below)

The Music folder contains the following sub-folders:

- Album covers:  JPEG scans
- Guitar tabs:  Text files of guitar sheet music
- Lists:  e.g. “Top 1000 songs of all time”
- Lyrics:  Text files
- MIDI:  Electronic music files
- MP3 (representing 99% of the Music folder):  MP3s, either ripped from CDs or downloaded, sorted by artist/album name
- Music Video:  Video clips
- Sheet Music:  usually PDFs

The Data\Writings and Data\Development folders contain sub-folders that are all pretty self-explanatory (if you’re a geek), and the Data\System folder contains themes, plug-ins and other downloadable program-specific resources.

The Mark folder contains the following sub-folders:

- From Others:  Usually letters that other people (friends, family, etc) have written to me
- For Others:  Letters and other things I have created for other people
- Green Book:  None of your business
- Playlists:  M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own)
- Writing:  Fiction, philosophy and other musings of mine
- Mark Docs:  Shortcut to C:\Users\Mark
- Settings:  Shortcut to C:\Files\Settings\Mark

The Others folder and the VC folder (Virtual Creations, my business – I develop websites) are, once again, pretty self-explanatory.

Conclusion

These tips have saved my sanity and helped keep me a productive geek, but what about you?  What tips and tricks do you have to keep your files organized?  Please share them with us in the comments.
Come on, don’t be shy…

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
Search engine optimization (SEO) is important for any publicly facing web-site.  A large percentage of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site.

This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also work with all versions of ASP.NET (and even with non-ASP.NET content).

[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

Measuring the SEO of your website with the Microsoft SEO Toolkit

A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further.  Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:

Search Relevancy and URL Splitting

Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are:

- How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
- The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.

One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt you in both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Ensuring external sites can only link to you one way sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it.

4 Really Common SEO Problems Your Sites Might Have

Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens, external sites linking to yours will end up splitting their page links across multiple URLs – and as a result cause you to have a lower page ranking with search engines than you deserve.

SEO Problem #1: Default Document

IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
This is convenient – but means that by default this content is available via two different publicly exposed URLs (which is bad).  For example:

http://scottgu.com/
http://scottgu.com/default.aspx

SEO Problem #2: Different URL Casings

Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

http://scottgu.com/Albums.aspx
http://scottgu.com/albums.aspx

SEO Problem #3: Trailing Slashes

Consider the below two URLs – they might look the same at first, but they are subtly different.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

http://scottgu.com
http://scottgu.com/

SEO Problem #4: Canonical Host Names

Sometimes sites are set up to respond both to a leading “www” hostname prefix and to the bare hostname itself.  This causes search engines to treat the URLs as different and split search ranking:

http://scottgu.com/albums.aspx/
http://www.scottgu.com/albums.aspx/

How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site.

The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine.

Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool.  Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site.  Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

Scenario 1: Handling Default Document Scenarios

One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example:

http://scottgu.com/
http://scottgu.com/default.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.
This will cause the below dialog to display.  We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below.  Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so:

Step 1: Name the Rule

Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule.

Step 2: Setup the Regular Expression that Matches this Rule

Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.  Don’t worry if you aren’t good with regular expressions – I suck at them too.  The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule:

(.*?)/?Default\.aspx$

This pattern will match any URL string that ends with Default.aspx.  The "(.*?)" matches any preceding character zero or more times.  The "/?" part says to match the slash symbol zero or one times.  The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx).  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.

One nice feature built into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring.  Above I’ve added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.

Step 3: Setup a Permanent Redirect Action

We’ll then setup an action to occur when our regular expression pattern matches the incoming URL.  In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it.  I’ve also set the “Redirect URL” property to be:

{R:1}/

This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path – minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/.

The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index.  In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.
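If you want to convince yourself of how the pattern and the {R:1} back-reference behave before applying the rule, the same regex can be exercised from a scratch C# console program – a quick stand-in for the “Test pattern” dialog.  This is just my own throwaway sketch, not part of the extension:

using System;
using System.Text.RegularExpressions;

class PatternScratchTest
{
    static void Main()
    {
        // The same pattern as the rule; IgnoreCase mirrors the "ignore case" checkbox.
        var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

        foreach (var url in new[] { "default.aspx", "products/Default.aspx", "products/list.aspx" })
        {
            var match = pattern.Match(url);
            if (match.Success)
                // {R:1} in the rule corresponds to Groups[1] here.
                Console.WriteLine("{0} -> redirect to \"{1}/\"", url, match.Groups[1].Value);
            else
                Console.WriteLine("{0} -> no redirect", url);
        }
    }
}

The first two URLs match and produce “/” and “products/” as redirect targets; the third falls through untouched – exactly the behavior we want from the rule.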
Step 4: Apply and Save the Rule

Our final step is to click the “Apply” button in the top right-hand corner of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Default Document" stopProcessing="true">
          <match url="(.*?)/?Default\.aspx$" />
          <action type="Redirect" url="{R:1}/" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

Step 5: Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site:

http://scottgu.com/
http://scottgu.com/default.aspx

Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

Scenario 2: Different URL Casing

Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

http://scottgu.com/Albums.aspx
http://scottgu.com/albums.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again.  Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog, which asks us if we want to create a rule that enforces the use of lowercase letters in URLs:

When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically sends users to a lower-case version of the URL.  We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.

Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above.  We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog.

Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

Note:  If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add condition clauses for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add.

When I click the “ok” button above and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Default Document" stopProcessing="true">
          <match url="(.*?)/?Default\.aspx$" />
          <action type="Redirect" url="{R:1}/" />
        </rule>
        <rule name="Lower Case URLs" stopProcessing="true">
          <match url="[A-Z]" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
          </conditions>
          <action type="Redirect" url="{ToLower:{URL}}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site:

http://scottgu.com/Albums.aspx
http://scottgu.com/albums.aspx

Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.

Scenario 3: Trailing Slashes

Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

http://scottgu.com
http://scottgu.com/

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again.  The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.
When we select it and click the “ok” button we’ll see the following dialog, which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present.  Like within our previous lower-casing rewrite rule, we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect from happening for those URLs.

When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL does not correspond to a physical directory or file on disk.  This will save the following additional rule to our web.config file:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Default Document" stopProcessing="true">
          <match url="(.*?)/?Default\.aspx$" />
          <action type="Redirect" url="{R:1}/" />
        </rule>
        <rule name="Lower Case URLs" stopProcessing="true">
          <match url="[A-Z]" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
          </conditions>
          <action type="Redirect" url="{ToLower:{URL}}" />
        </rule>
        <rule name="Trailing Slash" stopProcessing="true">
          <match url="(.*[^/])$" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
          </conditions>
          <action type="Redirect" url="{R:1}/" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site:

http://scottgu.com
http://scottgu.com/

Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

Scenario 4: Canonical Host Names

The final SEO problem I discussed earlier covers scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search ranking:

http://www.scottgu.com/albums.aspx
http://scottgu.com/albums.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again.  The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.
When we select it and click the “ok” button we’ll see the following dialog, which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL.  Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the request arrives with any other host name (such as the leading “www” prefix).  This will save the following additional rule to our web.config file:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Canonical Hostname">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
          </conditions>
          <action type="Redirect" url="http://scottgu.com/{R:1}" />
        </rule>
        <rule name="Default Document" stopProcessing="true">
          <match url="(.*?)/?Default\.aspx$" />
          <action type="Redirect" url="{R:1}/" />
        </rule>
        <rule name="Lower Case URLs" stopProcessing="true">
          <match url="[A-Z]" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
          </conditions>
          <action type="Redirect" url="{ToLower:{URL}}" />
        </rule>
        <rule name="Trailing Slash" stopProcessing="true">
          <match url="(.*[^/])$" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
          </conditions>
          <action type="Redirect" url="{R:1}/" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site:

http://www.scottgu.com/albums.aspx
http://scottgu.com/albums.aspx

Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

4 Simple Rules for Improved SEO

The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it.
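Before calling it done, it’s worth requesting the old URLs once and confirming the 301 responses yourself.  Here’s a minimal C# sketch using HttpWebRequest – my own quick test harness, not part of the URL Rewrite Extension (substitute your own site’s URLs):

using System;
using System.Net;

class RedirectCheck
{
    static void Main()
    {
        // URLs that should now permanently redirect to their canonical forms.
        var urls = new[]
        {
            "http://scottgu.com/default.aspx",
            "http://scottgu.com/Albums.aspx",
            "http://www.scottgu.com/albums.aspx"
        };

        foreach (var url in urls)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.AllowAutoRedirect = false; // we want to see the redirect itself, not follow it

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("{0} -> {1} {2}",
                    url, (int)response.StatusCode, response.Headers["Location"]);
            }
        }
    }
}

Each line should print a 301 status plus the canonical Location header; anything else means one of the rules isn’t firing.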
Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double-clicking the URL Rewrite icon within the IIS 7.x admin tool – it will list all the active rules for your web-site or application.  Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further.

Summary

Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today.

New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code.

The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO – as well.  I’ll be covering these additional capabilities more in future blog posts.

Hope this helps,

Scott

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we’re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you’re familiar with phase 1 in some way. And I guess you’re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What’s that supposed to mean?” you might already have thought. Here’s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what’s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it’s “design”, not “coding”.

And here is what functional design is not: it’s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It’s equally basic. It’s what code needs to do in order to deliver some functionality or quality. Logic is the part that does what needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it’s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

But calling your own function is not logic. It’s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn’t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That’s no small feat, however. The value of evolvable code can hardly be overestimated. That’s why, to me, functional design is so important. It’s at the core of software development.

To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It’s just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let’s assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args) {
    var input = Accept_string_list(args);
    var output = Deduplicate(input);
    Present_deduplicated_string_list(output);
}

Instead of one big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

private static string Accept_string_list(string[] args) {
    return args[0];
}

private static void Present_deduplicated_string_list(string[] output) {
    var line = string.Join(", ", output);
    Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now, after this refinement, the implementation of each step is easy again:

private static string[] Parse_string_list(string input) {
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string,object> Compile_unique_strings(string[] strings) {
    return strings.Aggregate(
        new Dictionary<string, object>(),
        (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string,object> dict) {
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args) {
    var input = Accept_string_list(args);
    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);
    Present_deduplicated_string_list(output);
}

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Don't get me wrong: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and it leads to cleaner code right away.
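To make that last point concrete, here is what driving one of the small processing steps from the de-duplication example with a test might look like. This is just a minimal sketch: NUnit is an assumption (any test framework will do), and it assumes Parse_string_list() has been made visible to the test code:

using NUnit.Framework;

[TestFixture]
public class Parse_string_list_tests {
    [Test]
    public void Splits_on_commas_and_trims_whitespace() {
        // Assumes Parse_string_list() is reachable from the test,
        // e.g. declared internal on the Program class instead of private.
        var words = Program.Parse_string_list("a, b ,c");
        CollectionAssert.AreEqual(new[] { "a", "b", "c" }, words);
    }
}

Because the step is so small, the test is trivial to write - which is exactly the point of doing the functional design first.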
For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than the constants or other trivial stuff TDD development often is started with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, a functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview.

But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty-gritty details. It just postpones tackling them. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks of functional design, processes can be depicted very easily. Here's the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Neither cell cares where the ACh is going or where it's coming from. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology.

Never mind labeling the functional units with nouns. That's OK in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much: it makes them composable. Which is also the reason nature works according to the PoMO. Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
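To see the problem, consider what a return-based translation would force. This is a hypothetical counter-example, not code from the design; the two-letter heuristic just stands in for a real stop word check:

// A function must always return something, so "no output" has to be
// faked with a marker value - and every caller must know to check it.
static string filter_stop_words(string word) {
    if (word.Length <= 2)   // stand-in for a real stop word check
        return null;        // pseudo-output meaning "word swallowed"
    return word;
}

The null marker leaks knowledge about the filter's control flow to every downstream step. That's what the continuation-based translation shown next avoids.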
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(string word, Action<string> onNoStopWord) {
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature:

class Filter {
    public void filter_stop_words(string word) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.
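To see the continuation translation in action, here is a minimal, self-contained wiring sketch. The stop word list and the downstream step Check_spelling are invented for illustration; only the wiring pattern matters:

using System;
using System.Linq;

class FlowWiring {
    static readonly string[] StopWords = { "a", "an", "the" };

    // The continuation-based functional unit from above, made concrete.
    static void filter_stop_words(string word, Action<string> onNoStopWord) {
        if (!StopWords.Contains(word))
            onNoStopWord(word);
    }

    // Hypothetical downstream functional unit. Note it knows nothing
    // about the filter upstream - mutual oblivion is preserved.
    static void Check_spelling(string word) {
        Console.WriteLine("checking: " + word);
    }

    static void Main() {
        // Wiring: the output port is connected to the downstream input port.
        foreach (var word in new[] { "the", "quick", "fox" })
            filter_stop_words(word, Check_spelling);
        // Only "quick" and "fox" reach Check_spelling.
    }
}

The event-based variant wires up once instead of per call - filter.onNoStopWord += Check_spelling; - and after that, data is just pushed in via filter.filter_stop_words(word).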
Another example of 1D functional design

Let's see Flow Design once more in action, using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution, including extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of the maximum length specified. If a word is longer than the maximum line length, it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are.

But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: Since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, to check the assumption that this works, does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.
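For illustration, here is a minimal sketch of what the composed flow might look like in code, including the trivial pass-through step. The names Split_long_words and Assemble_lines as well as all bodies are assumptions made for this sketch; the full solution handles more word wrap situations than this:

// Requires System.Collections.Generic for List<T>.
static string WordWrap(string text, int maxLineLength) {
    var words = Extract_words(text);
    var fittingWords = Split_long_words(words, maxLineLength); // phased-in step
    var lines = Assemble_lines(fittingWords, maxLineLength);
    return Reformat(lines);
}

static string[] Extract_words(string text) {
    return text.Split(' ');
}

// Trivial increment 2 version: long words are just passed on unchanged.
// Real splitting can be filled in later without touching the flow.
static string[] Split_long_words(string[] words, int maxLineLength) {
    return words;
}

static string[] Assemble_lines(string[] words, int maxLineLength) {
    var lines = new List<string>();
    var line = "";
    foreach (var word in words) {
        if (line.Length == 0)
            line = word;
        else if (line.Length + 1 + word.Length <= maxLineLength)
            line += " " + word;
        else {
            lines.Add(line);
            line = word;
        }
    }
    if (line.Length > 0) lines.Add(line);
    return lines.ToArray();
}

static string Reformat(string[] lines) {
    return string.Join("\n", lines);
}

The extension point is plainly visible: Split_long_words sits on the data flow arrow between Extract_words and Assemble_lines, and neither neighbor had to change.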
Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are OK for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • Windows CE Chat Transcript (March 30, 2010)

    - by Bruce Eitman
    For those of you who missed the chat today, here is the raw transcript.   By raw, I mean that I copied and pasted the discussion without any edits. This is divided into two parts, the top part is the answers from the Microsoft Experts and the bottom part is the questions from the audience. Answers from Microsoft:   Karel Danihelka [MS] (Expert)[2010-3-30 12:2]: Hi everyone, my name is Karel Danihelka and I am developer in partner response team. Sing Wee [MS] (Expert)[2010-3-30 12:2]: Hi, I'm Sing Wee, part of the CoreOS/BSP Test Team. GLanger_MS (Expert)[2010-3-30 12:2]: Hi, I'm Glen Langer, program manager on the Core Team. Karel Danihelka [MS] (Expert)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? A: Until you are using CPU with GHz frequency your only chance is use interrupt handler and implement all funcionality there. But it will be really tricky and may reduce system performance. If period will be near to millisecond timeframe you can use normal thread wait for event pattern. Karel Danihelka [MS] (Expert)[2010-3-30 12:5]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? A: It need to be set in registry - generally you need mark this partition as boot partition. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: My CPU is Intel celeron M processor 1Ghz. A: In this case you can try use normal approach - in interrupt handler return SYSINTR and start thread in device driver which will spin thread waiting on event attached to this SYSINTR. Karel Danihelka [MS] (Expert)[2010-3-30 12:7]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? A: Good quesiton - I would recommend documentation and there was BSP development book to download for free. mikehall_ms (Moderator)[2010-3-30 12:8]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: Using product support is the formal way to report bugs/issues - Product support can then create an issue that can be tracked. Karel Danihelka [MS] (Expert)[2010-3-30 12:9]: Q: But the operation for creating the partitions ? A: This is tricky - if you will make it autopartition & autoformat it will be created by filesystem. But generally it depends on your boot loader. mikehall_ms (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: At MIX 2010 Charlie Kindel presented a session that described some of the core technologies that make up Windows Phone 7 Series, including the underlying operating system (Windows CE) and the new ISV programming model based on Silverlight and .NET - check out the Mix Online Videos to get more information. davbo-msft (Moderator)[2010-3-30 12:10]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: This forum is to discussed released products in the industry. Windows Mobile & Windows CE are based on the same Windows CE Kernel/system. 
Windows CE is focused on deliverying the OS for embedded customers in the market where Windows Mobile is focused on deliverying compelling Windows Phone platform. davbo-msft (Moderator)[2010-3-30 12:11]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? A: http://en.wikipedia.org/wiki/Windows_CE Wikipedia gives a good breakdown of the version history. Travis Hobrla [MS] (Expert)[2010-3-30 12:13]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. A: One possibility is that a rogue cesvchost.exe has co-opted the debugger. I am assuming this is CE 6.0? Can you try exiting visual studio and manually killing the cesvchost.exe process from the Task Manager? davbo-msft (Moderator)[2010-3-30 12:14]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? A: For info on contacting Microsoft support refer to the support page on the Embedded website: http://msdn.microsoft.com/en-us/windowsembedded/dd897633.aspx Sing Wee [MS] (Expert)[2010-3-30 12:16]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? A: A good introduction to interrupts in WinCE 6.0 can be found here (aside from the documentation on MSDN): http://download.microsoft.com/download/9/c/f/9cffaa58-4000-48d6-a4b2-5fed9e4e6410/Chapter%206%20-%20Developing%20Device%20Drivers.pdf mikehall_ms (Moderator)[2010-3-30 12:16]: Q: What will be different in Windows Compact 7 from CE 6.0? A: Unfortunately we cannot discuss unreleased products on this chat - keep an eye on the Windows Embedded web site and blogs to keep up to date with product announcements. Travis Hobrla [MS] (Expert)[2010-3-30 12:16]: Q: I am using CE 6.0. There is no cesvchost process running in my system. A: What operating system are you using? Karel Danihelka [MS] (Expert)[2010-3-30 12:16]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... A: You don't need to modify code, there are registry settings to achive this (look to documentation). But you may need to create partition table in boot loader. Unfortunatelly there isn't simple way how to do it. davbo-msft (Moderator)[2010-3-30 12:18]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Windows Embedded CE 6.0 R3 includes Sliverlight for Windows Embedded. Refer to New Features overview on the embedded web site. http://www.microsoft.com/windowsembedded/en-us/products/windowsce/default.mspx. Silverlight - The power of Silverlight brought to Windows Embedded CE to create rich applications and user interfaces is new part of Windows CE Embedded. mikehall_ms (Moderator)[2010-3-30 12:20]: Q: The link for developing device drivers is not working. can u please check that? 
A: http://msdn.microsoft.com/en-us/library/ms923714.aspx davbo-msft (Moderator)[2010-3-30 12:20]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Sorry misunderstood the question I thought you were asking if embedded CE could handle Silverlight. Please repost so that the question goes back into the active queue because once answered no way to put the status back to open. Travis Hobrla [MS] (Expert)[2010-3-30 12:21]: Q: sorry! Windows XP SP3 A: Can you try exiting VS2005 and confirming cesvchost.exe is not running, then renaming C:\Documents and Settings\USERNAME\Local Settings\Application Data\Microsoft\CoreCon\1.0 to 1.0_backup, then restarting VS2005? Sing Wee [MS] (Expert)[2010-3-30 12:24]: Q: Can I have the book's name please? A: I believe the downloadable version is related to the last link I sent. If you go to the following website, I believe you can download the whole thing: http://msdn.microsoft.com/en-us/windowsembedded/ce/cc294468.aspx davbo-msft (Moderator)[2010-3-30 12:24]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: change the URI of the image or use a writeable bitmap if they want to manually toggle the pixels Sing Wee [MS] (Expert)[2010-3-30 12:25]: A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Sing Wee [MS] (Expert)[2010-3-30 12:25]: Q: Can I have the book's name please? A: Whoops, hit [ENTER] too early. On the right side, you'll see there an "Exam Preparation Kit" link that can be downloaded in several different languages. Karel Danihelka [MS] (Expert)[2010-3-30 12:26]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? A: Yes - on boot flash abstraction library (old one) read metadata from all sectors to rebuild physical - logical mapping table. davbo-msft (Moderator)[2010-3-30 12:27]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Need addition info on this question. Can you provide more details on what you are trying to do in Silverlight? davbo-msft (Moderator)[2010-3-30 12:27]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? A: Additonal Info: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:28]: Q: Writable bitmap, is there an example of the syntax? A: if you want to manually touch the pixels use WriteableBitmap if you want to use the underlying HWND then use IXRVisualHost::GetHWND() davbo-msft (Moderator)[2010-3-30 12:29]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? A: Can I get more information about this question about what you are trying to accomplish in Silverlight? davbo-msft (Moderator)[2010-3-30 12:31]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, A: Your welcome Sing Wee [MS] (Expert)[2010-3-30 12:31]: Q: ok. thanks for the book's link A: No problem. Travis Hobrla [MS] (Expert)[2010-3-30 12:32]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". 
In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. A: There are a couple workarounds I can think of. I believe the Module element is only used when doing SYSGEN parsing to make sure dependent SYSGENs are present when the item is selected, so I believe it is optional to the catalog. The other obvious workaround is to shorten the soc name. I realize neither of these solutions is ideal. This is not something we anticipated when we tested CE6.0, sorry. Travis Hobrla [MS] (Expert)[2010-3-30 12:33]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. A: Right, but KdStub is the debugger that you should use. Have you tried the steps I suggested? Travis Hobrla [MS] (Expert)[2010-3-30 12:36]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. A: Can you look at the serial debug output and see what is happening there? Often it can give you a clue to the KITL driver malfunctioning. Travis Hobrla [MS] (Expert)[2010-3-30 12:38]: Q: I have tried that and I am getting the same error. A: I am assuming you have a device created in Target -> Connectivity Options in Platform Builder. What are the Kernel download / Kernel transport for your device? Travis Hobrla [MS] (Expert)[2010-3-30 12:40]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: This looks reasonable and does not give clues as to why boot would halt at that point. If you capture a network trace or turn on KITL debug zones via dpCurSettings in kitl.dll, do you see KITL active after this? Travis Hobrla [MS] (Expert)[2010-3-30 12:41]: Q: Both is happening via Ethernet. A: Only thing I have left to suggest is a Platform Builder installation Repair, then. Karel Danihelka [MS] (Expert)[2010-3-30 12:42]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? A: If I remember correctly sample code has support for some specific hardware controllers (little obsolete now). This should be good start point (if you will not decide take existing driver as sample and write you own). Travis Hobrla [MS] (Expert)[2010-3-30 12:44]: Q: I didn't do that. I have to try. A: I think that's the next valid step. You need to figure out whether KITL is hanging or the device - use instrumented serial debug messages and network trace to determine this. 
Sing Wee [MS] (Expert)[2010-3-30 12:46]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug A: Neo, have you by any chance tried looking into your firewall to see if it might be blocking traffic on any particular ports? Wireshark/netmon might be able to help you here if that's the issue. davbo-msft (Moderator)[2010-3-30 12:48]: Q: I lost spell check, how can i get it back A: Hello - can you give additional details about your question? Is this related to a Windows CE Embedded application? masatos_MSFT (Expert)[2010-3-30 12:51]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? A: Hi Pev, what version of Windows CE and CETK are you using? I do not have the expertise to answer this question, but can find somebody who can. Travis Hobrla [MS] (Expert)[2010-3-30 12:52]: Q: I don't see a kitlcore.dll in my OS. is my debug image fails to load because of that? A: kitl.dll should be all that's needed, kitlcore.lib is linked into that. Travis Hobrla [MS] (Expert)[2010-3-30 12:55]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL as the kernel needs to talk to devices such as the power management IC and gas gauge so a 'proper' I2C driver hanging off device manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt driven in the kernel (as it's a cludge) but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel resident I2C driver? If not, are there any obvious pitfalls - I've not seen any other BSP's do this in the past... A: You can make a 'proper' driver that calls down into the OAL to do the actual I2C transactions. Alternatively you can build an interrupt-based version in the OAL where you handle everything in the ISR. There is nothing wrong with that so long as the rest of your drivers and app threads can handle longer times with interrupts off while you are servicing I2C interrupts. Sing Wee [MS] (Expert)[2010-3-30 12:55]: Q: I am having trouble with my mouse, I have the microsoft wireless mobile mouse 3000, when I push the scroll button I am suppose to have autoscroll instead it shows other web pages,Can you help me out tell me what to do!!! A: Sorry, this current chat is about Windows Embedded Compact. Hope you're able to find an answer to your question elsewhere. davbo-msft (Moderator)[2010-3-30 12:56]: Q: Is the Silverlight Animation "Spline" a BezierSpline? 
A: Spline - http://msdn.microsoft.com/en-us/library/ee501495.aspx<BR< a>>   masatos_MSFT (Expert)[2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. davbo-msft (Moderator)[2010-3-30 12:59]: Q: Spline- bad link A: http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Q: Sorry, got to tirm the ">" A: No Worries http://msdn.microsoft.com/en-us/library/ee501495.aspx davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; davbo-msft (Moderator)[2010-3-30 13:1]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. A: I will follow up with the Networking Team for a response. davbo-msft (Moderator)[2010-3-30 13:1]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? A: I will follow up with the tools team for a response and post with the transcript. Sing Wee [MS] (Expert)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. any ideas? A: Make sure you have SYSGEN_SHELL=1 set in your build environment. davbo-msft (Moderator)[2010-3-30 13:3]: Q: what are the main differences between Object Store and RAM disk ? They are both in RAM...are there performance differences ? access differences ? A: I will follow up with the Core Team and get a response posted with the transcript to MSDN  The Questions   [2010-3-30 12:57]: Thanks for the info Pev. I will follow up with the CETK experts here and get back to you. [2010-3-30 12:59]:   [2010-3-30 13:0]:   [2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded>; A special thank you to the product group members for coming out. The transcript of today’s chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>;. We’ll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats>; for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> -Windows Embedded CE 6.0 R3 Now Available! 
<http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>; [2010-3-30 13:1]: [2010-3-30 13:1]: [2010-3-30 13:1]: [2010-3-30 13:3]: neo (Guest)[2010-3-30 11:37]: Hi all KellyG (Guest)[2010-3-30 11:37]: Hi KellyG (Guest)[2010-3-30 11:37]: I have a question unrelated to windows Ce embedded, can you please help me?? neo (Guest)[2010-3-30 11:38]: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals! c neo (Guest)[2010-3-30 11:38]: yes. post it. May be i cud give a try KellyG (Guest)[2010-3-30 11:38]: My Product key listed on my tower is not the product key I need for microsoft office, but that is the only product key listed. neo (Guest)[2010-3-30 11:39]: I hope this is a chat for windows embedded. please post ur queries in office forums KellyG (Guest)[2010-3-30 11:39]: it is but i could not find a forum for office neo (Guest)[2010-3-30 11:40]: I think moderators will help u out. @ davbo-msft: can u help this guy? neo (Guest)[2010-3-30 11:41]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? davbo-msft (Moderator)[2010-3-30 11:50]: Our chat today covers the topic of Windows Embedded CE! 1. This chat will last for one hour. During this hour, our Experts will respond to as many questions as they can. Please understand that there may be some questions we cannot respond to due to lack of information or because the information is not yet public. 2. We encourage you to submit questions for our Experts. To do so, type your questions in the send box, select the “ask the Experts” box and click SEND. Questions sent directly to the Guest Chat room will not be answered by the Experts, but we encourage other community members to assist. 3. We ask that you stay on topic for the duration of the chat. This helps the Guests and Experts follow the conversation more easily. We invite you to ask off topic questions after this chat is over, but not during. 4. Please abide by the Chat Code of Conduct. Chat code of conduct: <http://msdn.microsoft.com/chats/chatroom.aspx?ctl=hlp#Conduct>; Pev (Guest)[2010-3-30 11:54]: Evening! davbo-msft (Moderator)[2010-3-30 11:54]: Hello everyone this is Dave Boyce - I worked in the Multimedia area for Windows CE. neo (Guest)[2010-3-30 11:55]: hello dave neo (Guest)[2010-3-30 11:55]: The chat code of conduct link is not working! Pev (Guest)[2010-3-30 11:56]: Best be polite just in case then ;-) neo (Guest)[2010-3-30 11:56]: davbo-msft (Moderator)[2010-3-30 11:57]: I'll check out the issue w/ the link paolopat (Guest)[2010-3-30 12:0]: Hello davbo-msft (Moderator)[2010-3-30 12:0]: We are pleased to welcome our Experts for today’s chat. I will have them introduce themselves now. Chat will begin in a couple of minutes. <http://www.Microsoft.com/Embedded>; paolopat (Guest)[2010-3-30 12:3]: Hello Experts ! neo (Guest)[2010-3-30 12:3]: Welcome all! paolopat (Guest)[2010-3-30 12:3]: Q: I want to partition my NAND Flash device. One partition to use for hive ragistry and the other for the apps and data. The only way to do it is programmatically or setting some registry values ? neo (Guest)[2010-3-30 12:3]: Q: I need to implement hardware timers on my windows CE 6.0 device to trigger events at microsecond intervals. Where should i start? neo (Guest)[2010-3-30 12:5]: Q: My CPU is Intel celeron M processor 1Ghz. 
Pev (Guest)[2010-3-30 12:5]: neo: if your silicon has multiple general purpose timers, pick one that's not in use for the system timer / profiler and set it up to trigger irqs for your purpose. You can't guarantee hard realtime type responses though... GarySwalling (Guest)[2010-3-30 12:5]: Q: Is Windows Phone 7 related to Windows CE? If so, can you tell me what version of Windows CE is the basis? Is it in fact the new version of Windows Mobile? Pev (Guest)[2010-3-30 12:6]: Q: Hi guys, what's the formal way to report bugs back to the core team / product team? The mechanism of calling the support phone number every time is really onerous and time-consuming. Is there another mechanism? paolopat (Guest)[2010-3-30 12:6]: Q: But the operation for creating the partitions ? neo (Guest)[2010-3-30 12:6]: Q: If i need to implement it using interrupt handlers, What are all the files that I should look at? GPM (Guest)[2010-3-30 12:6]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? Jhony (Guest)[2010-3-30 12:7]: Q: I created a OS design with KITL and kernel debugger enabled. But I am unable to connect to the target for debugging. I am getting the following error when i try to connect with the device. PB Debugger Cannot initialize the Kernel Debugger. PB Debugger Debugger could not initialize connection. PB Debugger The Kernel Debugger is waiting to connect with target. PB Debugger The Kernel Debugger has been disconnected successfully. Charles (Guest)[2010-3-30 12:7]: What will be different in Windows Compact 7 from CE 6.0? neo (Guest)[2010-3-30 12:8]: Can I have the book's name please? kiefs_dev (Guest)[2010-3-30 12:8]: Q: hi everybody. I would like to know if there is something know about a bug in RTC API (VOIP), especially when using SIP. According the to the analysis with application verifyier there is a heap link in rtcdllmedia.dll. All of the unreleased chunks seem to have a size of 6560 bytes. paolopat (Guest)[2010-3-30 12:10]: Q: So...I have to modify file system code to create 2 partition at system startup ?!! I haven't understood.... neo (Guest)[2010-3-30 12:10]: Q: Do u mean ISR/IST implementation? How can i register an interrupt? What kind of interrupt should i register? neo (Guest)[2010-3-30 12:11]: Q: Can I have the book's name please? Charles (Guest)[2010-3-30 12:11]: Q: What will be different in Windows Compact 7 from CE 6.0? PaulT (Guest)[2010-3-30 12:11]: neo: I'd say that you really need the docs for YOUR BSP, not generic documents for BSPs in general. Each BSP may be architected differently. If you're using the CEPC BSP, then the documentation that comes with Platform Builder is a reasonable place to look. GPM (Guest)[2010-3-30 12:11]: Q: If one has an image on a Silverlight page, it seems to be cached. How would one refresh that cache after changing the underlying image? Elektrobit (Guest)[2010-3-30 12:12]: Q: Hi, we've problems with debugging of applications (= breakpoints in Platform Builder will be ignored) over KITL on Windows CE 5.0, if the PDB files are large (over 60MB). Are there any limitations to size of the PDB files? Jhony (Guest)[2010-3-30 12:15]: Q: I am using CE 6.0. There is no cesvchost process running in my system. alexquisi (Guest)[2010-3-30 12:15]: Q: Hi, I saw that the ATADISK is quite generic and des not have any optimizations. Do you have any advice to consider while tryin to improve the performance of it? Jhony (Guest)[2010-3-30 12:17]: Windows XP service pack 1 Jhony (Guest)[2010-3-30 12:17]: Q: sorry! 
Windows XP SP3 neo (Guest)[2010-3-30 12:19]: Q: The link for developing device drivers is not working. can u please check that? paolopat (Guest)[2010-3-30 12:20]: Q: I have a NAND Flash on my target device. On this flash I have the hive registry and an application.I have observed that when the NAND flash is fully, the system startup time is longer....is there a degradation of NAND use that influences the startup time ? Why ? Pev (Guest)[2010-3-30 12:20]: Q: When attempting to run the CETK cellcore tests the documentation states the pre-requisites include "stinger.ini", "ltk.ini" but windows CE doesn't provide these or document what they fully need to contain. Implicitly you also need "datatrans.xml" which isn't supplied. If you get around this error and steal these from Windows Mobile instead, when you try and run the CETK tests you get a data abort in radiometricsdll.dll. How should we invoke the cellcore parts of CETK? Pev (Guest)[2010-3-30 12:21]: Hi all, Pev (Guest)[2010-3-30 12:21]: oops Pev (Guest)[2010-3-30 12:21]: :-D Pev (Guest)[2010-3-30 12:24]: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Pev (Guest)[2010-3-30 12:25]: Q: Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. GPM (Guest)[2010-3-30 12:25]: Q: Writable bitmap, is there an example of the syntax? Pev (Guest)[2010-3-30 12:25]: Q: Typically for SoC devices you name your hardware specific libraries in the form "SOCDIRNAME_LIBNAME". In our platform "OMAP35XX_TPS659XX_TI_V1" if you do this we cause the catalog parser to die... For example if we have a library "Musbfn_OMAP35XX_TPS659XX_TI_V1.dll" entering this in the catalogs pbcxml file in a <module> section causes the XML parser to fail with : Error 3 The 'urn:Microsoft.PlatformBuilder/Catalog:Module' element is        invalid - The value '012345678901234567890123456789.dll' is invalid         according to its datatype         'urn:Microsoft.PlatformBuilder/Catalog:CatalogFileName' - The actual         length is greater than the MaxLength value. Pev (Guest)[2010-3-30 12:25]: sorry, messed up submission there! GPM (Guest)[2010-3-30 12:26]: Q: I would like to get a handle to a Silverlight screen section, is that possiable? GPM (Guest)[2010-3-30 12:28]: Q: IXRVisualHost::GetHWND() exactly what I needed Thanks, PaulT (Guest)[2010-3-30 12:29]: GPM: You don't have to keep submitting the questions. The chat experts have an application that they're using to follow the chat and all Ask the Experts questions are logged. Jhony (Guest)[2010-3-30 12:29]: Q: I am getting this error only when I select the KdStub as the debugger in Target device connectivity. neo (Guest)[2010-3-30 12:30]: Q: ok. thanks for the book's link Pev (Guest)[2010-3-30 12:31]: Hm, did those two I submitted get picked up by anyone? neo (Guest)[2010-3-30 12:33]: Q: If I select Active KTIL, My OS doesn't boots. It says "loading NK.EXE at 0x<xxxxx> location" after that nothing comes in the debug log. 
PaulT (Guest)[2010-3-30 12:34]: Pev:
PaulT (Guest)[2010-3-30 12:35]: Pev: I'm sure they did. The guys who are actually on the chat may not be experts in that part of things. That's usually the explanation when you don't get an answer in 10 minutes or so.
Pev (Guest)[2010-3-30 12:36]: Ah, fair enough
Susie (Guest)[2010-3-30 12:36]: My Outlook Express incoming mail is corrupt. No one has been able to fix the problem, Dell or Norton. I have dial-up; I'm in a rural area.
Jhony (Guest)[2010-3-30 12:37]: Q: I have tried that and I am getting the same error.
Pev (Guest)[2010-3-30 12:37]: Susie: Use Thunderbird instead :-D
PaulT (Guest)[2010-3-30 12:37]: Susie: Sorry, but this chat is not about Windows, but Embedded (like what runs on a phone). Your best chance is to find a local expert or talk to your ISP.
paolopat (Guest)[2010-3-30 12:37]: Q: What are the main differences between the Object Store and a RAM disk? They are both in RAM... Are there performance differences? Access differences?
Susie (Guest)[2010-3-30 12:38]: My computer knowledge is very limited, what is Thunderbird?
Pev (Guest)[2010-3-30 12:38]: A different email client :-D
neo (Guest)[2010-3-30 12:39]: Q: KITL: *** Device Name CEPC56059 *** WARN: KITL will run in polling mode VBridge:: built on [Jul 10 2009] time [10:20:14] VBridgeInit()...TX = [16384] bytes -- Rx = [16384] bytes Tx buffer [0xA1B84860] to [0xA1B88860]. Rx buffer [0xA1B88880] to [0xA1B8C880]. VBridge:: NK add MAC: [0-60-65-2-DA-FB] Connecting to Desktop KITL: Connected host IP: 1 Port: 1086 .. this is the output of the serial debug.
Susie (Guest)[2010-3-30 12:39]: Do I need to uninstall Outlook Express?
youngboyzie (Guest)[2010-3-30 12:39]: I need to start battery calibration for my new battery for my Dell Inspiron 1525 laptop and should be able to reach the BIOS screen by hitting F2, but this isn't working... help?
Pev (Guest)[2010-3-30 12:39]: Nah, you can run it instead - you'll still need help from your ISP to configure it, I expect.
Jhony (Guest)[2010-3-30 12:39]: Q: Both are happening via Ethernet.
PaulT (Guest)[2010-3-30 12:40]: youngboyzie: You're off-topic. This is not a general chat for Windows and certainly not for Dell. You'll have to ask Dell how to get to setup; it's their machine.
bill (Guest)[2010-3-30 12:41]: I lost spell check, how can I get it back?
neo (Guest)[2010-3-30 12:42]: Q: I didn't do that. I have to try.
Jhony (Guest)[2010-3-30 12:43]: Q: Ok. I will do it then.
bill (Guest)[2010-3-30 12:43]: Q: I lost spell check, how can I get it back?
PaulT (Guest)[2010-3-30 12:44]: bill: This isn't a general Windows chat. There are some Web forums that you might try.
GarySwalling (Guest)[2010-3-30 12:45]: Q: Thanks, I found the Phone 7 presentation at http://live.visitmix.com/MIX10/Sessions/CL13
GPM (Guest)[2010-3-30 12:45]: Q: Is the Silverlight animation "Spline" a BezierSpline?
neo (Guest)[2010-3-30 12:46]: Q: ok. I'll do it. Thanks.
Pev (Guest)[2010-3-30 12:46]: Whoever was asking about the KITL connection: I've had this loads in the past. I think I started debugging last time by using Wireshark to see what was happening on the network, then setting up the OAL_ETHER, OAL_FUNC and OAL_VERBOSE as well as OAL_KITL flags to see what was actually happening in the driver...
Pev (Guest)[2010-3-30 12:47]: I'd generally make sure that you're testing through a 10BaseT hub (instead of anything faster) and forcing Active KITL in polled mode too...
neo (Guest)[2010-3-30 12:48]: Q: I disabled the firewall on my PC.
Pev (Guest)[2010-3-30 12:50]: Q: I've got a platform (not developed by myself) where I2C bus support has to be provided through the OAL, as the kernel needs to talk to devices such as the power management IC and gas gauge, so a 'proper' I2C driver hanging off Device Manager isn't possible. This happens to be a polled driver, so obviously it hits the system hard when either it's under lots of traffic or an error condition occurs and the driver constantly polls. I originally thought that there was no straightforward way to make such code interrupt-driven in the kernel (as it's a kludge), but I realised that that's exactly what ETHDBG drivers do. Is there any reason why I shouldn't have a go at implementing a similar mechanism for our kernel-resident I2C driver? If not, are there any obvious pitfalls? I've not seen any other BSPs do this in the past...
neo (Guest)[2010-3-30 12:51]: Q: I don't see a kitlcore.dll in my OS. Is my debug image failing to load because of that?
Pev (Guest)[2010-3-30 12:52]: Q: Hi masatos, I'm using Windows Embedded CE 6.0 with R3, patched to Feb 2010's QFEs (with its associated CETK version); this is a machine with only CE 6.0 on it (no conflicts with earlier CE or WM...)
neo (Guest)[2010-3-30 12:54]: ok. Got it
neo (Guest)[2010-3-30 12:54]: Q: ok. Got it
Roundman (Guest)[2010-3-30 12:55]: Q: I am having trouble with my mouse. I have the Microsoft Wireless Mobile Mouse 3000; when I push the scroll button I am supposed to get autoscroll, but instead it shows other web pages. Can you help me out and tell me what to do?!
Pev (Guest)[2010-3-30 12:55]: Hey neo, debugging KITL issues is really frustrating, but don't lose heart :-)
neo (Guest)[2010-3-30 12:55]: @Pev: you fixed the KITL problem after that?
neo (Guest)[2010-3-30 12:56]: I am getting the same error again and again. I even cleaned my environment and tried on a fresh PC, but I haven't succeeded yet.
Pev (Guest)[2010-3-30 12:56]: Well, eventually - my experiences probably won't help you, as different platforms have different reasons for doing that.
neo (Guest)[2010-3-30 12:56]: I think so
PaulT (Guest)[2010-3-30 12:58]: neo: Have you searched the old messages in microsoft.public.windowsce.platbuilder? It seems to me that there was a packet size situation where it was possible to have problems with KITL connections based on a setting on the PC. Google Groups, groups.google.com, Advanced Groups Search will allow you to search a single newsgroup or a set of newsgroups easily.
GPM (Guest)[2010-3-30 12:58]: Q: Spline - bad link
PaulT (Guest)[2010-3-30 12:58]: GPM: Without the > at the end, does it work? It seems to for me...
neo (Guest)[2010-3-30 12:58]: But sometimes if I try to connect to the device again, the image information is seen in the serial debug. What does that mean? Download BIN file information: ----------------------------------------------------- [0]: Base Address=0x220000 Length=0x18DAADC Received a broadcast message !CheckUDP: Not UDP (proto = 0x00000001) After this I am getting the old errors: PB Debugger cannot initialize...
GPM (Guest)[2010-3-30 12:59]: Q: Sorry, got to trim the ">"
neo (Guest)[2010-3-30 12:59]: OK Paul, I'll look into that.
davbo-msft (Moderator)[2010-3-30 13:0]: Hello everyone, we are just about out of time. Thank you for joining us for our Windows Embedded CE 6.0 chat today! <http://www.Microsoft.com/Embedded> A special thank you to the product group members for coming out. The transcript of today's chat will be posted online as soon as possible, to <http://msdn.microsoft.com/en-us/chats>. We'll see you again for another chat next month. Please check <http://msdn.microsoft.com/en-us/chats> for the list of upcoming chats. If you still have unanswered questions, let me suggest that you post them on one of our newsgroups on <http://msdn.microsoft.com/en-us/windowsembedded/ce/default.aspx> - Windows Embedded CE 6.0 R3 Now Available! <http://msdn.microsoft.com/windowsembedded/ce/dd630616.aspx>
neo (Guest)[2010-3-30 13:1]: Q: I am unable to use the target control in my development environment. Any ideas?
Pev (Guest)[2010-3-30 13:1]: Sure, if KITL isn't connected target control won't work, as it runs over KITL...
neo (Guest)[2010-3-30 13:1]: ok. Thanks Pev
neo (Guest)[2010-3-30 13:2]: yes. sysgen_shell is set to 1
neo (Guest)[2010-3-30 13:2]: Q: yes. sysgen_shell is set to 1
Marcelovk (Guest)[2010-3-30 13:2]: Q: Is there any way to extract the default command lines of the tests in CETK? I want to have it running unconnected from the desktop.

Copyright © 2010 – Bruce Eitman All Rights Reserved


  • WCF AuthenticationService in IIS7 Error

    - by germandb
    I have a WCF server running on IIS 7 using the default application pool, with SSL activated; the services are installed on an SBS 2008 server. I implemented client application services with WCF and SQL Server 2005 to handle access control in my application. The application runs under Windows Vista and is built with WPF. On my development machine the application and the WCF services run well; the IIS I use for the trials is the local IIS 7, and the database is the SQL Server 2005 database hosted on my server. I'm using the Visual Studio project designer to enable and configure client application services, using https://localhost/WcfServidorFundacion. When I change the authentication services location to https://WcfServices:5659/WcfServidorFundacion and recompile the application, the following error shows up:
Message: The web service returned the error status code: InternalServerError. Details of service failure: {"Message":" Error while processing your request ","StackTrace":"","ExceptionType":""}
Stack Trace: at System.Net.HttpWebRequest.GetResponse() at System.Web.ClientServices.Providers.ProxyHelper.CreateWebRequestAndGetResponse(String serverUri, CookieContainer& cookies, String username, String connectionString, String connectionStringProvider, String[] paramNames, Object[] paramValues, Type returnType)
InnerException: System.Net.WebException Message="Remote Server Error: (500) Internal Server Error."
I can access the WCF service from the browser using the URL mentioned above and can even add a web reference to it in my project. I made a capture of the response, but I can't post it because I don't have 10 reputation points. I activated the error log on the IIS 7 server, and the result is a warning in the ManagedPipelineHandler:
Errors & Warnings (entry No. 132):
Severity: Warning
Event: MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName: ManagedPipelineHandler
Notification: 128
HttpStatus: 500
HttpReason: Internal Server Error
HttpSubStatus: 0
ErrorCode: 0
ConfigExceptionInfo:
Notification: EXECUTE_REQUEST_HANDLER
ErrorCode: The operation completed successfully. (0x0)
I would appreciate it if anyone can help me. Maybe this can help; this is the web.config of my service: <?xml version="1.0" encoding="utf-8"?> <!-- Note: as an alternative to manually editing this file, you can use the Web Site Administration Tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio.
You can find a complete list of configuration settings and comments in machine.config.comments, which is usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings /> <connectionStrings> <remove name="LocalMySqlServer" /> <remove name="LocalSqlServer" /> <add name="fundacionSelfAut" connectionString="Data Source=FUNDACIONSERVER/PRUEBAS;Initial Catalog=fundacion;User ID=wcfBaseDatos;Password=qwerty_2009;" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <profile enabled="true" defaultProvider="SqlProfileProvider"> <providers> <clear /> <add name="SqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" /> </providers> <properties> <add name="FirstName" type="String" /> <add name="LastName" type="String" /> <add name="PhoneNumber" type="String" /> </properties> </profile> <roleManager enabled="true" defaultProvider="SqlRoleProvider"> <providers> <clear /> <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" /> </providers> </roleManager> <membership defaultProvider="SqlMembershipProvider"> <providers> <clear /> <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="true" requiresUniqueEmail="true" passwordFormat="Hashed" /> </providers> </membership> <authentication mode="Forms" /> <compilation
debug="true" strict="false" explicit="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <!-- La sección <authentication> permite la configuración del modo de autenticación de seguridad utilizado por ASP.NET para identificar a un usuario entrante. --> <!-- La sección <customErrors> permite configurar las acciones que se deben llevar a cabo/cuando un error no controlado tiene lugar durante la ejecución de una solicitud. Específicamente, permite a los desarrolladores configurar páginas de error html que se mostrarán en lugar de un seguimiento de pila de errores. <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> <sessionState timeout="40" /> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="WarnAsError" value="false" /> </compiler> </compilers> </system.codedom> <!-- La sección webServer del sistema es necesaria para ejecutar ASP.NET AJAX en Internet Information Services 7.0. Sin embargo, no es necesaria para la versión anterior de IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> <tracing> <traceFailedRequests> <add path="*"> <traceAreas> <add provider="ASP" verbosity="Verbose" /> <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" /> <add provider="ISAPI Extension" verbosity="Verbose" /> <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" /> </traceAreas> <failureDefinitions statusCodes="401.3,500,403,404,405" /> </add> </traceFailedRequests> </tracing> <security> <authorization> <add accessType="Allow" users="germanbarbosa,informatica" /> </authorization> <authentication> <windowsAuthentication enabled="false" /> </authentication> </security> </system.webServer> <system.web.extensions> <scripting> <webServices> <authenticationService enabled="true" requireSSL="true" /> <profileService enabled="true" readAccessProperties="FirstName,LastName,PhoneNumber" /> <roleService enabled="true" /> </webServices> </scripting> </system.web.extensions> <system.serviceModel> <services> <!-- this enables the WCF AuthenticationService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.AuthenticationService"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="userHttps" bindingNamespace="http://asp.net/ApplicationServices/v200" contract="System.Web.ApplicationServices.AuthenticationService" /> </service> <!-- this enables the WCF RoleService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.RoleService"> <endpoint binding="basicHttpBinding" bindingConfiguration="userHttps" bindingNamespace="http://asp.net/ApplicationServices/v200" contract="System.Web.ApplicationServices.RoleService" /> </service> <!-- this enables the WCF ProfileService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.ProfileService"> <endpoint binding="basicHttpBinding" bindingNamespace="http://asp.net/ApplicationServices/v200" bindingConfiguration="userHttps" contract="System.Web.ApplicationServices.ProfileService" /> </service> </services> <bindings> <basicHttpBinding> <!-- Set up a binding that uses Username as the client credential type --> <binding name="userHttps"> <security mode="Transport"> </security> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="AppServiceBehaviors"> 
<serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" /> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="MembershipProvider" membershipProviderName="SqlMembershipProvider" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> </system.serviceModel> </configuration>
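One way to narrow down a failure like this: the client application services proxy surfaces only the generic 500 status and hides the response body, so calling the membership provider directly from a scratch console project and dumping the failed response often reveals the real server-side exception. This is a minimal sketch, not from the original post; it assumes the console project references System.Web and System.Web.Extensions and that its app.config already routes ClientFormsAuthenticationMembershipProvider at the https://WcfServices:5659 endpoint; the credentials shown are placeholders.

    using System;
    using System.IO;
    using System.Net;
    using System.Web.Security;

    class AuthServiceSmokeTest
    {
        static void Main()
        {
            try
            {
                // With client application services configured, ValidateUser is
                // routed through ClientFormsAuthenticationMembershipProvider to
                // the remote AuthenticationService endpoint over HTTPS.
                bool ok = Membership.ValidateUser("testUser", "testPassword");
                Console.WriteLine(ok ? "Authenticated" : "Credentials rejected");
            }
            catch (WebException ex)
            {
                if (ex.Response == null) throw;
                // The body of the 500 response usually carries the real
                // server-side exception text that the proxy message hides.
                using (StreamReader reader = new StreamReader(ex.Response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }

Seeing the actual exception text usually tells you whether the 500 comes from the service's own configuration (a certificate or host-name mismatch on the new https address is a common cause) or from the membership database connection on the server side.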


  • Unrecognized selector sent to instance in Xcode using Objective-C and SUP as backend

    - by user1765037
    I am a beginner in native development. I made a project using Xcode in Objective-C. It built successfully, but when I run the project an error appears: 'unrecognized selector sent to instance'. Why is this happening? Can anyone help me solve it? I am attaching the error that I am getting, and I am posting the code below. ConnectionController.m #import "ConnectionController.h" #import "SUPApplication.h" #import "Flight_DetailsFlightDB.h" #import "CallbackHandler.h" @interface ConnectionController() @property (nonatomic,retain)CallbackHandler *callbackhandler; @end @implementation ConnectionController @synthesize callbackhandler; static ConnectionController *appConnectionController; //Begin Application Setup +(void)beginApplicationSetup { if(!appConnectionController) { appConnectionController = [[[ConnectionController alloc]init]retain]; appConnectionController.callbackhandler = [[CallbackHandler getInstance]retain]; } if([SUPApplication connectionStatus] == SUPConnectionStatus_DISCONNECTED) [appConnectionController setupApplicationConnection]; else NSLog(@"Already Connected"); } -(BOOL)setupApplicationConnection { SUPApplication *app = [SUPApplication getInstance]; [app setApplicationIdentifier:@"HWC"]; [app setApplicationCallback:self.callbackhandler]; NSLog(@"inside setupApp"); SUPConnectionProperties *properties = [app connectionProperties]; NSLog(@"server"); [properties setServerName:@"sapecxxx.xxx.com"]; NSLog(@"inside setupAppser"); [properties setPortNumber:5001]; NSLog(@"inside setupApppot"); [properties setFarmId:@"0"]; NSLog(@"inside setupAppfarm"); [properties setUrlSuffix:@"/tm/?cid=%cid%"]; NSLog(@"inside setupAppurlsuff"); [properties setNetworkProtocol:@"http"]; SUPLoginCredentials *loginCred = [SUPLoginCredentials getInstance]; NSLog(@"inside setupAppmac"); [loginCred setUsername:@"mac"]; [loginCred setPassword:nil]; [properties setLoginCredentials:loginCred]; [properties setActivationCode:@"1234"]; if(![Flight_DetailsFlightDB databaseExists]) { [Flight_DetailsFlightDB createDatabase]; } SUPConnectionProfile *connprofile = [Flight_DetailsFlightDB getSynchronizationProfile]; [connprofile setNetworkProtocol:@"http"]; NSLog(@"inside setupAppPort2"); [connprofile setPortNumber:2480]; NSLog(@"inside setupAppser2"); [connprofile setServerName:@"sapecxxx.xxx.com"]; NSLog(@"inside setupAppdom2"); [connprofile setDomainName:@"Development"]; NSLog(@"inside setupAppuser"); [connprofile setUser:@"supAdmin"]; [connprofile setPassword:@"s3pAdmin"]; [connprofile setAsyncReplay:YES]; [connprofile setClientId:@"0"]; // [Flight_DetailsFlightDB beginOnlineLogin:@"supAdmin" password:@"s3pAdmin"]; [Flight_DetailsFlightDB registerCallbackHandler:self.callbackhandler]; [Flight_DetailsFlightDB setApplication:app]; if([SUPApplication connectionStatus] == SUPRegistrationStatus_REGISTERED) { [app startConnection:0]; } else { [app registerApplication:0]; } } @end ViewController.m #import "Demo_FlightsViewController.h" #import "ConnectionController.h" #import "Flight_DetailsFlightDB.h" #import "SUPObjectList.h" #import "Flight_DetailsSessionPersonalization.h" #import "Flight_DetailsFlight_MBO.h" #import "Flight_DetailsPersonalizationParameters.h" @interface Demo_FlightsViewController () @end @implementation Demo_FlightsViewController - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib.
} -(IBAction)Connect:(id)sender { @try { [ConnectionController beginApplicationSetup]; } @catch (NSException *exception) { NSLog(@"ConnectionAborted"); } // synchronise } -(IBAction)Synchronise:(id)sender { @try { [Flight_DetailsFlightDB synchronize]; NSLog(@"SYNCHRONISED"); } @catch (NSException *exception) { NSLog(@"Synchronisation Failed"); } } -(IBAction)findall:(id)sender { SUPObjectList *list = [Flight_DetailsSessionPersonalization findAll]; NSLog(@"no of lines got synchronised is %d",list.size); } -(IBAction)confirm:(id)sender { Flight_DetailsPersonalizationParameters *pp = [Flight_DetailsFlightDB getPersonalizationParameters]; MBOLogInfo(@"personalisation parmeter for airline id= %@",pp.Airline_PK); [pp setAirline_PK:@"AA"]; [pp save]; while([Flight_DetailsFlightDB hasPendingOperations]) { [NSThread sleepForTimeInterval:1]; } NSLog(@"inside confirm............"); [Flight_DetailsFlightDB beginSynchronize]; Flight_DetailsFlight_MBO *flight = nil; SUPObjectList *cl = nil; cl =[Flight_DetailsFlight_MBO findAll]; if(cl && cl.length > 0) { int i; for(i=0;i<cl.length;i++) { flight = [cl item:i]; if(flight) { NSLog(@"details are %@",flight.CITYFROM); } } } } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end SUPConnectionProfile.h #import "sybase_sup.h" #define FROM_IMPORT_THREAD TRUE #define FROM_APP_THREAD FALSE #define SUP_UL_MAX_CACHE_SIZE 10485760 @class SUPBooleanUtil; @class SUPNumberUtil; @class SUPStringList; @class SUPStringUtil; @class SUPPersistenceException; @class SUPLoginCertificate; @class SUPLoginCredentials; @class SUPConnectionProfile; /*! @class SUPConnectionProfile @abstract This class contains fields and methods needed to connect and authenticate to an SUP server. @discussion */ @interface SUPConnectionProfile : NSObject { SUPConnectionProfile* _syncProfile; SUPBoolean _threadLocal; SUPString _wrapperData; NSMutableDictionary* _delegate; SUPLoginCertificate* _certificate; SUPLoginCredentials* _credentials; int32_t _maxDbConnections; BOOL initTraceCalled; } /*! @method @abstract Return a new instance of SUPConnectionProfile. @discussion @result The SUPconnectionprofile object. */ + (SUPConnectionProfile*)getInstance; /*! @method @abstract Return a new instance of SUPConnectionProfile. @discussion This method is deprecated. use getInstance instead. @result The SUPconnectionprofile object. */ + (SUPConnectionProfile*)newInstance DEPRECATED_ATTRIBUTE NS_RETURNS_NON_RETAINED; - (SUPConnectionProfile*)init; /*! @property @abstract The sync profile. @discussion */ @property(readwrite, retain, nonatomic) SUPConnectionProfile* syncProfile; /*! @property @abstract The maximum number of active DB connections allowed @discussion Default value is 4, but can be changed by application developer. */ @property(readwrite, assign, nonatomic) int32_t maxDbConnections; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPString value for a given string. @discussion @param name The string. */ - (SUPString)getString:(SUPString)name; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPString value for a given string. If the value is not found, returns 'defaultValue'. @discussion @param name The string. 
@param defaultValue The default Value. */ - (SUPString)getStringWithDefault:(SUPString)name:(SUPString)defaultValue; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPBoolean value for a given string. @discussion @param name The string. */ - (SUPBoolean)getBoolean:(SUPString)name; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPBoolean value for a given string. If the value is not found, returns 'defaultValue'. @discussion @param name The string. @param defaultValue The default Value. */ - (SUPBoolean)getBooleanWithDefault:(SUPString)name:(SUPBoolean)defaultValue; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPInt value for a given string. @discussion @param name The string. */ - (SUPInt)getInt:(SUPString)name; /*! @method @abstract The SUPConnectionProfile manages an internal dictionary of key value pairs. This method returns the SUPInt value for a given string. If the value is not found, returns 'defaultValue'. @discussion @param name The string. @param defaultValue The default Value. */ - (SUPInt)getIntWithDefault:(SUPString)name:(SUPInt)defaultValue; /*! @method getUPA @abstract retrieve upa from profile @discussion if it is in profile's dictionary, it returns value for key "upa"; if it is not found in profile, it composes the upa value from base64 encoding of username:password; and also inserts it into profile's dictionary. @param none @result return string value of upa. */ - (SUPString)getUPA; /*! @method @abstract Sets the SUPString 'value' for the given 'name'. @discussion @param name The name. @param value The value. */ - (void)setString:(SUPString)name:(SUPString)value; /*! @method @abstract Sets the SUPBoolean 'value' for the given 'name'. @discussion @param name The name. @param value The value. */ - (void)setBoolean:(SUPString)name:(SUPBoolean)value; /*! @method @abstract Sets the SUPInt 'value' for the given 'name'. @discussion @param name The name. @param value The value. */ - (void)setInt:(SUPString)name:(SUPInt)value; /*! @method @abstract Sets the username. @discussion @param value The value. */ - (void)setUser:(SUPString)value; /*! @method @abstract Sets the password. @discussion @param value The value. */ - (void)setPassword:(SUPString)value; /*! @method @abstract Sets the ClientId. @discussion @param value The value. */ - (void)setClientId:(SUPString)value; /*! @method @abstract Returns the databasename. @discussion @param value The value. */ - (SUPString)databaseName; /*! @method @abstract Sets the databasename. @discussion @param value The value. */ - (void)setDatabaseName:(SUPString)value; @property(readwrite,copy, nonatomic) SUPString databaseName; /*! @method @abstract Gets the encryption key. @discussion @result The encryption key. */ - (SUPString)getEncryptionKey; /*! @method @abstract Sets the encryption key. @discussion @param value The value. */ - (void)setEncryptionKey:(SUPString)value; @property(readwrite,copy, nonatomic) SUPString encryptionKey; /*! @property @abstract The authentication credentials (username/password or certificate) for this profile. @discussion */ @property(retain,readwrite,nonatomic) SUPLoginCredentials *credentials; /*! @property @abstract The authentication certificate. @discussion If this is not null, certificate will be used for authentication. 
If this is null, credentials property (username/password) will be used. */ @property(readwrite,retain,nonatomic) SUPLoginCertificate *certificate; @property(readwrite, assign, nonatomic) BOOL initTraceCalled; /*! @method @abstract Gets the UltraLite collation creation parameter @discussion @result conllation string */ - (SUPString)getCollation; /*! @method @abstract Sets the UltraLite collation creation parameter @discussion @param value The value. */ - (void)setCollation:(SUPString)value; @property(readwrite,copy, nonatomic) SUPString collation; /*! @method @abstract Gets the maximum cache size in bytes; the default value for iOS is 10485760 (10 MB). @discussion @result max cache size */ - (int)getCacheSize; /*! @method @abstract Sets the maximum cache size in bytes. @discussion For Ultralite, passes the cache_max_size property into the connection parameters for DB connections; For SQLite, executes the "PRAGMA cache_size" statement when a connection is opened. @param cacheSize value */ - (void)setCacheSize:(int)cacheSize; @property(readwrite,assign, nonatomic) int cacheSize; /*! @method @abstract Returns the user. @discussion @result The username. */ - (SUPString)getUser; /*! @method @abstract Returns the password hash value. @discussion @result The password hash value. */ - (NSUInteger)getPasswordHash; /*! @method @abstract Returns the password. @discussion @result The password hash value. */ - (NSString*)getPassword; /*! @method @abstract Adds a new key value pair. @discussion @param key The key. @param value The value. */ - (void)add:(SUPString)key:(SUPString)value; /*! @method @abstract Removes the key. @discussion @param key The key to remove. */ - (void)remove:(SUPString)key; - (void)clear; /*! @method @abstract Returns a boolean indicating if the key is present. @discussion @param key The key. @result The result indicating if the key is present. */ - (SUPBoolean)containsKey:(SUPString)key; /*! @method @abstract Returns the item for the given key. @discussion @param key The key. @result The item. */ - (SUPString)item:(SUPString)key; /*! @method @abstract Returns the list of keys. @discussion @result The keylist. */ - (SUPStringList*)keys; /*! @method @abstract Returns the list of values. @discussion @result The value list. */ - (SUPStringList*)values; /*! @method @abstract Returns the internal map of key value pairs. @discussion @result The NSMutableDictionary with key value pairs. */ - (NSMutableDictionary*)internalMap; /*! @method @abstract Returns the domain name. @result The domain name. @discussion */ - (SUPString)getDomainName; /*! @method @abstract Sets the domain name. @param value The domain name. @discussion */ - (void)setDomainName:(SUPString)value; /*! @method @abstract Get async operation replay property. Default is true. @result YES : if ansync operation replay is enabled; NO: if async operation is disabled. @discussion */ - (BOOL) getAsyncReplay; /*! @method @abstract Set async operation replay property. Default is true. @result value: enable/disable async replay operation. @discussion */ - (void) setAsyncReplay:(BOOL) value; /*! @method @abstract enable or disable the trace in client object API. @param enable - YES: enable the trace; NO: disable the trace. @discussion */ - (void)enableTrace:(BOOL)enable; /*! @method @abstract enable or disable the trace with payload info in client object API. @param enable - YES: enable the trace; NO: disable the trace. @param withPayload = YES: show payload information; NO: not show payload information. 
@discussion */ - (void)enableTrace:(BOOL)enable withPayload:(BOOL)withPayload; /*! @method @abstract initialize trace levels from server configuration. @discussion */ - (void)initTrace; - (void)dealloc; /* ultralite/mobilink required parameters */ - (SUPString)getNetworkProtocol; - (void)setNetworkProtocol:(SUPString)protocol; - (SUPString)getNetworkStreamParams; - (void)setNetworkStreamParams:(SUPString)stream; - (SUPString)getServerName; - (void)setServerName:(SUPString)name; - (int)getPortNumber; - (void)setPortNumber:(int)port; - (int)getPageSize; - (void)setPageSize:(int)size; @end @interface SUPConnectionProfile(internal) - (void)applyPropertiesFromApplication; @end We are using the SUP 2.1.3 library files. Please go through the code and help me...
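A note on the error itself: 'unrecognized selector sent to instance' means a message was dispatched to an object that does not implement that method, most commonly because an action or outlet in the nib is wired to a method name that does not exist on the wired object, or because a pointer actually refers to an instance of a different class than expected. A minimal, generic diagnostic sketch; it reuses names from the code above purely for illustration, not as the fix for this specific project:

    // Before invoking a suspect method, confirm the receiver implements it;
    // respondsToSelector: is standard NSObject API.
    if ([appConnectionController respondsToSelector:@selector(setupApplicationConnection)])
    {
        [appConnectionController setupApplicationConnection];
    }
    else
    {
        // Log the receiver's actual class; a mismatch here usually means the
        // wrong object is wired up or the pointer was reassigned elsewhere.
        NSLog(@"%@ does not respond to setupApplicationConnection",
              NSStringFromClass([appConnectionController class]));
    }

Setting an exception breakpoint on objc_exception_throw in Xcode will also stop the debugger at the exact call site that raised the error, which the runtime message alone does not reveal.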

