Search Results

Search found 5157 results on 207 pages for 'checking'.

  • What Logs / Process Stats to monitor on an Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them to see if I have things like memory leaks. But I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly do a grep on auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW) but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks.
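    On the disk-check point, a plausible sketch of the standard approach (the device name /dev/sda1 is a placeholder; the tune2fs flags apply to ext2/3/4 filesystems):

        # See when the filesystem was last checked and the current thresholds
        sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'
        # Disable the "not checked for x days, forcing check" boot-time trigger...
        sudo tune2fs -c 0 -i 0 /dev/sda1
        # ...and instead request a check yourself before a planned reboot
        sudo touch /forcefsck
        # Cron-friendly variant of the auth.log audit mentioned above
        grep -E 'Fail|Invalid' /var/log/auth.log | tail -n 50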

    Read the article

  • Verizon overbilling me?

    - by bek
    I have a Verizon air card for my internet, which uses VZAccess Manager. I have had it for about a year, keeping within my usage allowance the whole time until 3 months ago, when my bill came in and my 5,000 g allowance had run up to 18,000. It has continued to do so through the last 3 months. My usage reset itself yesterday for the new month. I got on Google for about 5 minutes and chatted for about an hour yesterday online. Mind you, I have never done anything any different than I do every day: I do not download music or watch videos on my computer, nothing like that. Well, since last night at 8 pm my usage says 1109.525 gb. ALREADY! For being on the internet for an hour with no downloads. What could be causing this? Please throw me some ideas. Verizon is checking on the problem, but that usually doesn't get me the answer I want. Could someone be hacking my card and using the internet through it? I have been told that that is not possible, however I think with the internet anything is possible.

    Read the article

  • file system damage

    - by jffrs
    I am trying to recover the backup superblock on /dev/sda2, which contains Ubuntu 12.04 LTS on an ext4 partition, using an Ubuntu 10.04 live CD. The output is below:

        root@ubuntu:/home/ubuntu# fsck.ext4 -b 163840 -B 4096 /dev/sda2
        e2fsck 1.41.11 (14-Mar-2010)
        /dev/sda2 was not cleanly unmounted, check forced.
        Resize inode not valid. Recreate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Programming error? block #7963637 claimed for no reason in process_bad_block.
        Programming error? block #11240437 claimed for no reason in process_bad_block.
        Root inode is not a directory. Clear? yes
        Inode 712 is in extent format, but superblock is missing EXTENTS feature Fix? yes
        Inode 98519 has compression flag set on filesystem without compression support. Clear? yes
        Inode 98519 has INDEX_FL flag set but is not a directory. Clear HTree index?

    What's the correct procedure?
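    For reference, a sketch of how the remaining backup superblocks are usually located before retrying (mke2fs -n only simulates and writes nothing, but the block size given must match the filesystem's; /dev/sda2 as in the question):

        # list backup superblock locations without touching the disk
        sudo mke2fs -n -b 4096 /dev/sda2
        # or, if the primary superblock is still readable:
        sudo dumpe2fs /dev/sda2 | grep -i superblock
        # then retry e2fsck against a different backup, e.g.:
        sudo e2fsck -b 32768 -B 4096 /dev/sda2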

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table:

        CREATE TABLE `ref` (
          `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here are the queries:

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
        VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here are some notes:
    • The SELECTs will be done much more frequently than the INSERT. However, occasionally I want to add a few hundred records at a time.
    • Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    • I don't think I can normalize any more (I need the p values in a combination).
    • The database as a whole is very relational.
    • This will be the largest table by far (the next largest is about 900k).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. So that makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it or stick with the first? [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
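    If the second option is taken, the whole schema collapses to one indexed column; a minimal sketch of that existence table (the name ref_exists and the literal values are illustrative):

        CREATE TABLE ref_exists (
          v BIGINT UNSIGNED NOT NULL,
          PRIMARY KEY (v)
        ) ENGINE=InnoDB;

        -- existence check: a single primary-key lookup
        SELECT 1 FROM ref_exists WHERE v = 26001234567890123;

        -- occasional batch additions; IGNORE skips values already present
        INSERT IGNORE INTO ref_exists (v) VALUES (26001234567890123), (26001234567890124);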

    Read the article

  • Cannot open /dev/rfcomm1 : Host is down

    - by srj0408
    I am working on a Raspberry Pi and on Bluetooth. I am using an old Raspberry Pi kernel, as the new one has some bugs that were not resolved with respect to the bluez daemon. At present my kernel version is 3.6.11. I am using a USB Bluetooth dongle and my sole purpose is to auto-connect the Bluetooth dongle whenever it is in range. For that, I think I have to run a script in the background on the RPi that will keep checking for the existence of the USB Bluetooth dongle. I started from the very scratch. I installed the bluez daemon using apt-get install bluetooth bluez-utils blueman, and then I used hciconfig, which shows that my Bluetooth USB dongle is working fine. But when I did hcitool scan, it gave me no device in range even though my serial Bluetooth device was on; I wasn't able to find any device in the vicinity. Also, when I unplugged and plugged the USB dongle again, I was able to scan the serial device, but when I repeat the process I find the earlier condition of not finding any device. I found another useful link, but that needs the address of the Bluetooth device to be connected. I want to automate this using hcitool scan, storing the output to a file and then comparing it with already paired devices and their names. For that I need to figure out why hcitool scan is sometimes working and sometimes not. Can someone help me figure out why this is happening? Is there a problem on the hardware side, i.e. is the Bluetooth dongle buggy, or do I have some problem in the bluez utils? Edit 1: As of now, hcitool scan is giving me my remote device address, but I am still getting the same "Host is down" issue for '/dev/rfcomm1'. I am really not getting any idea of what to be done.
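    A rough sketch of the polling script described above (the MAC address and RFCOMM channel are placeholders; assumes the BlueZ 4-era hcitool/rfcomm tools that ship with that kernel generation):

        #!/bin/sh
        TARGET="00:11:22:33:44:55"    # placeholder: address of the paired serial device
        while true; do
            # cycling the adapter often clears the "scan works only once" state
            hciconfig hci0 down && hciconfig hci0 up
            if hcitool scan | grep -qi "$TARGET"; then
                # channel 1 is a guess; verify with: sdptool browse $TARGET
                rfcomm connect /dev/rfcomm1 "$TARGET" 1
            fi
            sleep 10
        done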

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy

    The APIs that make up sessionState and localStorage are very simple. Both objects feature the same API interface: a simple, string based key value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array, with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");
        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is browser session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

    The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview

    Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
    The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

        function getSessionCount() {
            var count = 0;
            if (sessionStorage) {
                var count = sessionStorage.getItem("ss_count");
                count = !count ? 0 : count * 1;
            }
            $("#txtSession").val(count);
            return count;
        }

        function setSessionCount(count) {
            if (sessionStorage)
                sessionStorage.setItem("ss_count", count.toString());
        }

    These two functions essentially load and store a session counter value. The two key methods used here are:

        sessionStorage.getItem(key);
        sessionStorage.setItem(key, stringVal);

    Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

        var user = { name: "Rick", id: "ricks", level: 8 };
        sessionStorage.setItem("app_user", JSON.stringify(user));

    To retrieve it:

        var user = sessionStorage.getItem("app_user");
        if (user)
            user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage.

    What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?

    I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverFlow that sort of works because content is turning around so quickly that you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

    Content Caching

    If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A real World Use Case

    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

    However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px)

    In the mobile, non-frames view, the list of classifieds posts now is a plain list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
    Using Client SessionStorage to cache Server Rendered Content

    To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

    Let's start with the server side here, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL when returning to the list page from the display page (or a host of other pages) looks like this:

        https://classifieds.gorge.net/list?back=True

    The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

        @model MessageListViewModel
        @{
            ViewBag.Title = "Classified Listing";
            bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
        }
        <form method="post" action="@Url.Action("list")">
            <div id="SizingContainer">
                @if (!isBack)
                {
                    @Html.Partial("List_CommandBar_Partial", Model)
                    <div id="PostItemContainer" class="scrollbox"
                         xstyle="-webkit-overflow-scrolling: touch;">
                        @Html.Partial("List_Items_Partial", Model)
                        @if (Model.RequireLoadEntry)
                        {
                            <div class="postitem loadpostitems" style="padding: 15px;">
                                <div id="LoadProgress" class="smallprogressright"></div>
                                <div class="control-progress">
                                    Load additional listings...
                                </div>
                            </div>
                        }
                    </div>
                }
            </div>
        </form>

    As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

    Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into separate functions because they are almost always used in several places:

        page.saveData = function(id) {
            if (!sessionStorage)
                return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            sessionStorage.setItem("list_html", JSON.stringify(data));
        };

        page.restoreData = function() {
            if (!sessionStorage)
                return;
            var data = sessionStorage.getItem("list_html");
            if (!data)
                return null;
            return JSON.parse(data);
        };

    The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
    Finally the html from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK. Since I'm using SessionStorage, the storage space has no immediate limits.

    Next is the core logic to handle saving and restoring the page state. At first this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

        $( function() {
            …
            var isBack = getUrlEncodedKey("back", location.href);
            if (isBack) {
                // remove the back key from URL
                setUrlEncodedKey("back", "", location.href);

                var data = page.restoreData();  // restore from sessionState
                if (!data) {
                    // no data - force redisplay of the server side default list
                    window.location = "list";
                    return;
                }
                $("#SizingContainer").html(data.html);

                var el = $(".postitem[data-id=" + data.id + "]");
                $(".postitem").removeClass("highlight");
                el.addClass("highlight");

                $("#PostItemContainer").scrollTop(data.scroll);
                setTimeout(function() {
                    el.removeClass("highlight");
                }, 2500);
            }
            else if (window.noFrames)
                page.saveData(null);  // save when page loads

            $("#SizingContainer").on("click", ".postitem", function() {
                var id = $(this).attr("data-id");
                if (!id)
                    return true;

                if (window.noFrames)
                    page.saveData(id);

                var contentFrame = window.parent.frames["Content"];
                if (contentFrame)
                    contentFrame.location.href = "show/" + id;
                else
                    window.location.href = "show/" + id;
                return false;
            });
            …

    The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If it's set, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html content is restored by simply injecting the HTML back into the document's #SizingContainer element:

        $("#SizingContainer").html(data.html);

    It's that simple, and it's quite quick even with a fully loaded list of additional items, and on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the saved cache data. The id is used to reset the selection by searching for the data-id value in the restored elements.
    The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for

    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

    • Event handling logic
    • Timing of manipulating DOM elements
    • Inline script code
    • Bookmarking the cache URL when no cache exists

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it normally would in the DOM rendering cycle.

    JavaScript Event Hookups

    The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

        $("#btnSearch").click( function() {…});

    that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

        $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

    Timing of manipulating DOM Elements

    Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

    Inline Script Code

    Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

    Bookmarking Cached Urls

    When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
    In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

    Don't use <body> to replace Content

    Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

    Other Approaches

    The approach I've used for this application is kind of specific to the existing server rendered application we're running, and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

    • Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. no checking for a back=true switch). All the logic related to caching is made on the client in this case, as sketched below.
    • Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style and pull data from the server, then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.
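    A minimal sketch of that partial-rendering variant (jQuery, which the article already uses; the "list/items" route is made up for illustration and is not from the Classifieds app):

        // On page load either reuse the cached HTML or fetch the list partial -
        // first render and cache restore now run through the same code path.
        $(function () {
            var cached = sessionStorage.getItem("list_html");
            if (cached) {
                $("#SizingContainer").html(JSON.parse(cached).html);
            } else {
                $.get("list/items", function (html) {
                    $("#SizingContainer").html(html);
                });
            }
        });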
    State of the Session

    SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give them a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

    Resources

    • Overview of localStorage (also applies to sessionStorage)
    • Web Storage Compatibility
    • Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Validating textboxes and checkboxes, then adding the values of those checkboxes

    - by TiTi Nguyen
    I am very new to JavaScript. I am running into a problem and don't know how to solve it. Could you please help? Basically, I want to create some textboxes and checkboxes in a form, then validate those fields and add the values of the checkboxes if they are selected. One of the textboxes asks how many semesters were attended, and there are 3 checkboxes with values of 100, 1000, and 750. Whichever checkbox is selected should have its value multiplied by the number of semesters attended. For example, if the first two checkboxes are selected then totalCost = (100+1000)* semester. Here is my code:

        User Name: <label>User Address: <input type = "text" id ="address" size = "30"/></label> <br/><br/>
        <label> User E-mail address: <input type = "text" id ="email" size = "30"/></label> <br/><br/>
        <label> User Phone number: <input type = "text" id ="phone" size = "30"/></label> <br/><br/>
        <label> User area code: <input type = "text" id ="area" size = "30"/></label> <br/><br/>
        <label> User SSN: <input type = "text" id ="ssn" size = "30"/></label> <br/><br/>
        <label> User Birthday: <input type = "text" id ="birthday" size = "30"/></label> <br/><br/>
        <label> Number of semester attended: <input type = "text" id ="semester" size = "3"/></label> <br/><br/>
        <label><input type="checkbox" id="box_book" value="100"/>Books $100 per semester</label> <br/>
        <label><input type="checkbox" id="box_tuition" value="1000"/>Tuition $1000 per semester</label> <br/>
        <label><input type="checkbox" id="box_room" value="750"/>Room and Board $750 per semester</label> <br/>
        <input type="reset" id="reset"/>
        <input type="submit" id="submit" onclick="checking()"/>
        <p/>
        </form>

        function checking() {
            var name = document.forms["myForm"]["name"].value;
            var address = document.forms["myForm"]["address"].value;
            var email = document.forms["myForm"]["email"].value;
            var atpos = email.indexOf("@");
            var dotpos = email.lastIndexOf(".");
            var phone = document.forms["myForm"]["phone"].value;
            var area = document.forms["myForm"]["area"].value;
            var ssn = document.forms["myForm"]["ssn"].value;
            var birth = document.forms["myForm"]["birthday"].value;
            var semester = document.forms["myForm"]["semester"].value;
            var boxBook = document.forms["myForm"]["box_book"].value;
            var boxTuition = document.forms["myForm"]["box_tuition"].value;
            var boxRoom = document.forms["myForm"]["box_room"].value;
            if (name==null || name=="") { alert("Please fill in your name."); return false; }
            if (address==null || address=="") { alert("Please fill in your address."); return false; }
            if (atpos<1 || dotpos<atpos+2 || dotpos+2>=email.length) { alert("The email (" + email + ") is not a valid e-mail address. Please reenter your email address."); return false; }
            if (phone.length!=10) { alert("Phone number entered in incorrect form. Please reenter phone number in the correct form which contains 10 numbers."); return false; }
            if (area==null || area=="") { alert("Please fill in the area code"); return false; }
            if (ssn.length!=9) { alert("SSN entered in incorrect form. Please reenter SSN."); return false; }
            if (birth==null || birth=="") { alert("Please fill in your date of birth."); return false; }
            if (semester==null || semester=="") { alert("How many semester have you attended?"); return false; }
            if (document.getElementById("box_book").checked == false && document.getElementById("box_tuition").checked == false && document.getElementById("box_room").checked == false) { alert("You must select one of the checkboxes"); return false; }
            if (document.getElementById("box_book").checked == true) { var subcost = boxBook; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_tuition").checked == true) { var subcost = boxBook + boxTuition; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_tuition").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxBook + boxTuition + boxRoom; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_tuition").checked == true) { var subcost = boxTuition; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_tuition").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxTuition + boxRoom; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_room").checked == true) { var subcost = boxRoom; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxBook + boxRoom; var totalcost = subcost * semester; alert ("Your total cost is: $" + totalcost); }
            else return false;
        }

    When I hit the submit button, nothing happens!! Please help.

    Read the article

  • Create Downloadable CSV File from PHP Script

    - by Aphex22
    How would I create a properly formatted, downloadable CSV file from the PHP script at (1.0) below? At the moment the fputcsv function is dumping the unparsed PHP/HTML code into a CSV file, which is incorrect. The downloaded CSV file should contain the columns and rows generated from the code at (1.0), as shown in the image link below. I've tried using the following code at the top of the PHP file:

        // output headers so that the file is downloaded rather than displayed
        header('Content-Type: text/csv; charset=utf-8');
        header('Content-Disposition: attachment; filename=amazon.csv');

        // create a file pointer connected to the output stream
        $output = fopen('php://output', 'w');

        $mysql_hostname = "";
        $mysql_user = "";
        $mysql_password = "";
        $mysql_database = "";
        $bd = mysql_connect($mysql_hostname, $mysql_user, $mysql_password) or die("Could not connect database");
        mysql_select_db($mysql_database, $bd) or die("Could not select database");

        $sql = "select * from product WHERE on_amazon = 'on' AND active = 'on'";
        $result = mysql_query($sql) or die ( mysql_error() );

        // loop over the rows, outputting them
        while ($sql_result = mysql_fetch_assoc($sql))
            fputcsv($output, $sql_result);

    (1.0) The start of the code outputs the column headings for the CSV file:

        // set headers
        echo "item_sku, external_product_id, external_product_id_type, item_name, brand_name,
        manufacturer, product_description, feed_product_type, update_delete, part_number,
        model, standard_price, list_price, currency, quantity, product_tax_code,
        product_site_launch_date, merchant_release_date, restock_date ... <br>";

    And then follows the PHP script for the column values:

        // load all stock
        while ($line = mysql_fetch_assoc($result) ) { ?>
        <?php
        $size_suffix = array ("",'_chain','_con_b','_con_c');
        $arrayLength = count ($size_suffix);
        for($y=0;$y<$arrayLength;$y++) {
            //Possible size array to loop through when checking quantity
            $con_size = array (36,365,37,375,38,385,39,395,40,405,41,415,42,425,43,435,44,445,45,455,46,465,47,475,48,485);
            $arrlength=count($con_size);
            for($x=0;$x<$arrlength;$x++) {
                // check if size is available
                if($line['quantity_c_size_'.$con_size[$x].$size_suffix[$y]] > 0 ) { ?>
                    <!-- item sku -->
                    <?=$line['product_id']?>,
                    <!-- external product id -->
                    <?=$line['code_size_'.$con_size[$x].'']?>,
                    <?
                    // external product id type
                    $barcode = $line['code_size_'.$con_size[$x]];
                    $trim_barcode = trim($barcode);
                    $count = strlen($trim_barcode);
                    if ($count == 12) { echo "UPC"; }
                    if ($count == 13) { echo "EAN"; }
                    elseif ($count < 12) { echo " "; }
                    ?>,
                    <!-- item name -->
                    <?=$line['title']?>,
                    <?
                    // brand_name
                    $brand = $line['jys_brand'];
                    echo ucfirst($brand);
                    ?>,
                    <?
                    // manufacturer
                    $brand = $line['jys_brand'];
                    echo ucfirst($brand);
                    ?>,
                    <!-- product description -->
                    <?=preg_replace('/[^\da-z]/i', ' ', $line['amazon_desc']) ?>,
                    <!-- feed product type -->
                    Shoes, , , ,
                    <!-- standard price -->
                    <?=$line['price']?>, ,
                    <!-- currency -->
                    GBP,
                    <!-- quantity -->
                    <?=$line['quantity_size_'.$con_size[$x].$size_suffix[$y]]?>, ,
                    <!-- product site launch date -->
                    <?=$line['added_y']?>-<?=$line['added_m']?>-<?=$line['added_d']?>,
                    <!-- merchant release date -->
                    <?=$line['added_y']?>-<?=$line['added_m']?>-<?=$line['added_d']?>, , , , ,
                    <!-- item package quantity -->
                    1, , , , ,
                    <!-- fulfillment latency -->
                    2,
                    <!-- max aggregate ship quantity -->
                    1, , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
                    <!-- main image url, url1, url2, url3 -->
                    http://www.getashoe.co.uk/full/<?=$line['product_id']?>_1.jpg,
                    http://www.getashoe.co.uk/full/<?=$line['product_id']?>_2.jpg,
                    http://www.getashoe.co.uk/full/<?=$line['product_id']?>_3.jpg,
                    http://www.getashoe.co.uk/full/<?=$line['product_id']?>_4.jpg,
                    , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
                    <!-- heel height -->
                    <?=$line['heel']?>, , , , , , , , , , ,
                    <!-- colour name -->
                    <?=$line['colour']?>,
                    <!-- colour map -->
                    <?
                    $colour = preg_replace('/[()]/i', ' ', $line['colour']);
                    if (preg_match( '/[\/].*/i', $colour)) { echo 'Multicolour'; }
                    if (preg_match( '/off.*/i', $colour)) { echo 'Off-White'; }
                    elseif( preg_match( '/white.*/i', $colour)) { echo 'White'; }
                    elseif( preg_match( '/moro.*/i', $colour)) { echo 'Brown'; }
                    elseif( preg_match( '/morado.*/i', $colour)) { echo 'Purple'; }
                    elseif( preg_match( '/cream.*/i', $colour)) { echo 'Off-White'; }
                    elseif( preg_match( '/pewter.*/i', $colour)) { echo 'Silver'; }
                    elseif( preg_match( '/yellow.*/i', $colour)) { echo 'Yellow'; }
                    elseif( preg_match( '/camel.*/i', $colour)) { echo 'Beige'; }
                    elseif( preg_match( '/navy.*/i', $colour)) { echo 'Blue'; }
                    elseif( preg_match( '/tan.*/i', $colour)) { echo 'Brown'; }
                    elseif( preg_match( '/rainbow.*/i', $colour)) { echo 'Multicolour'; }
                    elseif( preg_match( '/orange.*/i', $colour)) { echo 'Orange'; }
                    elseif( preg_match( '/leopard.*/i', $colour)) { echo 'Multicolour'; }
                    elseif( preg_match( '/red.*/i', $colour)) { echo 'Red'; }
                    elseif( preg_match( '/pink.*/i', $colour)) { echo 'Pink'; }
                    elseif( preg_match( '/purple.*/i', $colour)) { echo 'Purple'; }
                    elseif( preg_match( '/blue.*/i', $colour)) { echo 'Blue'; }
                    elseif( preg_match( '/green.*/i', $colour)) { echo 'Green'; }
                    elseif( preg_match( '/brown.*/i', $colour)) { echo 'Brown'; }
                    elseif( preg_match( '/grey.*/i', $colour)) { echo 'Grey'; }
                    elseif( preg_match( '/black.*/i', $colour)) { echo 'Black'; }
                    elseif( preg_match( '/gold.*/i', $colour)) { echo 'Gold'; }
                    elseif( preg_match( '/silver.*/i', $colour)) { echo 'Silver'; }
                    elseif( preg_match( '/multi.*/i', $colour)) { echo 'Multicolour'; }
                    elseif( preg_match( '/beige.*/i', $colour)) { echo 'Beige'; }
                    elseif( preg_match( '/nude.*/i', $colour)) { echo 'Beige'; }
                    ?>,
                    <!-- size name -->
                    <? echo $con_size[$x];?>,
                    <!-- size map -->
                    <?
                    if ($con_size[$x] == 36) { echo "3 UK"; }
                    elseif ($con_size[$x] == 37 ) { echo "4 UK"; }
                    elseif ($con_size[$x] == 38) { echo "5 UK"; }
                    elseif ($con_size[$x] == 39 ) { echo "6 UK"; }
                    elseif ($con_size[$x] == 40 ) { echo "7 UK"; }
                    elseif ($con_size[$x] == 41) { echo "8 UK"; }
                    elseif ($con_size[$x] == 42) { echo "9 UK"; }
                    elseif ($con_size[$x] == 43) { echo "10 UK"; }
                    elseif ($con_size[$x] == 44 ) { echo "11 UK"; }
                    elseif ($con_size[$x] == 45 ) { echo "12 UK"; }
                    elseif ($con_size[$x] == 46 ) { echo "13 UK"; }
                    elseif ($con_size[$x] == 47 ) { echo "14 UK"; }
                    elseif ($con_size[$x] == 48 ) { echo "15 UK"; }
                    elseif ($con_size[$x] == 365) { echo "3.5 UK"; }
                    elseif ($con_size[$x] == 375 ) { echo "4.5 UK"; }
                    elseif ($con_size[$x] == 385) { echo "5.5 UK"; }
                    elseif ($con_size[$x] == 395 ) { echo "6.5 UK"; }
                    elseif ($con_size[$x] == 405 ) { echo "7.5 UK"; }
                    elseif ($con_size[$x] == 415) { echo "8.5 UK"; }
                    elseif ($con_size[$x] == 425) { echo "9.5 UK"; }
                    elseif ($con_size[$x] == 435) { echo "10.5 UK"; }
                    elseif ($con_size[$x] == 445 ) { echo "11.5 UK"; }
                    elseif ($con_size[$x] == 455 ) { echo "12.5 UK"; }
                    elseif ($con_size[$x] == 465 ) { echo "13.5 UK"; }
                    elseif ($con_size[$x] == 475 ) { echo "14.5 UK"; }
                    elseif ($con_size[$x] == 485 ) { echo "15.5 UK"; }
                    ?>,
                    <br>
        <?
                // finish checking if size is available
                }
            }
        }
        ?>

    I've included an image of how the CSV file should appear: https://i.imgur.com/ZU3IFer.png. Any help would be great.
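    The underlying fix is to stop echoing comma-separated markup and instead hand fputcsv an array per row - it handles the quoting and delimiting itself. A minimal sketch built on the question's own $result loop (the header row is truncated here for brevity and the row fields are illustrative, not the full Amazon column list):

        // send the download headers before any other output
        header('Content-Type: text/csv; charset=utf-8');
        header('Content-Disposition: attachment; filename=amazon.csv');

        $output = fopen('php://output', 'w');

        // header row as an array, not an echoed string
        fputcsv($output, array('item_sku', 'external_product_id', 'item_name'));

        while ($line = mysql_fetch_assoc($result)) {   // note: $result, not $sql
            $row = array(
                $line['product_id'],
                $line['code_size_36'],    // illustrative: use the size chosen in your loop
                $line['title'],
            );
            fputcsv($output, $row);
        }
        fclose($output);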

    Read the article

  • File transfer using .NET sockets, transferring problem

    - by Sergei
    I have a client and a server; the client sends a file to the server. When I transfer files on my own computer (locally) everything is OK (tried sending a file over 700 MB). When I try to send a file over the Internet to my friend, at the end of sending an error appears on the server: "Input string is not in correct format". The error appears in this expression:

        fSize = Convert::ToUInt64(tokenes[0]);

    and I don't understand why it appears. The file should be transferred, and then the server should wait for another transfer. PS: sorry for so much code, but I want to find a solution.

        private: void CreateServer()
        {
            try {
                IPAddress ^ipAddres = IPAddress::Parse(ipAdress);
                listener = gcnew System::Net::Sockets::TcpListener(ipAddres, port);
                listener->Start();
                clientsocket = listener->AcceptSocket();
                bool keepalive = true;
                array<wchar_t,1> ^split = gcnew array<wchar_t>(1){ '\0' };
                array<wchar_t,1> ^split2 = gcnew array<wchar_t>(1){ '|' };
                statusBar1->Text = "Connected";
                while (keepalive)
                {
                    array<Byte>^ size1 = gcnew array<Byte>(1024);
                    clientsocket->Receive(size1);
                    System::String ^notSplited = System::Text::Encoding::GetEncoding(1251)->GetString(size1);
                    array<String^> ^ tokenes = notSplited->Split(split2);
                    System::String ^fileName = tokenes[1]->ToString();
                    statusBar1->Text = "Receiving file";
                    unsigned long fSize = 0;
                    // IN THIS EXPRESSION THE ERROR APPEARS
                    fSize = Convert::ToUInt64(tokenes[0]);
                    if (!Directory::Exists("Received"))
                        Directory::CreateDirectory("Received");
                    System::String ^path = "Received\\" + fileName;
                    while (File::Exists(path))
                    {
                        int dotPos = path->LastIndexOf('.');
                        if (dotPos == -1) { path += "[1]"; }
                        else { path = path->Insert(dotPos, "[1]"); }
                    }
                    FileStream ^fs = gcnew FileStream(path, FileMode::CreateNew, FileAccess::Write);
                    BinaryWriter ^f = gcnew BinaryWriter(fs);
                    // bytes received
                    unsigned long processed = 0;
                    pBarFilesTr->Visible = true;
                    pBarFilesTr->Minimum = 0;
                    pBarFilesTr->Maximum = (int)fSize;
                    // Set the initial value of the ProgressBar.
                    pBarFilesTr->Value = 0;
                    pBarFilesTr->Step = 1024;
                    // loop to receive the file
                    array<Byte>^ buffer = gcnew array<Byte>(1024);
                    while (processed < fSize)
                    {
                        if ((fSize - processed) < 1024)
                        {
                            int bytes;
                            array<Byte>^ buf = gcnew array<Byte>(1024);
                            bytes = clientsocket->Receive(buf);
                            if (bytes != 0)
                            {
                                f->Write(buf, 0, bytes);
                                processed = processed + (unsigned long)bytes;
                                pBarFilesTr->PerformStep();
                            }
                            break;
                        }
                        else
                        {
                            int bytes = clientsocket->Receive(buffer);
                            if (bytes != 0)
                            {
                                f->Write(buffer, 0, 1024);
                                processed = processed + 1024;
                                pBarFilesTr->PerformStep();
                            }
                            else break;
                        }
                    }
                    statusBar1->Text = "File was received";
                    array<Byte>^ buf = gcnew array<Byte>(1);
                    clientsocket->Send(buf, buf->Length, SocketFlags::None);
                    f->Close();
                    fs->Close();
                    SystemSounds::Beep->Play();
                }
            }
            catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); }
            catch(System::Exception ^es) { MessageBox::Show(es->ToString()); }
        }

        private: void CreateClient()
        {
            clientsock = gcnew System::Net::Sockets::TcpClient(ipAdress, port);
            ns = clientsock->GetStream();
            sr = gcnew StreamReader(ns);
            statusBar1->Text = "Connected";
        }

        private: void Send()
        {
            try {
                OpenFileDialog ^openFileDialog1 = gcnew OpenFileDialog();
                System::String ^filePath = "";
                System::String ^fileName = "";
                // file chooser dialog
                if (openFileDialog1->ShowDialog() == System::Windows::Forms::DialogResult::OK)
                {
                    filePath = openFileDialog1->FileName;
                    fileName = openFileDialog1->SafeFileName;
                }
                else
                {
                    MessageBox::Show("You must select a file", "Error", MessageBoxButtons::OK, MessageBoxIcon::Exclamation);
                    return;
                }
                statusBar1->Text = "Sending file";
                NetworkStream ^writerStream = clientsock->GetStream();
                System::Runtime::Serialization::Formatters::Binary::BinaryFormatter ^format = gcnew System::Runtime::Serialization::Formatters::Binary::BinaryFormatter();
                array<Byte>^ buffer = gcnew array<Byte>(1024);
                FileStream ^fs = gcnew FileStream(filePath, FileMode::Open);
                BinaryReader ^br = gcnew BinaryReader(fs);
                // file size
                unsigned long fSize = (unsigned long)fs->Length;
                // transfer file size + name
                bFSize = Encoding::GetEncoding(1251)->GetBytes(Convert::ToString(fs->Length + "|" + fileName + "|"));
                writerStream->Write(bFSize, 0, bFSize->Length);
                // status bar
                pBarFilesTr->Visible = true;
                pBarFilesTr->Minimum = 0;
                pBarFilesTr->Maximum = (int)fSize;
                pBarFilesTr->Value = 0;  // Set the initial value of the ProgressBar.
                pBarFilesTr->Step = 1024;
                // bytes transferred
                unsigned long processed = 0;
                int bytes = 1024;
                // transfer loop
                while (processed < fSize)
                {
                    if ((fSize - processed) < 1024)
                    {
                        bytes = (int)(fSize - processed);
                        array<Byte>^ buf = gcnew array<Byte>(bytes);
                        br->Read(buf, 0, bytes);
                        writerStream->Write(buf, 0, buf->Length);
                        pBarFilesTr->PerformStep();
                        processed = processed + (unsigned long)bytes;
                        break;
                    }
                    else
                    {
                        br->Read(buffer, 0, 1024);
                        writerStream->Write(buffer, 0, buffer->Length);
                        pBarFilesTr->PerformStep();
                        processed = processed + 1024;
                    }
                }
                array<Byte>^ bufsss = gcnew array<Byte>(100);
                writerStream->Read(bufsss, 0, bufsss->Length);
                statusBar1->Text = "File was sent";
                btnSend->Enabled = true;
                fs->Close();
                br->Close();
                SystemSounds::Beep->Play();
                newThread->Abort();
            }
            catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); }
        }

    UPDATE: OK, I can add a check that clientsocket->Receive(size1) equals zero, but why does it begin receiving data again at the end of a transfer? UPDATE: After adding this check the problem remains, and WinRAR reports "unexpected end of file" when opening the received archive. UPDATE: http://img153.imageshack.us/img153/3760/erorr.gif - I think it continues receiving some bytes from the client (that remain in the stream), but then why does the loop while (processed < fSize) exist?
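    One plausible cause: TCP is a stream, not a sequence of messages, so the single Receive(size1) of 1024 bytes can deliver the "size|name|" header split across reads, or deliver it together with the first file bytes - either way tokenes[0] stops parsing as a number, and leftover bytes desynchronize the following transfer. A sketch of reading exactly the header before the file loop (C++/CLI, reusing the entry's names; byte-at-a-time for clarity rather than efficiency, and assuming an ASCII-safe file name - reuse Encoding 1251 for full fidelity):

        // Read until both '|' delimiters of "size|name|" have arrived,
        // so no file bytes are consumed by the header read.
        array<Byte>^ one = gcnew array<Byte>(1);
        System::Text::StringBuilder ^header = gcnew System::Text::StringBuilder();
        int bars = 0;
        while (bars < 2)
        {
            if (clientsocket->Receive(one, 1, SocketFlags::None) == 0)
                break;  // peer closed the connection
            wchar_t c = (wchar_t)one[0];
            if (c == '|') bars++;
            header->Append(c);
        }
        array<String^> ^tokenes = header->ToString()->Split(gcnew array<wchar_t>{ '|' });
        unsigned long long fSize = Convert::ToUInt64(tokenes[0]);
        System::String ^fileName = tokenes[1];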

    Read the article

  • Java homework help, Error <identifier> expected

    - by user2900126
    Help with Java homework. Below is my assignment and the code I've tried, but when I try to compile it I keep getting errors I can't seem to find solutions to. The error says <identifier> expected for line 67: public static void ().

    Assignment brief

    To write a simple Java class Mobile that models a mobile phone. Details: the information stored about each mobile phone will include
    • its type, e.g. "Sony Ericsson X90" or "Samsung Galaxy S";
    • its screen size in inches (you may assume this is a whole number from the scale 3 to 5 inclusive);
    • its memory card capacity in gigabytes (you may assume this is a whole number);
    • the name of its present service provider (you may assume this is a single line of text);
    • the type of contract with the service provider (you may assume this is a single line of text);
    • its camera resolution in megapixels (you should not assume this is a whole number);
    • the percentage of charge left on the phone, e.g. a fully charged phone will have a charge of 100 (you may assume this is a whole number);
    • whether the phone has GPS or not.

    Your class will have fields corresponding to these attributes. Start by opening BlueJ, creating a new project called myMobile which has a class Mobile, and set up the fields that you need. Next you will need to write a constructor for the class. Assume that each phone is manufactured by creating an object and specifying its type, its screen size, its memory card capacity, its camera resolution and whether it has GPS or not. Therefore you will need a constructor that allows you to pass arguments to initialise these five attributes. Other fields should be set to appropriate default values. You may assume that a new phone comes fully charged. When the phone is sold to its owner, you will need to set the service provider and type of contract with that provider, so you will need mutator methods
    • setProvider() - to set the service provider;
    • setContractType() - to set the type of contract.
    These methods will be used when the phone's provider is changed. You should also write a mutator method chargeUp() which simulates fully charging the phone. To obtain information about your Mobile object you should write accessor methods corresponding to four of its fields:
    • getType() - which returns the type of mobile;
    • getProvider() - which returns the present service provider;
    • getContractType() - which returns its type of contract;
    • getCharge() - which returns its remaining charge.
    Also write an accessor method printDetails() to print, to the terminal window, a report about the phone, e.g. "This mobile phone is a Sony Ericsson X90 with service provider BigAl and type of contract PAYG. At present it has 30% of its battery charge remaining." Check that the new method works correctly by, for example,
    • creating a Mobile object and setting its fields;
    • calling printDetails() and checking the report corresponds to the details you have just given the mobile;
    • changing the service provider and contract type by calling setProvider() and setContractType();
    • calling printDetails() and checking the report now prints out the new details.
    Challenging exercises:
    • Write a mutator method switchedOnFor() which simulates using the phone for a specified period. You may assume the phone loses 1% of its charge for each hour that it is switched on.
    • Write an accessor method checkCharge() which checks the phone's remaining charge. If this charge has a value less than 25%, then this method returns a string containing the message "Be aware that you will soon need to re-charge your phone", otherwise it returns the string "Your phone charge is sufficient".
    • Write a method changeProvider() which simulates changing the provider (and presumably also the type of service contract).
    Finally you may add up to four additional fields, with appropriate methods, that might be required in a more detailed model.

    Here is the code I've tried:

        /**
         * to write a simple java class Mobile that models a mobile phone.
         *
         * @author (Lewis Burte-Clarke)
         * @version (14/10/13)
         */
        public class Mobile {
            // type of phone
            private String phonetype;
            // size of screen in inches
            private int screensize;
            // memory card capacity
            private int memorycardcapacity;
            // name of present service provider
            private String serviceprovider;
            // type of contract with service provider
            private int typeofcontract;
            // camera resolution in megapixels
            private int cameraresolution;
            // the percentage of charge left on the phone
            private int checkcharge;
            // whether the phone has GPS or not
            private String GPS;
            // instance variables - replace the example below with your own
            private int x;

            // The constructor method
            public Mobile(String mobilephonetype, int mobilescreensize,
                    int mobilememorycardcapacity, int mobilecameraresolution,
                    String mobileGPS, String newserviceprovider) {
                this.phonetype = mobilephonetype;
                this.screensize = mobilescreensize;
                this.memorycardcapacity = mobilememorycardcapacity;
                this.cameraresolution = mobilecameraresolution;
                this.GPS = mobileGPS;
                // you do not use these ones during instantiation; you can remove them
                // if you do not need them, or assign them some default values
                //this.serviceprovider = newserviceprovider;
                //this.typeofcontract = 12;
                //this.checkcharge = checkcharge;
                Mobile samsungPhone = new Mobile("Samsung", "1024", "2", "verizon", "8", "GPS");
                1024 = screensize;
                2 = memorycardcapacity;
                8 = resolution;
                GPS = gps;
                "verizon" = serviceprovider;
                //typeofcontract = 12;
                //checkcharge = checkcharge;
            }

            // A method to display the state of the object to the screen
            public void displayMobileDetails() {
                System.out.println("phonetype: " + phonetype);
                System.out.println("screensize: " + screensize);
                System.out.println("memorycardcapacity: " + memorycardcapacity);
                System.out.println("cameraresolution: " + cameraresolution);
                System.out.println("GPS: " + GPS);
                System.out.println("serviceprovider: " + serviceprovider);
                System.out.println("typeofcontract: " + typeofcontract);
            }

            /**
             * The mymobile class implements an application that
             * simply displays "new Mobile!" to the standard output.
             */
            public class mymobile {
                public static void main(String[] args) {
                    System.out.println("new Mobile!"); // Display the string.
                }
            }

            public static void buildPhones() {
                Mobile Samsung = new Mobile("Samsung", "3.0", "4gb", "8mega pixels", "GPS");
                Mobile Blackberry = new Mobile("Blackberry", "3.0", "4gb", "8mega pixels", "GPS");
                Samsung.displayMobileDetails();
                Blackberry.displayMobileDetails();
            }

            public static void main(String[] args) {
                buildPhones();
            }
        }

    Any answers, replies and help would be greatly appreciated as I am really lost!
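    For reference, <identifier> expected is what javac reports for a method header with no name - the stray public static void () - and statements like 1024 = screensize; are also invalid, since assignments go right to left and a literal can't be a target. A hedged sketch of a constructor and main that compile (values taken from the question; fields trimmed for brevity, not the full assignment):

        public class Mobile {
            private String phonetype;
            private int screensize;
            private int memorycardcapacity;
            private int cameraresolution;
            private String GPS;

            public Mobile(String type, int screen, int memory, int resolution, String gps) {
                this.phonetype = type;              // assignments run target = value
                this.screensize = screen;
                this.memorycardcapacity = memory;
                this.cameraresolution = resolution;
                this.GPS = gps;
            }

            public static void main(String[] args) {
                // numeric parameters take int literals, not strings like "1024"
                Mobile samsung = new Mobile("Samsung", 3, 4, 8, "GPS");
                System.out.println("new Mobile!");
            }
        }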

    Read the article

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
I'm trying to implement some functions to manipulate a linked list. The implementation is a template (typename T) and the class is 'List', which includes a 'head' pointer and also a struct:

struct Node {       // the node in a linked list
    T* data;        // pointer to actual data, operations in T
    Node* next;     // pointer to a Node
};

Since it is a template, and 'T' can be any data, how do I go about checking the data of a list to see if it matches the data input into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer:

bool retrieve(T target, T*& ptr);  // This is the prototype we need to use for the project

"bool retrieve: similar to remove, but not removed from list. If there are duplicates in the list, the first one encountered is retrieved. Second parameter is unreliable if return value is false. E.g.,"

Employee target("duck", "donald");
success = company1.retrieve(target, oneEmployee);
if (success) {
    cout << "Found in list: " << *oneEmployee << endl;
}

And the function is called like this: company4.retrieve(emp3, oneEmployee), so that when you cout *oneEmployee, you'll get the data of that pointer (in this case the data is of type Employee). (Also, this is assuming all data types have the appropriate overloaded operators.)

I hope this makes sense so far, but my issue is in comparing the data in the parameter and the data while going through the list. (The data types that we use all include overloads for equality operators, so oneData == twoData is valid.) This is what I have so far:

template <typename T>
bool List<T>::retrieve(T target, T*& ptr) {
    List<T>::Node* dummyPtr = head;  // point dummy pointer to what the list's head points to
    for (;;) {
        if (*dummyPtr->data == target) {  // EDIT: it now compiles, but it breaks here and I get an Access Violation error.
            ptr = dummyPtr->data;         // set the parameter pointer to the dummy pointer
            return true;                  // return true
        } else {
            dummyPtr = dummyPtr->next;    // else, move to the next data node
        }
    }
    return false;
}

Here is the implementation for the Employee class:

//-------------------------- constructor -----------------------------------
Employee::Employee(string last, string first, int id, int sal) {
    idNumber = (id >= 0 && id <= MAXID ? id : -1);
    salary = (sal >= 0 ? sal : -1);
    lastName = last;
    firstName = first;
}

//-------------------------- destructor ------------------------------------
// Needed so that memory for strings is properly deallocated
Employee::~Employee() { }

//---------------------- copy constructor -----------------------------------
Employee::Employee(const Employee& E) {
    lastName = E.lastName;
    firstName = E.firstName;
    idNumber = E.idNumber;
    salary = E.salary;
}

//-------------------------- operator= ---------------------------------------
Employee& Employee::operator=(const Employee& E) {
    if (&E != this) {
        idNumber = E.idNumber;
        salary = E.salary;
        lastName = E.lastName;
        firstName = E.firstName;
    }
    return *this;
}

//----------------------------- setData ------------------------------------
// set data from file
bool Employee::setData(ifstream& inFile) {
    inFile >> lastName >> firstName >> idNumber >> salary;
    return idNumber >= 0 && idNumber <= MAXID && salary >= 0;
}

//------------------------------- < ----------------------------------------
// < defined by value of name
bool Employee::operator<(const Employee& E) const {
    return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName);
}

//------------------------------- <= ----------------------------------------
// <= defined by value of name
bool Employee::operator<=(const Employee& E) const {
    return *this < E || *this == E;
}

//------------------------------- > ----------------------------------------
// > defined by value of name
bool Employee::operator>(const Employee& E) const {
    return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName);
}

//------------------------------- >= ----------------------------------------
// >= defined by value of name
bool Employee::operator>=(const Employee& E) const {
    return *this > E || *this == E;
}

//----------------- operator == (equality) ----------------
// if name of calling and passed object are equal,
// return true, otherwise false
bool Employee::operator==(const Employee& E) const {
    return lastName == E.lastName && firstName == E.firstName;
}

//----------------- operator != (inequality) ----------------
// return opposite value of operator==
bool Employee::operator!=(const Employee& E) const {
    return !(*this == E);
}

//------------------------------- << ---------------------------------------
// display Employee object
ostream& operator<<(ostream& output, const Employee& E) {
    output << setw(4) << E.idNumber << setw(7) << E.salary << " " << E.lastName << " " << E.firstName << endl;
    return output;
}

I will include a check for NULL pointer, but I just want to get this working and will test it on a list that includes the data I am checking. Thanks to whoever can help, and as usual, this is for a course so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
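The access violation is consistent with the loop running off the end of the list: when the target is absent, dummyPtr eventually becomes NULL and *dummyPtr->data dereferences it. A sketch of the traversal with the missing guard, keeping the required prototype and assuming Node and head as declared above:

template <typename T>
bool List<T>::retrieve(T target, T*& ptr) {
    Node* current = head;                 // start at the front of the list
    while (current != NULL) {             // stop before dereferencing a null node
        if (current->data != NULL && *current->data == target) {
            ptr = current->data;          // hand back a pointer to the stored data
            return true;                  // first match wins; later duplicates ignored
        }
        current = current->next;          // advance; becomes NULL past the tail
    }
    return false;                         // ptr is unreliable when we return false
}

With this guard the traversal can never walk past the tail, and retrieve(emp3, oneEmployee) simply returns false when no equal Employee is stored.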

    Read the article

  • SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password

    - by pinaldave
In the last four days (April 21-24), I received calls from friends who told me that they had been getting strange emails from me. To my surprise, I had not sent them any. I was not worried until my wife complained that she could not find one of the very important folders containing our daughter's photos on our shared drive. This alarmed me, so I started searching through my computer's folders. Again, please note that I am by no means a security expert. I scanned my entire computer for viruses and spyware and, strangely, found nothing. I tried to think of what could have caused this. I suddenly realized that there had been a power outage in my area for about two hours during the days I have mentioned. Back then, my wireless router needed to be reset, and so I reset it. I had set up my WPA-PSK [TKIP] + WPA2-PSK [AES] security very well, but my key was very simple ('SQLAuthority1'), and I had never thought of changing it. (It is now replaced with a very complex one.) While checking the Attached Devices list, I found that there was another very strange computer name and IP attached to my network. As soon as I discovered the strange device attached to my network, I shut down my local network. Afterwards, I reconfigured my wireless router with a more complex security key. Since creating the complex password, I have noticed that the user no longer connects to my machine. Subsequently, I figured out that I could also set up an Access Control List, and I added my own networked computers to that list as well. When I tried to connect from an external laptop which was not in the list but had a valid security key, I was not able to access the network or connect to it. I was also unable to connect using remote desktop, which I take as a good sign. If you have received any nasty emails from me (from my gmail account) during the afore-mentioned days, I want to apologize. I am already paying for my negligence in not setting a complex password, by way of losing the important photos of my daughter. I have already checked with my client, whose password I had saved in SSMS, and there was no issue at all. In fact, I have decided never to leave any saved production server password in my SSMS. Here is the tip SQL SERVER – Clear Drop Down List of Recent Connection From SQL Server Management Studio to clean them out. After doing all this, I feel safe right now; however, I believe that safety is often an illusion. I need your help and advice if there is anything more I can do to stop unauthorized access. I am seeking advice and help through your comments. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1) Copyright (c) 2010, Oracle Corporation. All Rights Reserved. In this Document  Purpose  Last Review Date  Instructions for the Reader  Troubleshooting Details     1. Scope and Application      2. Definitions and Classifications     3. How to Use This Guide     4. Basic AQ Propagation Troubleshooting     5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages     6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment     7. Performance Issues  References Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2Information in this document applies to any platform. Purpose This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues. Last Review Date December 20, 2010 Instructions for the Reader A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting. Troubleshooting Details 1. Scope and Application This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution.It can also be used a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into five parts: Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide) Section 3: How to Use this Guide (to be used as a start part for determining the scope of the problem and what sections to consult) Section 4. Basic AQ propagation troubleshooting (applies to both AQ propagation of user enqueued and dequeued messages as well as Oracle Streams propagations) Section 5. Additional troubleshooting steps for AQ propagation of user enqueued and dequeued messages Section 6. Additional troubleshooting steps for Oracle Streams propagation Section 7. Performance issues 2. Definitions and Classifications Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered. 2.1. What Type of Propagation is Being Used? 2.1.1. Buffered Messaging For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent. 2.1.2. Propagation mode - queue-to-dblink vs queue-to-queue As of 10.2, an AQ propagation can also be defined as queue-to-dblink, or queue-to-queue: queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. 
A single propagation schedule is used to propagate messages to all subscribing queues, hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database.

queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control over the propagation schedule for message delivery. This propagation mode also supports transparent failover when propagating to a destination Oracle RAC system: with queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different.

The default is queue-to-dblink. To verify whether queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked. See the following note for a method to switch between the two modes:
Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink

2.1.3. Combined Capture and Apply (CCA) for Streams
In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10
SELECT CAPTURE_NAME, DECODE(OPTIMIZATION, 0, 'No', 'Yes') OPTIMIZATION
FROM V$STREAMS_CAPTURE;

Also, see the following note:
Document 463820.1 Streams Combined Capture and Apply in 11g

2.2. Queue Table Compatibility
There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility:
8.0 - earliest version, deprecated in 10.2 onwards
8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information
10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0

2.3. Single vs Multiple Consumer Queue Tables
If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible.

3. How to Use This Guide
3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow?
Run the following query on the source database for the propagation (assuming that it is running):

select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>';

If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7.

3.2. Propagation Between Persistent User-Created Queues
See Sections 4 and 5 (and optionally Section 7 if performance is an issue).

3.3.
Propagation Between Buffered User-Created Queues
See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

3.4. Propagation between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)
See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

3.5. Propagation between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)
Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

3.6. Messaging Gateway Propagations
This note does not apply to Messaging Gateway propagations.

4. Basic AQ Propagation Troubleshooting
4.1. Double-check Your Code
Make sure that you are consistent in your usage of the database link names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

4.2. Verify that Job Queue Processes are Running
4.2.1. Versions 10.2 and Lower - DBA_JOBS Package
For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package. The propagation is performed by job queue background processes, so we need to verify that there are sufficient processes available for the propagation. We should have at least 4 job queue processes running, and preferably more, depending on the number of other jobs running in the database. Note that for AQ-specific work, AQ will only ever use half of the job queue processes available.
An issue caused by an inadequate job queue processes parameter setting is described in the following note:
Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

4.2.1.1. Job Queue Processes in Initialization Parameter File
The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

4.2.1.2. Job Queue Processes in Memory
The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.1.3. OS PIDs Corresponding to Job Queue Processes
Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

and these SPIDs can be used to check at the operating system level that they exist.
In 8i a job queue process will have a name similar to ora_snp1_<instance_name>. In 9i onwards you will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.2.2. Version 11.1 and Above - Oracle Scheduler
In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created.
If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:
Document 1083608.1 11g Streams and Oracle Scheduler

4.2.2.1. Job Queue Processes in Initialization Parameter File
The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

To set the JOB_QUEUE_PROCESSES parameter to its default value, run:

connect / as sysdba
alter system reset JOB_QUEUE_PROCESSES;

and then bounce the instance.

4.2.2.2. Job Queue Processes in Memory
The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.2.3. OS PIDs Corresponding to Job Queue Processes
Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';

and these SPIDs can be used to check at the operating system level that they exist.
You will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.3. Check the Alert Log and Any Associated Trace Files
The first place to check for propagation failures is the alert logs at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails, it will always write an error stack to the alert log. This error stack will also be written to a job queue process trace file, which is written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and to the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing, which means the problem could be with the setup of the schedule. In the example below, the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem.
Thu Feb 14 10:40:05 2002
Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
ORA-04052: error occurred when looking up Remote object [email protected]: error occurred at recursive SQL level 4
ORA-02068: following severe error from SHANE816
ORA-03114: not connected to ORACLE
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
ORA-06512: at "SYS.DBMS_AQADM", line 548
ORA-06512: at line 1

Other potential errors that may be written to the alert log can be found in the following notes:
Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
Document 846297.1 AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [kghsstream_copyn] (10.2, 11.1)
Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
Document 118884.1 How to unschedule a propagation schedule stuck in pending state
Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

4.3.1. Errors Related to Incorrect Network Configuration
The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:
- ORA-12154: TNS:could not resolve service name
- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
- ORA-12514: TNS:listener could not resolve SERVICE_NAME
- ORA-12541: TNS:no listener

4.4. Check the Database Links Exist and are Functioning Correctly
For schedules to remote databases, confirm the database link exists via:
SQL> col DBLINK for a45
SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
  2  from DBA_QUEUE_SCHEDULES
  3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

QNAME                          DBLINK
------------------------------ ---------------------------------------------
MY_QUEUE                       ORCL102B.WORLD

Connect as the owner of the link and select across it to verify it works and connects to the database we expect, i.e.

select * from ALL_QUEUES@ORCL102B.WORLD;

You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

4.5. Has Propagation Been Correctly Scheduled?
Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly.

SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      DESTINATION        S  PROCESS
------- ---------- ------------------ -  -------
AQADM   MULTIPLEQ  AQ$_LOCAL          N  J000

AQ$_LOCAL in the destination column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation was to a remote (different) database, a database link will be in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify if a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      LAST_RUN_DATE         NEXT_RUN_DATE
------- ---------- --------------------- ---------------------
AQADM   MULTIPLEQ  13-FEB-2002 13:18:57  13-FEB-2002 13:20:30

In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call. In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
The columns are more or less the same as DBA_QUEUE_SCHEDULES so you just need to recognize the job and verify that it exists. SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';JOB WHAT---- ----------------- 720 next_date := sys.dbms_aqadm.aq$_propaq(job); For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead: SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';JOB_NAME------------------------------AQ_JOB$_41 If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely. 4.6. Is the Schedule Executing but Failing to Complete? Run the following query: SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;FAILURES LAST_ERROR_MSG------------ -----------------------1 ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueingORA-02063: preceding line from SHANE816 The failures column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?If a schedule is active and no errors are being reported then the source queue may not have any messages to be propagated. 4.7. Do the Propagation Notification Queue Table and Queue Exist? Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped and the corresponding queue table never be dropped. 
10g and belowThe propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist contact Oracle Support. SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES2 where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';QUEUE_TABLE------------------------------AQ$_PROP_TABLE_1SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED2 from DBA_QUEUES where owner='SYS'3 and QUEUE_TABLE like '%PROP_TABLE%';NAME ENQUEUE DEQUEUE------------------------------ ------- -------AQ$_PROP_NOTIFY_1 YES YESAQ$_AQ$_PROP_TABLE_1_E NO NO If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.11g and aboveThe propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If they do not exist, contact Oracle Support. SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES2 where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';QUEUE_TABLE------------------------------AQ_PROP_TABLESQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED2 from DBA_QUEUES where owner='SYS'3 and QUEUE_TABLE like '%PROP_TABLE%';NAME ENQUEUE DEQUEUE------------------------------ ------- -------AQ_PROP_NOTIFY YES YESAQ$_AQ_PROP_TABLE_E NO NO If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_E should not be enabled for enqueue or dequeue. 4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing? Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue: SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';DESTINATION-----------------------------------------------------------------------------"AQADM"."INQ"@M2V102.ESSQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from [email protected];OWNER NAME ENQUEUE DEQUEUE-------- ------ ----------- -----------AQADM INQ YES YES 4.9. Do the Target and Source Database Charactersets Differ? If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between the databases with the same characterset or it is only a particular combination of charactersets which causes a problem. 4.10. Check the Queue Table Type Agreement Propagation is not possible between queue tables which have types that differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on. 
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'.For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work, unless the queues are Oracle Streams ANYDATA queues.See the following notes for issues caused by lack of type agreement:Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueDocument 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Chararacter Length Semantics in the ADT 4.11. Enable Propagation Tracing 4.11.1. System Level This is set it in the init.ora/spfile as follows: event="24040 trace name context forever, level 10" and restart the instanceThis event cannot be set dynamically with an alter system command until version 10.2: SQL> alter system set events '24040 trace name context forever, level 10'; To unset the event: SQL> alter system set events '24040 trace name context off'; Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created. 4.11.2. Attaching to a Specific Process We can also attach to an existing job queue processes that is running a propagation schedule and trace it individually using the oradebug utility, as follows:10.2 and below connect / as sysdbaselect p.SPID, p.PROGRAM from v$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 11g connect / as sysdbacol PROGRAM for a30select p.SPID, p.PROGRAM, j.JOB_NAMEfrom v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 4.11.3. Further Tracing The previous tracing steps only trace the job queue process executing the propagation on the source. 
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database.These following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information.In order to identify the propagation receiver process you need to execute the query as a user with privileges to access the v$ views in both the local and remote databases so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link.The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.10.2 and below - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from v$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 10.2 and below - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=sr.PROCESS; 11g - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 11g - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=sr.PROCESS;   5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages 5.1. Check the Privileges of All Users Involved Ensure that the owner of the database link has the necessary privileges on the aq packages. SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;TABLE_NAME PRIVILEGE------------------------------ ----------------------------------------DBMS_LOCK EXECUTEDBMS_AQ EXECUTEDBMS_AQADM EXECUTEDBMS_AQ_BQVIEW EXECUTEQT52814_BUFFER SELECT Note that when queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table. SQL> select * from USER_ROLE_PRIVS;USERNAME GRANTED_ROLE ADM DEF OS_------------------------------ ------------------------------ ---- ---- ---AQ_USER1 AQ_ADMINISTRATOR_ROLE NO YES NOAQ_USER1 CONNECT NO YES NOAQ_USER1 RESOURCE NO YES NO It is good practice to configure central AQ administrative user. All admin and processing jobs are created, executed and administered as this user. 
This configuration is not mandatory however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved. Privileges for an AQ Administrative user Execute on DBMS_AQADM Execute on DBMS_AQ Granted the AQ_ADMINISTRATOR_ROLE Privileges for an AQ user Execute on DBMS_AQ Execute on the message payload Enqueue privileges on the remote queue Dequeue privileges on the originating queue Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to login to the destination through the database link has been granted privileges to use AQ. 5.2. Verify Queue Payload Types AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify if the source and destination's payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking will be stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier OID of the source queue and the address database link of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call made on the source database can verify whether we can propagate between the source and the destination queue tables. connect aq_user1/[email protected] serverout onDECLARErc_value number;BEGINDBMS_AQADM.VERIFY_QUEUE_TYPES(src_queue_name => 'AQ_USER1.Q_1', dest_queue_name => 'AQ_USER2.Q_2',destination => 'dbl_aq_user2.es',rc => rc_value);dbms_output.put_line('rc_value code is '||rc_value);END;/ If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types the following sql can be used to extract the DDL for a specific type with' %' changed appropriately on the source and target. This can then be compared for the source and target. SET LONG 20000 set pagesize 50 EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE',false); SELECT DBMS_METADATA.GET_DDL('TYPE',t.type_name) from user_types t WHERE t.type_name like '%'; EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT'); 5.3. Check Message State and Destination The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the meta-data associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table). 
SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

QUEUE_TABLE
--------------------
MULTIPLEQTABLE

For a small amount of messages in a multiple-consumer queue table, the following query can be run:

SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

MSG_STATE  CONSUMER_NAME  ADDRESS
---------- -------------- --------------------
READY      AQUSER2        AQADM.INQ@M2V102.ES
READY      AQUSER1
READY      AQUSER3        AQADM.INQ

In this example we see 2 messages ready to be propagated to other queues and 1 that is not. If the address column is blank, the message is not scheduled for propagation and can only be dequeued from the queue upon which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the address column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES). This demonstrates that the message should be propagated to a queue at a remote database. The third row does not include a database link, so it will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.

A more realistic query in an environment where the queue table contains thousands of messages is:

8.0.3-compatible multiple-consumer queue tables and all compatibility single-consumer queue tables:

select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>
group by MSG_STATE, QUEUE;

8.1.3 and 10.0-compatible queue tables:

select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>
group by MSG_STATE, QUEUE, CONSUMER_NAME;

For multiple-consumer queue tables, if you did not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure and also those enqueued using a recipient list.

SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

QUEUE      NAME     ADDRESS
---------- -------- --------------------
MULTIPLEQ  AQUSER2  AQADM.INQ@M2V102.ES
MULTIPLEQ  AQUSER1

In this example we have 2 subscribers registered with the queue: a local subscriber AQUSER1, and a remote subscriber AQUSER2, on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

For 8.1 style and above multiple-consumer queue tables, you can also check the following information at the target:

select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

For 8.0 style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
from AQ$<queue_table_name> t, table(t.history) h
where t.Q_NAME = '<QUEUE_NAME>';

A non-NULL TRANSACTION_ID indicates that the message was successfully propagated.
Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination. 6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment 6.1. Is the Propagation Enabled? For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this: SELECT p.PROPAGATION_NAME, DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled','N', 'Enabled') SCHEDULE_DISABLED, s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSGFROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION pWHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION) AND s.SCHEMA = p.SOURCE_QUEUE_OWNER AND s.QNAME = p.SOURCE_QUEUE_NAME AND MESSAGE_DELIVERY_MODE = 'PERSISTENT' order by PROPAGATION_NAME; At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt to disable the propagation and then re-enable it can be made. In the examples below, for the propagation named STRMADMIN_PROPAGATE where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:10.2 and above exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE'); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); If the above does not fix the problem, stop the propagation specifying the force parameter (2nd parameter on stop_propagation) as TRUE: exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); The statistics for the propagation as well as any old error messages are cleared when the force parameter is set to TRUE. Therefore if the propagation schedule is stopped with FORCE set to TRUE, and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.9.2 or 10.1 exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms.aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation: exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); Typically if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed. 6.2. Check Propagation Rule Sets and Transformations Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated. 
Finally inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.The following query shows what rule sets are assigned to a propagation: select PROPAGATION_NAME, RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"from DBA_PROPAGATION; The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set: set long 4000select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES rwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER and RULE_SET_NAME in(select RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;   set long 4000select c.PROPAGATION_NAME, rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES r ,DBA_PROPAGATION cwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER andrsr.RULE_SET_OWNER=c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME=c.NEGATIVE_RULE_SET_NAMEand rsr.RULE_SET_NAME in(select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME; 6.3. Determining the Total Number of Messages and Bytes Propagated As in Section 3.1, determining if messages are flowing can be instructive to see whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2), but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations two views are available that can assist with this that can show the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the Source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.Source: select QUEUE_SCHEMA, QUEUE_NAME, DBLINK,HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTESfrom V$PROPAGATION_SENDER; Target: select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS from V$PROPAGATION_RECEIVER; 6.4. Check Buffered Subscribers The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from: select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS; The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database). 6.5. Common Streams Propagation Errors 6.5.1. ORA-02082: A loopback database link must have a connection qualifier. This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database. 6.5.2. ORA-25307: Enqueue rate too high. 
Enable flow control DBA_QUEUE_SCHEDULES will display this informational message for propagation when the automatic flow control (10g feature of Streams) has been invoked.Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control. This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which messages are being enqueued into at target queue.This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will be resumed automatically when the target site is able to accept more messages.The following document contains more information:Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow controlSee the following document for one potential cause of this situation:Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State 6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages This error typically occurs when the target database is RAC and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages. 6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation For cause/fixes refer to:Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation 6.5.5. Stopping or Dropping a Streams Propagation Hangs See the following note:Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang 6.6. Streams Propagation-Related Notes for Common Issues Document 437838.1 Streams Specific PatchesDocument 749181.1 How to Recover Streams After Dropping PropagationDocument 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environmentDocument 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not ResolveDocument 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'Document 333068.1 ORA-23603: Streams Enqueue Aborted Eue To Low SGADocument 363496.1 Ora-25315 Propagating on RAC StreamsDocument 368237.1 Unable to Unschedule Propagation. 
Streams Queue is Invalid
Document 436332.1 dbms_propagation_adm.stop_propagation hangs
Document 727389.1 Propagation Fails With ORA-12528
Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
Document 1059029.1 Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
Document 556309.1 Changing Propagation/ queue_to_queue : false -> true does not work; no LCRs propagated
Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
Document 311021.1 Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747

7. Performance Issues

A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target rather than a slow propagation. Propagation can be inferred to be slow if the message statistics are changing, the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, and an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes and white papers for suggestions to increase performance:

Document 335516.1 Master Note for Streams Performance Recommendations
Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Document 780733.1 Streams Propagation Tuning with Network Parameters
White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, see APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORK

For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
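As a quick first pass tying these checks together, a sketch along the following lines (using standard DBA_QUEUE_SCHEDULES columns; adjust the filter to taste) lists any propagation schedule currently reporting a failure or an error message such as the ORA-25307 flow-control notice discussed in Section 6.5.2:

select SCHEMA, QNAME, DESTINATION, FAILURES, LAST_ERROR_MSG
from DBA_QUEUE_SCHEDULES
where FAILURES > 0
   or LAST_ERROR_MSG is not null;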
References

NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ PROPAGATION ABORTED WITH ORA-600[OCIKSIN: INVALID STATUS] ON SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE AFTER UPGRADE
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT.
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation.
Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation/ queue_to_queue : false -> true does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization

This is the first entry of a 3-part mini-series aimed at highlighting server and desktop virtualization at this year's Oracle OpenWorld. Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won't want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands-on labs with step-by-step instructions from our field and product experts.

In the blog mini-series you will learn about:
The Oracle Virtualization general keynote session
Hands-on labs
Key Oracle server and desktop virtualization sessions

In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss.

General Session: Oracle Virtualization Strategy and Roadmap (Session ID: GEN8725)
Oracle offers the industry's most complete and integrated virtualization portfolio, enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle's desktop-to-data-center virtualization solutions, such as the OS with built-in management integration at all layers, which can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This "don't-miss" session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers.

Here are our top picks for Hands-On Labs at Oracle OpenWorld 2012:

Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility (Session ID: HOL9907)
This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle's Sun Ray Clients, Apple iPad and other clients.

Deploying an IaaS Environment with Oracle VM: Hands-On Lab (Session ID: HOL9558)
This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts.

Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab (Session ID: HOL9559)
This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You'll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications.

x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance (Session ID: HOL9870)
The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle's enterprise cloud infrastructure for x86 with Oracle VM 3.x.
It covers:
Creation of VMs
Migration of VMs
Quick and easy deployment of Oracle applications with Oracle VM Templates
Usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance

You can find these and other great sessions in the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference; then you'll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they're using the technology in real-world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • What are developer's problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble. A program that generates an error generated it for a reason. It has everything at its disposal to inform the user as much as it can about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing.

One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application_Name attribute on their connection string, but even a hint about the machine in question could be helpful.

Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the SQL statement that was generated and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about, once the database engine knows there is an error, it does a quick comparison after the fact between the values that were going to be stored and the column lengths, then displays that to the user?

ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers are too lazy to provide even a row number, or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!)

Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me whilst I develop, because I know my code throws things that are meaningful.

One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users; they would prefer something full of verbosity and yummy information because they can track down their problems faster.

Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011 guys, come on.
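To make that last point concrete, here is a minimal C# sketch of the kind of context-rich exception being described; the method and parameter names are hypothetical, purely for illustration:

// A minimal sketch: include the actual value, the expected range, and any
// surrounding context in the message, rather than a bare "invalid argument".
public void ApplyDiscount(decimal price, decimal discountPercent)
{
    if (discountPercent < 0m || discountPercent > 100m)
    {
        throw new ArgumentOutOfRangeException(
            "discountPercent",
            string.Format(
                "Expected a discount between 0 and 100 but received {0} (price was {1}). " +
                "Check the caller's promotion configuration.",
                discountPercent, price));
    }
    // ... apply the discount ...
}

The extra string.Format call costs a few seconds at authoring time and can save whoever reads the log from having to attach a debugger at all.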

    Read the article

  • Silverlight Cream for January 30, 2011 -- #1037

    - by Dave Campbell
    In this Issue: Ollie Riches, Colin Eberhardt, Andrej Tozon, Arik Poznanski, Deborah Kurata(-2-), Jay Kimble, Yochay Kiriaty, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), and Matthias Shapiro.

Above the Fold:
Silverlight: "Missing Chart Legend" Deborah Kurata
WP7: "XNA for Silverlight developers: Part 2 - Text rendering" Peter Kuhn

Shoutouts: Timmy Kokke has a post up discussing What's new in the Expression Design January 2011 preview?

From SilverlightCream.com:

WP7Contrib: Thread safe ObservableCollection<T>
Ollie Riches, one of the two originators of WP7Contrib, has a post up on the WP7C ObservableCollection... what and why.

Windows Phone 7 DeferredLoadContentControl
Colin Eberhardt's latest is one we should all take notice of... a content control that defers rendering to provide a better user experience... source code is available, as are some good external links.

Andrej Tozon on Hey weigh! WP7 application
SilverlightShow interviews WP7 dev Andrej Tozon and gets his take on his app, challenges, tips, and the future of WP7.

A ProgressBar With Text For Windows Phone 7
Arik Poznanski demonstrates putting text up on the progress bar to let your users know what you're up to... and it looks great in the screenshots.

Charting in a Silverlight Application using MVVM
Deborah Kurata is checking out the Charting control this time around... using the charting control from the toolbox in the MVVM app she built in the last post... C# and VB code as always.

Missing Chart Legend
Deborah Kurata's latest in the world of Charting and MVVM involves using a custom theme and having your chart legend disappear... never fear, she's gonna tell you how to fix that!

Silverlight/WP7 tip: Detecting when in VS Design Mode
Jay Kimble has a post up that not only resolves a question you may need answered during development (are you in VS design mode), but it also helps resolve a class of problem that Jay explains.

Windows Phone GPS Emulator
Yochay Kiriaty points out that while part of the issue of building a GPS-driven app for WP7 is getting your head around the tools, the next hurdle is testing... and that's what he's really discussing... "Windows Phone GPS Emulator"... if you're playing with the GPS, you'll want this.

XNA for Silverlight developers: Part 2 - Text rendering
Peter Kuhn's latest tutorial in his XNA series for Silverlight developers is up at SilverlightShow... in this tutorial, Peter discusses text... it's a vastly different game displaying text in XNA as compared to Silverlight... check it out and see.

OData and Windows Phone 7
Mike Ormond starts you off using OData on your WP7 by showing where to download the libraries, not stopping until he has an app running that reads an OData feed; plus he plans on continuing the quest in future posts.

WP7 ProgressOverlay control in depth: features and customization
WindowsPhoneGeek has a couple of new posts up. The first one is an in-depth look at the ProgressOverlay control in the Coding4Fun Toolkit... pretty cool to be able to put your logo or app logo up.

On Testing Windows Phone 7 Applications – Part II: Dealing with the WP7 Application Model
WindowsPhoneGeek also has 5 more WP7 testing tips... and these are a little more technical than the first set, and include some good external links. Topics include: Tombstoning, Usability, Navigation, Capabilities, and Memory consumption.

Fun Theme-Friendly Windows Phone Icon
Matthias Shapiro explains how to have your WP7 icon change based on the theme your user has chosen...
great examples, and XAML included Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Displaying JSON in your Browser

    - by Rick Strahl
    Do you work with AJAX requests a lot and need to quickly check URLs for JSON results? Then you probably know that it's a fairly big hassle to examine JSON results directly in the browser. Yes, you can use Firebug or Fiddler, which work pretty well for actual AJAX requests, but if you just fire off a URL for quick testing in the browser you usually get hit by the Save As dialog and the download manager, followed by having to open the saved document in a text editor in Firefox. Enter JSONView, which allows you to simply display JSON results directly in the browser. For example, imagine I have a URL like this: http://localhost/westwindwebtoolkitweb/RestService.ashx?Method=ReturnObject&format=json&Name1=Rick&Name2=John&date=12/30/2010 typed directly into the browser, and that returns a complex JSON object. With JSONView the result looks like this: No fuss, no muss. It just works. Here the result is an array of Person objects that contain additional address child objects displayed right in the browser.

JSONView basically adds content-type checking for application/json results and, when it finds a JSON result, takes over the rendering and formats the display in the browser. Note that it re-formats the raw JSON as well for a nicer display view, along with collapsible regions for objects. You can still use View Source to see the raw JSON string returned. For me this is a huge time-saver; as I work with AJAX result data using GET and REST-style URLs quite a bit, quickly and easily displaying JSON is a key part of my development day, and JSONView, for all its simplicity, fits that bill for me. If you're doing AJAX development and you often review URL-based JSON results, do yourself a favor and pick up a copy of JSONView.

Other Browsers

JSONView works only with Firefox – what about other browsers?

Chrome

Chrome actually displays raw JSON responses as plain text without any plug-ins. There's no plug-in or configuration needed; it just works, although you won't get any fancy formatting. [updated from comments] There's also a port of JSONView available for Chrome from here: https://chrome.google.com/webstore/detail/chklaanhfefbnpoihckbnefhakgolnmc It looks like it works just about the same as the JSONView plug-in for Firefox. Thanks to all who pointed this out…

Internet Explorer

Internet Explorer probably has the worst response to JSON-encoded content: it displays an error page as it apparently tries to render JSON as XML: Yeah, that seems real smart – rendering JSON as an XML document. WTF? To get at the actual JSON output, you can use View Source. To get IE to display JSON directly as text you can add a MIME type mapping in the registry:

Create a new application/json key in: HKEY_CLASSES_ROOT\MIME\Database\ContentType\application/json
Add a string value of CLSID with a value of {25336920-03F9-11cf-8FD0-00AA00686F13}
Add a DWORD value of Encoding with a value of 80000

I can't take credit for this tip – found it here first on Sky Sander's Blog. Note that the CLSID can be used for just about any type of text data you want to display as plain text in IE. It's the in-place display mechanism and it should work for most text content. For example it might also be useful for looking at CSS and JS files inside of the browser instead of downloading those documents as well.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX
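For convenience, those three registry steps can be captured in a .reg file. A minimal sketch, with the caveats that on most systems the key under MIME\Database is named 'Content Type' (with a space) and that the Encoding value is the hex DWORD 80000:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json]
"CLSID"="{25336920-03F9-11cf-8FD0-00AA00686F13}"
"Encoding"=dword:00080000

Double-click the file (or import it with regedit) and restart IE; new application/json responses should then render as plain text.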

    Read the article

  • jQuery Context Menu Plugin and Capturing Right-Click

    - by Ben Griswold
    I was thrilled to find Cory LaViska's jQuery Context Menu Plugin a few months ago. In very little time, I was able to integrate the context menu with the jQuery Treeview. I quickly had a really pretty user interface which took full advantage of limited real estate. And guess what. As promised, the plugin worked in Chrome, Safari 3, IE 6/7/8, Firefox 2/3 and Opera 9.5. Everything was perfect and I shipped to the Integration Environment.

One thing kept bugging me, though: right-clicks aren't the standard in a web environment. Sure, when one hovers over the treeview node, the mouse changes from an arrow to a pointer, but without help text most users will certainly left-click rather than right. As I was already doubting the design decision, we did some Mac testing. The context menu worked in Firefox but not Safari. Damn. That's when I started digging into the madness of JavaScript mouse events. Don't tell, but it's complicated. About as close as one can get to capturing the right-click mouse event on all major browsers on Windows and Mac is this:

if (event.which == null)
    /* IE case */
    button = (event.button < 2) ? "LEFT" : ((event.button == 4) ? "MIDDLE" : "RIGHT");
else
    /* All others */
    button = (event.which < 2) ? "LEFT" : ((event.which == 2) ? "MIDDLE" : "RIGHT");

Yikes. The context menu code was simply checking if event.button == 2. No problem. Cory offers a jQuery Right Click Plugin which I'm sure works for Windows but probably not the Mac either. (Please note I haven't verified this.)

Anyway, I decided to address my UI design concern and the Safari Mac issue in one swoop: I decided to make the context menu respond to any mouse click event. This didn't take much – especially after seeing how Bill Beckelman updated the library to recognize the left click. First, I added an AnyClick option to the library defaults:

// Any click may trigger the dropdown and that's okay
// See Javascript Madness: Mouse Events – http://unixpapa.com/js/mouse.html
if (o.anyClick == undefined) o.anyClick = false;

And then I trigger the context menu dropdown based on the following conditional:

if (evt.button == 2 || o.anyClick) {

Nothing tricky about that, right? Finally, I updated my menu setup to include the AnyClick value, if true:

$('.member').contextMenu({ menu: 'memberContextMenu', anyClick: true },
            function (action, el, pos) {
                …

Now the context menu works in "all" environments whether you left-, right- or even middle-click. Download jQuery Context Menu Plugin for Any Click

*Opera 9.5 has an option to allow scripts to detect right-clicks, but it is disabled by default. Furthermore, Opera still doesn't allow JavaScript to disable the browser's default context menu, which causes a usability conflict.
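For reference, here is a self-contained sketch of the same button-normalization trick wired into a jQuery event handler; the '.member' selector and handler body are illustrative, not from the plugin itself:

// A standalone sketch of the cross-browser button check above.
$('.member').bind('mousedown', function (event) {
    var button;
    if (event.which == null) {
        /* IE case */
        button = (event.button < 2) ? "LEFT" : ((event.button == 4) ? "MIDDLE" : "RIGHT");
    } else {
        /* All others */
        button = (event.which < 2) ? "LEFT" : ((event.which == 2) ? "MIDDLE" : "RIGHT");
    }
    if (button === "LEFT" || button === "RIGHT") {
        // open the context menu for either button, mirroring the AnyClick idea
    }
    return false; // suppress the browser's default menu where the browser permits it
});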

    Read the article

  • Localization with ASP.NET MVC ModelMetadata

    - by kazimanzurrashid
    When using DisplayFor/EditorFor there has been built-in support in ASP.NET MVC to show localized validation messages, but no support to show the associated label in localized text, unless you are using .NET 4.0 with the MVC Futures library. Let's say you are creating a create form for Product where you support both English and German. I have recently added a few helpers for localization in MvcExtensions; let's see how we can use them to localize the form. As I have mentioned in the past, I am not a big fan of decorating classes with attributes, which is the recommended way in ASP.NET MVC. Instead, we will use the fluent configuration (similar to FluentNHibernate or EF Code First) of MvcExtensions to configure our view models. For example, for the above we will use:

public class ProductEditModelConfiguration : ModelMetadataConfiguration<ProductEditModel>
{
    public ProductEditModelConfiguration()
    {
        Configure(model => model.Id).Hide();

        Configure(model => model.Name).DisplayName(() => LocalizedTexts.Name)
            .Required(() => LocalizedTexts.NameCannotBeBlank)
            .MaximumLength(64, () => LocalizedTexts.NameCannotBeMoreThanSixtyFourCharacters);

        Configure(model => model.Category).DisplayName(() => LocalizedTexts.Category)
            .Required(() => LocalizedTexts.CategoryMustBeSelected)
            .AsDropDownList("categories", () => LocalizedTexts.SelectCategory);

        Configure(model => model.Supplier).DisplayName(() => LocalizedTexts.Supplier)
            .Required(() => LocalizedTexts.SupplierMustBeSelected)
            .AsListBox("suppliers");

        Configure(model => model.Price).DisplayName(() => LocalizedTexts.Price)
            .FormatAsCurrency()
            .Required(() => LocalizedTexts.PriceCannotBeBlank)
            .Range(10.00m, 1000.00m, () => LocalizedTexts.PriceMustBeBetweenTenToThousand);
    }
}

As you can see, we are using Func<string> to set the localized text; this is just an overload alongside the regular string-based methods. There are a few more methods in the ModelMetadata configuration which accept this Func<string> where localization can be applied, like Description, Watermark, ShortDisplayName, etc. The LocalizedTexts class is just a regular resource; we have both English and German.

Now let's see the view markup:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Demo.Web.ProductEditModel>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    <%= LocalizedTexts.Create %>
</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= LocalizedTexts.Create %></h2>
    <%= Html.ValidationSummary(false, LocalizedTexts.CreateValidationSummary)%>
    <% Html.EnableClientValidation(); %>
    <% using (Html.BeginForm()) {%>
        <fieldset>
            <%= Html.EditorForModel() %>
            <p>
                <input type="submit" value="<%= LocalizedTexts.Create %>" />
            </p>
        </fieldset>
    <% } %>
    <div>
        <%= Html.ActionLink(LocalizedTexts.BackToList, "Index")%>
    </div>
</asp:Content>

As you can see, we are using the same LocalizedTexts for the other parts of the view which are not included in the ModelMetadata, like the page title, button text, etc. We are also using EditorForModel instead of EditorFor for individual fields; both are supported. One of the added benefits of the fluent-syntax-based configuration is that we get full compile-time checking for our resources, as we are not depending on string-based resource names the way ASP.NET MVC does. You will find the complete localized CRUD example in the MvcExtensions sample folder. That's it for today.
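Why Func<string> rather than a plain string? The delegate defers the resource lookup until the metadata is actually rendered, so a per-request UI culture is honored instead of whatever culture happened to be current when the configuration object was constructed. A minimal sketch of the idea (an illustrative class, not the actual MvcExtensions internals):

using System;

public class DeferredDisplayName
{
    private readonly Func<string> _displayName;

    public DeferredDisplayName(Func<string> displayName)
    {
        _displayName = displayName;
    }

    // Evaluated each time it is read, so the resource manager sees the
    // current thread's UI culture at render time, not at configuration time.
    public string Value
    {
        get { return _displayName(); }
    }
}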

    Read the article

  • ‘Publish…’ Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one… which isn't good considering I have so little of it left! I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and used the upgraded 'Publish…' dialog to successfully publish a website to my hard disk for hosting on an internal development server. However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory. The only difference: the first project was an Azure project, the second was a standard ASP.NET Web Application. If you installed the Windows Azure 1.7 SDK, you may want to read this.

The Problem

At first it appears that there is no problem: However, you may remember that when publishing a web application, the output window will generally iterate through each of the directories as it copies the files from that directory over. Sure enough, when looking at the output directory – there are no files, no bin directory, no nothing…

Troubleshooting

Since one site published and the other did not, I believed that the failure may have been due to a failed SQL Server 2012 installation that happened between publishes. I rolled back the installation; however, that did not work either. I also checked the Configuration Manager dialog and ensured that the projects were selected to actually build (just checking, even though the output said it built them…). I checked the properties of the solution and the projects, and a selection of files in the project, to make sure that they were selected for content… Nothing seemed to work.

I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit. When I opened the Windows 7 'Uninstall a Program' dialog, I noticed that the Azure SDK came with 2 extra packages that just so happen to be in a Release Candidate state from Microsoft – 'Microsoft Web Deploy 3.0' and 'Microsoft Web Publish – Visual Studio 2010'. It dawned on me that the publish dialog must not be just for Azure, since it appeared when I tried to deploy the regular web application as well. Therefore, it must have been an upgrade to the publish mechanism in Visual Studio. I uninstalled both of the programs and received my old publish dialog once again, and was able to successfully publish the solution above as I had done before. After celebrating solving the problem, I tried reinstalling the Azure package to see if it would repair the publishing process. Even though it brought back the updated dialogs, it did not publish any files. Instead of uninstalling and retreating, I now KNEW what the cause was, and these were packages not just for Azure. I now knew a product name to search for.

The Solution

Sure enough, with the correct search term in Google – 'microsoft web publish no files', and setting the timeline to 1 week, I found what I needed: Microsoft Connect – Publish Web Application FAILS! (by Andrew Rits). I am surprised that I missed something that ended up being so simple… In the Configuration Manager, I had the following settings: This is how I had always been building and debugging the solution… However, apparently when installing the new Web Publishing package, it does things a little differently in its configuration for publishing: You see the difference? The configuration here is set to 'x86' instead of 'Any CPU'. Sure enough, as soon as I switched the configuration to 'Release – Any CPU', the deployment built and published all of my files as I expected.
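As a sanity check, the same publish can be driven from the command line, where the configuration and platform are explicit rather than hidden in a dialog. A minimal sketch using the Web Publish pipeline's MSBuild properties (the profile name here is hypothetical; substitute one of your own publish profiles):

msbuild MyWebApp.csproj /p:Configuration=Release /p:Platform=AnyCPU /p:DeployOnBuild=true /p:PublishProfile=InternalDevServer

Note that at the project level the platform is spelled 'AnyCPU' (no space), whereas the solution-level configuration shows 'Any CPU'; pinning both properties on the command line removes exactly the ambiguity that bit me here.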
Conclusion It was a small change, but apparently the new ‘Publish web application’ defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory.  I spent forever trying things, but this small drop down eluded me until I was able to target that the dialog was actually working apparently, I just didn’t have the correct configuration. I hope that this saves you the hours of frustration and hastened hair loss that it caused me…  I also hope that before Microsoft brings this publishing package out of RC status, that they change the behavior of that menu to default to the settings of the old publish menu for the first time. Happy Coding!

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we're starting to get into a more code-heavy portion of this series; thankfully, this means the groundwork is all set for the most part, and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns: adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code.

XAML for the Job Posting View

All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this:

01.<Grid x:Name="LayoutRoot"
02.Background="White">
03.<Grid.RowDefinitions>
04.<RowDefinition Height="30" />
05.<RowDefinition />
06.</Grid.RowDefinitions>
07.<StackPanel Orientation="Horizontal">
08.<Button x:Name="xAddRecordButton"
09.Content="Add Job"
10.Width="120"
11.cal:Click.Command="{Binding AddRecord}"
12.telerik:StyleManager.Theme="Windows7" />
13.<Button x:Name="xEditRecordButton"
14.Content="Edit Job"
15.Width="120"
16.cal:Click.Command="{Binding EditRecord}"
17.telerik:StyleManager.Theme="Windows7" />
18.</StackPanel>
19.<telerikGrid:RadGridView x:Name="xJobsGrid"
20.Grid.Row="1"
21.IsReadOnly="True"
22.AutoGenerateColumns="False"
23.ColumnWidth="*"
24.RowDetailsVisibilityMode="VisibleWhenSelected"
25.ItemsSource="{Binding MyJobs}"
26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}"
27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}">
28.<telerikGrid:RadGridView.Columns>
29.<telerikGrid:GridViewDataColumn Header="Job Title"
30.DataMemberBinding="{Binding JobTitle}"
31.UniqueName="JobTitle" />
32.<telerikGrid:GridViewDataColumn Header="Location"
33.DataMemberBinding="{Binding Location}"
34.UniqueName="Location" />
35.<telerikGrid:GridViewDataColumn Header="Resume Required"
36.DataMemberBinding="{Binding NeedsResume}"
37.UniqueName="NeedsResume" />
38.<telerikGrid:GridViewDataColumn Header="CV Required"
39.DataMemberBinding="{Binding NeedsCV}"
40.UniqueName="NeedsCV" />
41.<telerikGrid:GridViewDataColumn Header="Overview Required"
42.DataMemberBinding="{Binding NeedsOverview}"
43.UniqueName="NeedsOverview" />
44.<telerikGrid:GridViewDataColumn Header="Active"
45.DataMemberBinding="{Binding IsActive}"
46.UniqueName="IsActive" />
47.</telerikGrid:RadGridView.Columns>
48.</telerikGrid:RadGridView>
49.</Grid>

I'll explain what's happening here by line numbers:
Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel.
Line 25: The source for the jobs will be a collection in the viewmodel.
Line 26: We also bind the selected item to a public property from the viewmodel for use in code.
Line 27: We've turned the event into a command so we can handle it via code in the viewmodel.
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read on Damien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this:

SelectedItemChangedEventBehavior.cs:

01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl>
02.{
03.public SelectedItemChangedEventBehavior(DataControl element)
04.: base(element)
05.{
06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged);
07.}
08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e)
09.{
10.// We'll only ever allow single selection, so will only need item index 0
11.base.CommandParameter = e.AddedItems[0];
12.base.ExecuteCommand();
13.}
14.}

SelectedItemChangedEventClass.cs:

01.public class SelectedItemChangedEventClass
02.{
03.#region The Command Stuff
04.public static ICommand GetCommand(DependencyObject obj)
05.{
06.return (ICommand)obj.GetValue(CommandProperty);
07.}
08.public static void SetCommand(DependencyObject obj, ICommand value)
09.{
10.obj.SetValue(CommandProperty, value);
11.}
12.public static readonly DependencyProperty CommandProperty =
13.DependencyProperty.RegisterAttached("Command", typeof(ICommand),
14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback));
15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
16.{
17.DataControl element = dependencyObject as DataControl;
18.if (element != null)
19.{
20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element);
21.behavior.Command = e.NewValue as ICommand;
22.}
23.}
24.#endregion
25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element)
26.{
27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior;
28.if (behavior == null)
29.{
30.behavior = new SelectedItemChangedEventBehavior(element);
31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior);
32.}
33.return behavior;
34.}
35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj)
36.{
37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty);
38.}
39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value)
40.{
41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value);
42.}
43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty =
44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior",
45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null);
46.}

These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right?
Viewmodel for the Job Posting View

The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line:

001.public class JobPostingViewModel : ViewModelBase
002.{
003.private readonly IEventAggregator eventAggregator;
004.private readonly IRegionManager regionManager;
005.public DelegateCommand<object> AddRecord { get; set; }
006.public DelegateCommand<object> EditRecord { get; set; }
007.public DelegateCommand<object> SelectedItemChanged { get; set; }
008.public RecruitingContext context;
009.private QueryableCollectionView _myJobs;
010.public QueryableCollectionView MyJobs
011.{
012.get { return _myJobs; }
013.}
014.private QueryableCollectionView _selectionJobActionHistory;
015.public QueryableCollectionView SelectedJobActionHistory
016.{
017.get { return _selectionJobActionHistory; }
018.}
019.private JobPosting _selectedJob;
020.public JobPosting SelectedJob
021.{
022.get { return _selectedJob; }
023.set
024.{
025.if (value != _selectedJob)
026.{
027._selectedJob = value;
028.NotifyChanged("SelectedJob");
029.}
030.}
031.}
032.public SubscriptionToken editToken = new SubscriptionToken();
033.public SubscriptionToken addToken = new SubscriptionToken();
034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
035.{
036.// set Unity items
037.this.eventAggregator = eventAgg;
038.this.regionManager = regionmanager;
039.// load our context
040.context = new RecruitingContext();
041.this._myJobs = new QueryableCollectionView(context.JobPostings);
042.context.Load(context.GetJobPostingsQuery());
043.// set command events
044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord);
045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord);
046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged);
047.SetSubscriptions();
048.}
049.#region DelegateCommands from View
050.public void AddNewRecord(object obj)
051.{
052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true);
053.}
054.public void EditExistingRecord(object obj)
055.{
056.if (_selectedJob == null)
057.{
058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected.");
059.}
060.else
061.{
062.this._myJobs.EditItem(this._selectedJob);
063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob);
064.}
065.}
066.public void SelectedRecordChanged(object obj)
067.{
068.if (obj.GetType() == typeof(ActionHistory))
069.{
070.// event bubbles up so we don't catch items from the ActionHistory grid
071.}
072.else
073.{
074.JobPosting job = obj as JobPosting;
075.GrabHistory(job.PostingID);
076.}
077.}
078.#endregion
079.#region Subscription Declaration and Events
080.public void SetSubscriptions()
081.{
082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>();
083.if (editToken != null)
084.editComplete.Unsubscribe(editToken);
085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler);
086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>();
087.if (addToken != null)
088.addComplete.Unsubscribe(addToken);
089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler);
090.}
091.public void EditCompleteEventHandler(bool complete)
092.{
093.if (complete)
094.{
095.JobPosting thisJob = _myJobs.CurrentEditItem as JobPosting;
096.this._myJobs.CommitEdit();
097.this.context.SubmitChanges((s) =>
098.{
099.ActionHistory myAction = new ActionHistory();
100.myAction.PostingID = thisJob.PostingID;
101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user");
102.myAction.TimeStamp = DateTime.Now;
103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
104.}
105., null);
106.}
107.else
108.{
109.this._myJobs.CancelEdit();
110.}
111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
112.}
113.public void AddCompleteEventHandler(JobPosting job)
114.{
115.if (job == null)
116.{
117.// do nothing, new job add cancelled
118.}
119.else
120.{
121.this.context.JobPostings.Add(job);
122.this.context.SubmitChanges((s) =>
123.{
124.ActionHistory myAction = new ActionHistory();
125.myAction.PostingID = job.PostingID;
126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user");
127.myAction.TimeStamp = DateTime.Now;
128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction);
129.}
130., null);
131.}
132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView");
133.}
134.#endregion
135.public void GrabHistory(int postID)
136.{
137.context.ActionHistories.Clear();
138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories);
139.context.Load(context.GetHistoryForJobQuery(postID));
140.}

Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road, and we also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but will get into below.

The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves as far as functionality; the one thing of note is that we're sending events off to the Event Aggregator, which some module somewhere will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63) regardless. Don't mind the GrabHistory() method so much; that is just grabbing history items (visibly being created in the SubmitChanges callbacks) and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;)

So where are we at? Well, if we click to Add a job, we publish an event; if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going, though? To the Viewmodel, of course!
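The events being published back and forth here are, in Prism terms, just tiny marker classes typed by their payload. They are not shown in this part of the series, but a minimal sketch of what they presumably look like in the Infrastructure project (assuming Prism's CompositePresentationEvent<TPayload>) would be:

// Hypothetical event definitions for the Event Aggregator; each class only
// fixes the payload type that Publish()/Subscribe() exchange.
public class AddJobEvent : CompositePresentationEvent<bool> { }
public class EditJobEvent : CompositePresentationEvent<JobPosting> { }
public class AddJobCompleteEvent : CompositePresentationEvent<JobPosting> { }
public class EditJobCompleteEvent : CompositePresentationEvent<bool> { }
public class NotifyUserEvent : CompositePresentationEvent<string> { }

The payload types line up with the Publish calls above: AddJobCompleteEvent carries the new JobPosting (or null on cancel), while EditJobCompleteEvent carries a simple success flag.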
XAML for the AddEditJobView

This is pretty straightforward except for one thing, noted below:

001.<Grid x:Name="LayoutRoot"
002.Background="White">
003.<Grid x:Name="xEditGrid"
004.Margin="10"
005.validationHelper:ValidationScope.Errors="{Binding Errors}">
006.<Grid.Background>
007.<LinearGradientBrush EndPoint="0.5,1"
008.StartPoint="0.5,0">
009.<GradientStop Color="#FFC7C7C7"
010.Offset="0" />
011.<GradientStop Color="#FFF6F3F3"
012.Offset="1" />
013.</LinearGradientBrush>
014.</Grid.Background>
015.<Grid.RowDefinitions>
016.<RowDefinition Height="40" />
017.<RowDefinition Height="40" />
018.<RowDefinition Height="40" />
019.<RowDefinition Height="100" />
020.<RowDefinition Height="100" />
021.<RowDefinition Height="100" />
022.<RowDefinition Height="40" />
023.<RowDefinition Height="40" />
024.<RowDefinition Height="40" />
025.</Grid.RowDefinitions>
026.<Grid.ColumnDefinitions>
027.<ColumnDefinition Width="150" />
028.<ColumnDefinition Width="150" />
029.<ColumnDefinition Width="300" />
030.<ColumnDefinition Width="100" />
031.</Grid.ColumnDefinitions>
032.<!-- Title -->
033.<TextBlock Margin="8"
034.Text="{Binding AddEditString}"
035.TextWrapping="Wrap"
036.Grid.Column="1"
037.Grid.ColumnSpan="2"
038.FontSize="16" />
039.<!-- Data entry area-->
040.
041.<TextBlock Margin="8,0,0,0"
042.Style="{StaticResource LabelTxb}"
043.Grid.Row="1"
044.Text="Job Title"
045.VerticalAlignment="Center" />
046.<TextBox x:Name="xJobTitleTB"
047.Margin="0,8"
048.Grid.Column="1"
049.Grid.Row="1"
050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
051.Grid.ColumnSpan="2" />
052.<TextBlock Margin="8,0,0,0"
053.Grid.Row="2"
054.Text="Location"
055.d:LayoutOverrides="Height"
056.VerticalAlignment="Center" />
057.<TextBox x:Name="xLocationTB"
058.Margin="0,8"
059.Grid.Column="1"
060.Grid.Row="2"
061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
062.Grid.ColumnSpan="2" />
063.
064.<TextBlock Margin="8,11,8,0"
065.Grid.Row="3"
066.Text="Description"
067.TextWrapping="Wrap"
068.VerticalAlignment="Top" />
069.
070.<TextBox x:Name="xDescriptionTB"
071.Height="84"
072.TextWrapping="Wrap"
073.ScrollViewer.VerticalScrollBarVisibility="Auto"
074.Grid.Column="1"
075.Grid.Row="3"
076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
077.Grid.ColumnSpan="2" />
078.<TextBlock Margin="8,11,8,0"
079.Grid.Row="4"
080.Text="Requirements"
081.TextWrapping="Wrap"
082.VerticalAlignment="Top" />
083.
084.<TextBox x:Name="xRequirementsTB"
085.Height="84"
086.TextWrapping="Wrap"
087.ScrollViewer.VerticalScrollBarVisibility="Auto"
088.Grid.Column="1"
089.Grid.Row="4"
090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
091.Grid.ColumnSpan="2" />
092.<TextBlock Margin="8,11,8,0"
093.Grid.Row="5"
094.Text="Qualifications"
095.TextWrapping="Wrap"
096.VerticalAlignment="Top" />
097.
098.<TextBox x:Name="xQualificationsTB"
099.Height="84"
100.TextWrapping="Wrap"
101.ScrollViewer.VerticalScrollBarVisibility="Auto"
102.Grid.Column="1"
103.Grid.Row="5"
104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"
105.Grid.ColumnSpan="2" />
106.<!-- Requirements Checkboxes-->
107.
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15"
109.Content="Resume Required"
110.Grid.Row="6"
111.Grid.ColumnSpan="2"
112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
113.
114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15"
115.Content="Cover Letter Required"
116.Grid.Column="2"
117.Grid.Row="6"
118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
119.
120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15"
121.Content="Overview Required"
122.Grid.Row="7"
123.Grid.ColumnSpan="2"
124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
125.
126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15"
127.Content="Job is Active"
128.Grid.Column="2"
129.Grid.Row="7"
130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/>
131.
132.<!-- Buttons -->
133.
134.<Button x:Name="xAddEditButton" Margin="8,8,0,10"
135.Content="{Binding AddEditButtonString}"
136.cal:Click.Command="{Binding AddEditCommand}"
137.Grid.Column="2"
138.Grid.Row="8"
139.HorizontalAlignment="Left"
140.Width="125"
141.telerik:StyleManager.Theme="Windows7" />
142.
143.<Button x:Name="xCancelButton" HorizontalAlignment="Right"
144.Content="Cancel"
145.cal:Click.Command="{Binding CancelCommand}"
146.Margin="0,8,8,10"
147.Width="125"
148.Grid.Column="2"
149.Grid.Row="8"
150.telerik:StyleManager.Theme="Windows7" />
151.</Grid>
152.</Grid>

The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. It comes from an approach found on the Joy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood.

Viewmodel for the AddEditJobView

The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line:

001.public class AddEditJobViewModel : ViewModelBase
002.{
003.private readonly IEventAggregator eventAggregator;
004.private readonly IRegionManager regionManager;
005.
006.public RecruitingContext context;
007.
008.private JobPosting _activeJob;
009.public JobPosting activeJob
010.{
011.get { return _activeJob; }
012.set
013.{
014.if (_activeJob != value)
015.{
016._activeJob = value;
017.NotifyChanged("activeJob");
018.}
019.}
020.}
021.
022.public bool isNewJob;
023.
024.private string _addEditString;
025.public string AddEditString
026.{
027.get { return _addEditString; }
028.set
029.{
030.if (_addEditString != value)
031.{
032._addEditString = value;
033.NotifyChanged("AddEditString");
034.}
035.}
036.}
037.
038.private string _addEditButtonString;
039.public string AddEditButtonString
040.{
041.get { return _addEditButtonString; }
042.set
043.{
044.if (_addEditButtonString != value)
045.{
046._addEditButtonString = value;
047.NotifyChanged("AddEditButtonString");
048.}
049.}
050.}
051.
052.public SubscriptionToken addJobToken = new SubscriptionToken();
053.public SubscriptionToken editJobToken = new SubscriptionToken();
054.
055.public DelegateCommand<object> AddEditCommand { get; set; }
056.public DelegateCommand<object> CancelCommand { get; set; }
057.
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>();
059.public ObservableCollection<ValidationError> Errors
060.{
061.get { return _errors; }
062.}
063.
064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
065.public ObservableCollection<ValidationResult> ValResults
066.{
067.get { return this._valResults; }
068.}
069.
070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager)
071.{
072.// set Unity items
073.this.eventAggregator = eventAgg;
074.this.regionManager = regionmanager;
075.
076.context = new RecruitingContext();
077.
078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand);
079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand);
080.
081.SetSubscriptions();
082.}
083.
084.#region Subscription Declaration and Events
085.
086.public void SetSubscriptions()
087.{
088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>();
089.
090.if (addJobToken != null)
091.addJob.Unsubscribe(addJobToken);
092.
093.addJobToken = addJob.Subscribe(this.AddJobEventHandler);
094.
095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>();
096.
097.if (editJobToken != null)
098.editJob.Unsubscribe(editJobToken);
099.
100.editJobToken = editJob.Subscribe(this.EditJobEventHandler);
101.}
102.
103.public void AddJobEventHandler(bool isNew)
104.{
105.this.activeJob = null;
106.this.activeJob = new JobPosting();
107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately
108.this.isNewJob = true;
109.this.AddEditString = "Add New Job Posting";
110.this.AddEditButtonString = "Add Job";
111.
112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
113.}
114.
115.public void EditJobEventHandler(JobPosting editJob)
116.{
117.this.activeJob = null;
118.this.activeJob = editJob;
119.this.isNewJob = false;
120.this.AddEditString = "Edit Job Posting";
121.this.AddEditButtonString = "Edit Job";
122.
123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView");
124.}
125.
126.#endregion
127.
128.#region DelegateCommands from View
129.
130.public void AddEditJobCommand(object obj)
131.{
132.if (this.Errors.Count > 0)
133.{
134.List<string> errorMessages = new List<string>();
135.
136.foreach (var valR in this.Errors)
137.{
138.errorMessages.Add(valR.Exception.Message);
139.}
140.
141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
142.
143.}
144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
145.{
146.List<string> errorMessages = new List<string>();
147.
148.foreach (var valR in this._valResults)
149.{
150.errorMessages.Add(valR.ErrorMessage);
151.}
152.
153.this._valResults.Clear();
154.
155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages);
156.}
157.else
158.{
159.if (this.isNewJob)
160.{
161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob);
162.}
163.else
164.{
165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true);
166.}
167.}
168.}
169.
170.public void CancelAddEditCommand(object obj)
171.{
172.if (this.isNewJob)
173.{
174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null);
175.}
176.else
177.{
178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false);
179.}
180.}
181.
182.#endregion
183.}
184.}

We start seeing something new on line 103: the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason: as I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way, if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :)

Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation:

private JobPostingMetadata() { }

[StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Description is required.")]
public string Description;

[Required(ErrorMessage = "Active Status is Required")]
public bool IsActive;

[StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)]
[Required(ErrorMessage = "Job Title is required.")]
public string JobTitle;

[Required]
public string Location;

public bool NeedsCV;
public bool NeedsOverview;
public bool NeedsResume;
public int PostingID;

[Required(ErrorMessage = "Qualifications are required.")]
[StringLength(2500, ErrorMessage = "Qualifications should be more than one and less than 2500 characters.", MinimumLength = 1)]
public string Qualifications;

[StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Requirements are required.")]
public string Requirements;

The RecruitCB Alternative

See all that XAML I pasted above? Those are now two pieces sitting in the JobsView.xaml file. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: command references, replaced Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I added a ValidationSummary (with Visibility=Collapsed) to catch any errors that pop up.
Simple as can be, and it leads to this being the single code-behind file:

001.public partial class JobsView : UserControl
002.{
003.    public RecruitingContext context;
004.    public JobPosting activeJob;
005.    public bool isNew;
006.    private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
007.    public ObservableCollection<ValidationResult> ValResults
008.    {
009.        get { return this._valResults; }
010.    }
011.    public JobsView()
012.    {
013.        InitializeComponent();
014.        this.Loaded += new RoutedEventHandler(JobsView_Loaded);
015.    }
016.    void JobsView_Loaded(object sender, RoutedEventArgs e)
017.    {
018.        context = new RecruitingContext();
019.        xJobsGrid.ItemsSource = context.JobPostings;
020.        context.Load(context.GetJobPostingsQuery());
021.    }
022.    private void xAddRecordButton_Click(object sender, RoutedEventArgs e)
023.    {
024.        activeJob = new JobPosting();
025.        isNew = true;
026.        xAddEditTitle.Text = "Add a Job Posting";
027.        xAddEditButton.Content = "Add";
028.        xEditGrid.DataContext = activeJob;
029.        HideJobsGrid();
030.    }
031.    private void xEditRecordButton_Click(object sender, RoutedEventArgs e)
032.    {
033.        activeJob = xJobsGrid.SelectedItem as JobPosting;
034.        isNew = false;
035.        xAddEditTitle.Text = "Edit a Job Posting";
036.        xAddEditButton.Content = "Edit";
037.        xEditGrid.DataContext = activeJob;
038.        HideJobsGrid();
039.    }
040.    private void xAddEditButton_Click(object sender, RoutedEventArgs e)
041.    {
042.        if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
043.        {
044.            List<string> errorMessages = new List<string>();
045.            foreach (var valR in this._valResults)
046.            {
047.                errorMessages.Add(valR.ErrorMessage);
048.            }
049.            this._valResults.Clear();
050.            ShowErrors(errorMessages);
051.        }
052.        else if (xSummary.Errors.Count > 0)
053.        {
054.            List<string> errorMessages = new List<string>();
055.            foreach (var err in xSummary.Errors)
056.            {
057.                errorMessages.Add(err.Message);
058.            }
059.            ShowErrors(errorMessages);
060.        }
061.        else
062.        {
063.            if (this.isNew)
064.            {
065.                context.JobPostings.Add(activeJob);
066.                context.SubmitChanges((s) =>
067.                {
068.                    ActionHistory thisAction = new ActionHistory();
069.                    thisAction.PostingID = activeJob.PostingID;
070.                    thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user");
071.                    thisAction.TimeStamp = DateTime.Now;
072.                    context.ActionHistories.Add(thisAction);
073.                    context.SubmitChanges();
074.                }, null);
075.            }
076.            else
077.            {
078.                context.SubmitChanges((s) =>
079.                {
080.                    ActionHistory thisAction = new ActionHistory();
081.                    thisAction.PostingID = activeJob.PostingID;
082.                    thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user");
083.                    thisAction.TimeStamp = DateTime.Now;
084.                    context.ActionHistories.Add(thisAction);
085.                    context.SubmitChanges();
086.                }, null);
087.            }
088.            ShowJobsGrid();
089.        }
090.    }
091.    private void xCancelButton_Click(object sender, RoutedEventArgs e)
092.    {
093.        ShowJobsGrid();
094.    }
095.    private void ShowJobsGrid()
096.    {
097.        xAddEditRecordButtonPanel.Visibility = Visibility.Visible;
098.        xEditGrid.Visibility = Visibility.Collapsed;
099.        xJobsGrid.Visibility = Visibility.Visible;
100.    }
101.    private void HideJobsGrid()
102.    {
103.        xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed;
104.        xJobsGrid.Visibility = Visibility.Collapsed;
105.        xEditGrid.Visibility = Visibility.Visible;
106.    }
107.    private void ShowErrors(List<string> errorList)
108.    {
109.        string nm = "Errors received: \n";
110.        foreach (string anerror in errorList)
111.            nm += anerror + "\n";
112.        RadWindow.Alert(nm);
113.    }
114.}

The first 39 lines should be pretty familiar; nothing too unorthodox is needed to get this up and running. Once we hit xAddEditButton_Click on line 40, we're still doing pretty much the same things, except instead of checking the ValidationHelper errors, we both validate the current activeJob object and check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 66 and 78) to create an ActionHistory, which we will use to track these items down the line.

That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just about getting things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views, but since the functionality is the same in both apps, you still cannot tell them apart (that makes two in a row).

Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there; that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun: the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, with RadScheduler to the rescue in tying everything together.
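As one last aside for this installment: the notification module on the receiving end of DisplayValidationErrorsEvent never appears in these listings. Here is a minimal sketch of what that subscriber could look like, assuming DisplayValidationErrorsEvent is a Prism CompositePresentationEvent carrying a List<string> (as the Publish calls above suggest); the class name and subscription options are my own illustration, not from the original project:

public class ValidationNotifier
{
    private readonly IEventAggregator eventAggregator;

    public ValidationNotifier(IEventAggregator eventAgg)
    {
        this.eventAggregator = eventAgg;

        // Listen for the same payload the ViewModel publishes: a list of error strings.
        this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>()
            .Subscribe(this.ShowErrors, ThreadOption.UIThread, false);
    }

    public void ShowErrors(List<string> errorMessages)
    {
        // Mirror the code-behind version's ShowErrors: one alert listing every message.
        string message = "Errors received: \n";
        foreach (string error in errorMessages)
            message += error + "\n";

        RadWindow.Alert(message);
    }
}

Subscribing with ThreadOption.UIThread matters here, since RadWindow.Alert touches the visual tree and must run on the UI thread.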

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly. As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program; by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale Of The Framework Upgrade . <strange mirage effect to denote a flashback. A subtle background string of music starts playing in minor key> Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5 , let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (.followed by at least ten minutes of installation. followed by a five minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that this probably wasn't the case, 3.5 had been out for so many years, but they weren't sure. <background music switches to major key> SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said. For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of available cores. But crucially, (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above.   So he knew that they could make the switch and that is was safe to do so. With this reassurance, they could use WCF to not only make development easier, but to also give them a really nice way to do inter-process communication between the Source Control and the SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF. <strange mirage effect to denote end of flashback> So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provides services to the layer above it and receives services from the layer below it.

    Description of the OSI layers:

    Layer 1: Physical Layer
    · Defines the electrical and physical specifications for devices; in particular, it defines the relationship between a device and a physical medium.
    · Handles establishment and termination of a connection to a communications medium.
    · Participates in the process whereby the communication resources are effectively shared among multiple users.
    · Performs modulation, the conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

    Layer 2: Data Link Layer
    · Provides the functional and procedural means to transfer data between network entities.
    · Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using a Frame Check Sequence (FCS).
    · Examines the destination address to decide whether the host should process the rest of the frame itself or pass it on to another host.
    · Is divided into two sub-layers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
    · The MAC sub-layer controls how a computer on the network gains access to the data and permission to transmit it.
    · The LLC sub-layer controls frame synchronization, flow control, and error checking.

    Layer 3: Network Layer
    · Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
    · Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
    · Routers operate at this layer, sending data throughout the extended network and making the Internet possible.

    Layer 4: Transport Layer
    · Provides transparent transfer of data between end users, supplying reliable data transfer services to the upper layers.
    · Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
    · Can keep track of the segments and retransmit those that fail.

    Layer 5: Session Layer
    · Controls the dialogues (connections) between computers.
    · Establishes, manages, and terminates the connections between the local and remote application.
    · Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
    · Is implemented explicitly in application environments that use remote procedure calls.

    Layer 6: Presentation Layer
    · Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
    · Provides independence from differences in data representation (e.g., encryption) by translating from application format to network format, and vice versa. The presentation layer transforms data into the form that the application layer can accept, formatting and encrypting data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

    Layer 7: Application Layer
    · Interacts with software applications that implement a communicating component.
    · Identifies communication partners, determines resource availability, and synchronizes communication.
      o When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
      o When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
      o In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
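    To make the encapsulation idea behind these layers concrete, here is a small illustrative sketch. It is not any real networking API; the header strings are invented stand-ins for the framing each layer adds. On the sending side each layer wraps the payload handed down from the layer above, and on the receiving side each layer strips its own header before passing the rest up:

    // Illustrative only: models OSI-style encapsulation, not a real protocol stack.
    using System;
    using System.Collections.Generic;

    class LayeredStackDemo
    {
        static void Main()
        {
            // Invented headers standing in for transport, network, and data link framing.
            var layers = new List<string> { "TCP|", "IP|", "ETH|" };

            string payload = "GET /index.html";   // application-layer data
            foreach (string header in layers)     // moving down the stack
                payload = header + payload;       // each layer encapsulates the one above

            Console.WriteLine(payload);           // ETH|IP|TCP|GET /index.html

            // Receiving side: each layer strips its own header before passing up.
            layers.Reverse();
            foreach (string header in layers)
                payload = payload.Substring(header.Length);

            Console.WriteLine(payload);           // GET /index.html
        }
    }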

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v:

    g++ -v
    Using built-in specs.
    COLLECT_GCC=g++
    COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper
    Target: i686-linux-gnu
    Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu
    Thread model: posix
    gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

    Here's a sample of my code:

    #include "Word.h"
    #include <string>

    using namespace std;

    pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER;

    Word::Word(): _occurrences(1)
    {
        memset(_buf, 0, 25);
    }

    Word::Word(char *str): _occurrences(1)
    {
        memset(_buf, 0, 25);
        if (str != NULL)
        {
            strncpy(_buf, str, strlen(str));
        }
    }

    None of g++ -c -ansi, g++ -c -std=c++98, or g++ -c -std=c++03 is able to build the code correctly. I get the following compilation errors:

    mriganka@ubuntu:~/WordCount$ make
    g++ -c -g -ansi Word.cpp -o Word.o
    Word.cpp: In constructor ‘Word::Word()’:
    Word.cpp:10:21: error: ‘memset’ was not declared in this scope
    Word.cpp: In constructor ‘Word::Word(char*)’:
    Word.cpp:16:21: error: ‘memset’ was not declared in this scope
    Word.cpp:19:34: error: ‘strlen’ was not declared in this scope
    Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope
    Word.cpp: In member function ‘void Word::operator=(const Word&)’:
    Word.cpp:37:42: error: ‘strlen’ was not declared in this scope
    Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope
    Word.cpp: In copy constructor ‘Word::Word(const Word&)’:
    Word.cpp:44:21: error: ‘memset’ was not declared in this scope
    Word.cpp:45:52: error: ‘strlen’ was not declared in this scope
    Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope

    So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation.

    Second problem: in order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions.
    Here's the error that I am getting:

    mriganka@ubuntu:~/WordCount$ make
    g++ -c -g -ansi Word.cpp -o Word.o
    g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count
    main.o: In function `main':
    /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create'
    /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open'
    /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr'
    /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send'
    /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join'
    /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close'
    /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink'
    main.o: In function `count_words(void*)':
    /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open'
    /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr'
    /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive'
    collect2: ld returned 1 exit status

    Here's my makefile:

    CC=g++
    CFLAGS=-c -g -ansi
    LDFLAGS=-lrt
    INC=-I/usr/lib/i386-linux-gnu
    SOURCES=Word.cpp HashMap.cpp main.cpp
    OBJECTS=$(SOURCES:.cpp=.o)
    EXECUTABLE=word_count

    all: $(SOURCES) $(EXECUTABLE)

    $(EXECUTABLE): $(OBJECTS)
    	$(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@

    .cpp.o:
    	$(CC) $(CFLAGS) $< -o $@

    clean:
    	rm -f *.o word_count

    Please help me to resolve both the issues. I searched online relentlessly for any solution to these problems, but no one seems to have encountered them.

    Read the article

< Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >