Search Results

Search found 6619 results on 265 pages for 'digital logic'.


  • How can I create photo effects in Android?

    - by PaulH
    I'd like to make an Android app that lets a user apply cool effects to photos taken with the camera. There are already a few out there, I know, but I'd like to try my own hand at one. I'm trying to figure out the best way to implement these effects. Here are some examples from the excellent Vignette app (which I own): http://www.flickr.com/groups/vignetteforandroid/pool/. I have been googling and stack-overflowing, but so far I've mostly found references to published papers or books; I am currently ordering one from Amazon (Digital Image Processing: An Algorithmic Introduction Using Java). After some reading, I think I have a basic understanding of manipulating the RGB values of all the pixels in an image. My main question is: how do I come up with a transformation that produces cool effects? By cool effects I mean ones like those in the Vignette app or the iPhone apps ToyCamera and Polarize. I already have quite a bit of experience with Java, and I've made my first Android app already. Any ideas? Thanks in advance.
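    For per-pixel effects, the usual pattern on Android is to pull the ARGB values out of the Bitmap, transform each pixel, and write them back. A minimal sketch of a sepia-style filter (the weights here are a hypothetical example, not Vignette's actual algorithm):

        import android.graphics.Bitmap;
        import android.graphics.Color;

        public final class PhotoEffects {
            // Returns a sepia-toned copy of src by remapping each pixel's RGB values.
            public static Bitmap sepia(Bitmap src) {
                int w = src.getWidth(), h = src.getHeight();
                int[] px = new int[w * h];
                src.getPixels(px, 0, w, 0, 0, w, h);
                for (int i = 0; i < px.length; i++) {
                    int r = Color.red(px[i]), g = Color.green(px[i]), b = Color.blue(px[i]);
                    int gray = (r * 30 + g * 59 + b * 11) / 100;   // luma approximation
                    int nr = Math.min(255, gray + 40);             // warm the highlights
                    int ng = Math.min(255, gray + 20);
                    px[i] = Color.argb(Color.alpha(px[i]), nr, ng, gray);
                }
                Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
                out.setPixels(px, 0, w, 0, 0, w, h);
                return out;
            }
        }

    Tints, cross-processing looks and vignetting are all variations on this loop (vignetting typically adds a radial falloff based on each pixel's distance from the image center).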

    Read the article

  • Passing time from server to client

    - by gskoli
    Dear all, I am creating a digital clock using images and JavaScript, but I want it to show a time passed from the server rather than the client's time. How can I do that? I am getting the time from the server and passing it to Date. Below is the snippet I have:

        var time_str = document.clock_form.time_str.value;
        //alert(time_str);
        function dotime(){
            theTime = setTimeout('dotime();', 1000);
            //d = new Date(Date.parse(time_str));
            d = new Date(time_str);
            hr = d.getHours() + 100;
            mn = d.getMinutes() + 100;
            se = d.getSeconds() + 100;
            var time_str = document.clock_form.time_str.value;
            //alert(time_str);
            alert(' TIME ---> ' + hr + ' :: ' + mn + ' :: ' + se);
            if (hr == 100) { hr = 112; am_pm = 'am'; }
            else if (hr < 112) { am_pm = 'am'; }
            else if (hr == 112) { am_pm = 'pm'; }
            else if (hr > 112) { am_pm = 'pm'; hr = (hr - 12); }
            tot = '' + hr + mn + se;
            document.hr1.src = '/flash_files/digits/dg' + tot.substring(1,2) + '.gif';
            document.hr2.src = '/flash_files/digits/dg' + tot.substring(2,3) + '.gif';
            document.mn1.src = '/flash_files/digits/dg' + tot.substring(4,5) + '.gif';
            document.mn2.src = '/flash_files/digits/dg' + tot.substring(5,6) + '.gif';
            document.se1.src = '/flash_files/digits/dg' + tot.substring(7,8) + '.gif';
            document.se2.src = '/flash_files/digits/dg' + tot.substring(8,9) + '.gif';
            document.ampm.src = '/flash_files/digits/dg' + am_pm + '.gif';
        }
        dotime();

    But it is not working. Help me out. Thanks in advance.
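    Two likely culprits in the snippet above (a sketch of a fix, not verified against the asker's page): new Date(string) only understands formats that Date.parse accepts (e.g. "May 25, 2010 10:30:00"), and because of variable hoisting the inner var time_str makes time_str undefined at the point new Date(time_str) runs inside dotime(). A simpler structure is to parse the server time once and advance it locally:

        // Parse the server-supplied time once; the form field name is
        // taken from the question, the date format is an assumption.
        var serverTime = new Date(document.clock_form.time_str.value);

        function tick() {
            serverTime.setSeconds(serverTime.getSeconds() + 1);  // advance one second
            var hr = serverTime.getHours();
            var mn = serverTime.getMinutes();
            var se = serverTime.getSeconds();
            // ...update the digit images here as in the snippet above...
            setTimeout(tick, 1000);
        }
        tick();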

    Read the article

  • PHP : How to get specific data from array

    - by Giffary
    Hello! I am trying to use the Amazon API from PHP. If I use print_r($parsed_xml->Items->Item->ItemAttributes) it shows me a result like this:

        SimpleXMLElement Object
        (
            [Binding] => Electronics
            [Brand] => Canon
            [DisplaySize] => 2.5
            [EAN] => 0013803113662
            [Feature] => Array
                (
                    [0] => High-powered 20x wide-angle optical zoom with Optical Image Stabilizer
                    [1] => Capture 720p HD movies with stereo sound; HDMI output connector for easy playback on your HDTV
                    [2] => 2.5-inch Vari-Angle System LCD; improved Smart AUTO intelligently selects from 22 predefined shooting situations
                    [3] => DIGIC 4 Image Processor; 12.1-megapixel resolution for poster-size, photo-quality prints
                    [4] => Powered by AA batteries (included); capture images to SD/SDHC memory cards (not included)
                )
            [FloppyDiskDriveDescription] => None
            [FormFactor] => Rotating
            [HasRedEyeReduction] => 1
            [IsAutographed] => 0
            [IsMemorabilia] => 0
            [ItemDimensions] => SimpleXMLElement Object
                (
                    [Height] => 340
                    [Length] => 490
                    [Weight] => 124
                    [Width] => 350
                )
            [Label] => Canon
            [LensType] => Zoom lens
            [ListPrice] => SimpleXMLElement Object
                (
                    [Amount] => 60103
                    [CurrencyCode] => USD
                    [FormattedPrice] => $601.03
                )
            [Manufacturer] => Canon
            [MaximumFocalLength] => 100
            [MaximumResolution] => 12.1
            [MinimumFocalLength] => 5
            [Model] => SX20IS
            [MPN] => SX20IS
            [OpticalSensorResolution] => 12.1
            [OpticalZoom] => 20
            [PackageDimensions] => SimpleXMLElement Object
                (
                    [Height] => 460
                    [Length] => 900
                    [Weight] => 242
                    [Width] => 630
                )
            [PackageQuantity] => 1
            [ProductGroup] => Photography
            [ProductTypeName] => CAMERA_DIGITAL
            [ProductTypeSubcategory] => point-and-shoot
            [Publisher] => Canon
            [Studio] => Canon
            [Title] => Canon PowerShot SX20IS 12.1MP Digital Camera with 20x Wide Angle Optical Image Stabilized Zoom and 2.5-inch Articulating LCD
            [UPC] => 013803113662
        )

    My goal is to get only the Feature information. I tried $feature = $parsed_xml->Items->Item->ItemAttributes->Feature, but it doesn't work for me because it only shows the first feature. How do I get all of the feature information? Please help.
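    SimpleXML exposes repeated elements as an iterable node list, so a foreach will visit every Feature entry (a sketch based on the structure above):

        <?php
        // $parsed_xml comes from the question; iterate the repeated
        // <Feature> elements instead of reading the node once.
        foreach ($parsed_xml->Items->Item->ItemAttributes->Feature as $feature) {
            echo (string)$feature, "\n";   // cast each SimpleXMLElement to string
        }

    Casting each element to (string), as above, is what turns the SimpleXMLElement nodes into plain text values you can collect into an array.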

    Read the article

  • Query returning related assets

    - by GMo
    I have two tables: an assets table which holds digital assets (e.g. articles, images, etc.), and an asset_links table which maps 1-1 relationships between assets contained within the assets table. Here are the table definitions:

        Asset
        +---------------+--------------+------+-----+---------+----------------+
        | Field         | Type         | Null | Key | Default | Extra          |
        +---------------+--------------+------+-----+---------+----------------+
        | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
        | source        | varchar(255) | YES  |     | NULL    |                |
        | title         | varchar(255) | YES  |     | NULL    |                |
        | date_created  | datetime     | YES  |     | NULL    |                |
        | date_embargo  | datetime     | YES  |     | NULL    |                |
        | date_expires  | datetime     | YES  |     | NULL    |                |
        | date_updated  | datetime     | YES  |     | NULL    |                |
        | keywords      | varchar(255) | YES  |     | NULL    |                |
        | status        | int(11)      | YES  |     | NULL    |                |
        | priority      | int(11)      | YES  |     | NULL    |                |
        | fk_site       | int(11)      | YES  | MUL | NULL    |                |
        | resource_type | varchar(255) | YES  |     | NULL    |                |
        | resource_id   | int(11)      | YES  |     | NULL    |                |
        | fk_user       | int(11)      | YES  | MUL | NULL    |                |
        +---------------+--------------+------+-----+---------+----------------+

        Asset_links
        +-----------+---------+------+-----+---------+----------------+
        | Field     | Type    | Null | Key | Default | Extra          |
        +-----------+---------+------+-----+---------+----------------+
        | id        | int(11) | NO   | PRI | NULL    | auto_increment |
        | asset_id1 | int(11) | YES  |     | NULL    |                |
        | asset_id2 | int(11) | YES  |     | NULL    |                |
        +-----------+---------+------+-----+---------+----------------+

    In the asset_links table there are the following rows: 1 - 3, 1 - 4, 2 - 10, 2 - 56. I am looking to write one query which returns all assets that satisfy some search criteria, and, within the same query, all of the linked asset data for each of those assets. For example, a query returning assets 1 and 2 would return:

        Asset 1 attributes - Asset 3 attributes - Asset 4 attributes
        Asset 2 attributes - Asset 10 attributes - Asset 56 attributes

    What is the best way to write the query?
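    A double self-join through asset_links along these lines might be a starting point (a sketch using the column names above; the WHERE clause is a hypothetical search criterion, and it returns one row per asset/linked-asset pair rather than one row per asset):

        SELECT a.*,
               linked.*
        FROM   asset a
        JOIN   asset_links al ON al.asset_id1 = a.id
        JOIN   asset linked   ON linked.id = al.asset_id2
        WHERE  a.status = 1
        ORDER  BY a.id, linked.id;

    Collapsing the linked assets onto a single row per asset, as in the example output, would additionally need GROUP BY with something like MySQL's GROUP_CONCAT.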

    Read the article

  • What should I do from here?

    - by Sunscreen
    Hi all, first of all, the site rocks: you can ask and get specific answers, mainly for programming issues. This question is more generic. I studied physics for my bachelor's and digital image processing for my master's, finishing in September 2001. From then on I started working as a developer and software analyst. I have worked, and still work, with C, C++, AIX, Windows XP and MFC 4.21. I also did some data translations from EDIFACT to XML and vice versa. I trained users on the applications I was running and created documents (detailed design docs mainly), though most of the time I wrote, and still write, code. Recently I applied to the best Greek graduate school for my MBA and they accepted me, starting in January 2011. I am a developer with no deep insight into the languages I work with. I can be very productive with some subsets of the languages that the companies I have worked for use, though this is a limiting thing for a developer. With an MBA I could be a semi-business analyst or consultant, just as I am now a semi-developer. The problem is that I can do some, but not all, of the work in a designated area. What should I do from here? Should I get my MBA and look to switch industries? Should I read and exercise myself with new languages and frameworks? Should I focus more on the tasks delegated to me at my current job (currently I work with MFC)? Just for the note, I am 32 and I feel I am wasting my time: I am not getting the best that I can from my current position (and I have worked here for 3+ years). Thanks all, Sun

    Read the article

  • Best way to sign data in web form with user certificate

    - by salgiza
    We have a C# web app where users will connect using a digital certificate stored in their browsers. From the examples that we have seen, verifying their identity will be easy once we enable SSL, as we can access the fields of the certificate, using Request.ClientCertificate, to check the user's name. We have also been asked, however, to sign the data sent by the user (a few simple fields and a binary file) so that we can prove, without doubt, which user entered each record in our database. Our first thought was to create a small text signature including the fields (and, if possible, the MD5 of the file) and encrypt it with the private key of the certificate, but... as far as I know we can't access the private key of the certificate to sign the data, and I don't know if there is any way to sign the fields in the browser, or whether we have no option other than a Java applet. And if it's the latter, how would we do it? (Is there any open source applet we can use? Would it be better to create one ourselves?) Of course, it would be better if there was some way to "sign" the fields received on the server, using the data that we can access from the user's certificate. But if not, any info on the best way to solve the problem would be appreciated.
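    For the identity half, the server-side fields mentioned in the question are available like this (a sketch; note it only records which certificate was presented over SSL, it does not cryptographically sign the submitted data):

        // Inside an ASP.NET page or handler (System.Web)
        HttpClientCertificate cert = Request.ClientCertificate;
        if (cert.IsPresent && cert.IsValid)
        {
            string subject = cert.Subject;       // the user's distinguished name
            string serial = cert.SerialNumber;   // handy for an audit trail
            // Store subject/serial (plus a server-side hash of the uploaded
            // file) alongside the record as evidence of who submitted it.
        }

    Browser-side signing with the certificate's private key generally does require client code (an applet, or a browser-specific API such as IE's CAPICOM), since the private key never leaves the client's key store.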

    Read the article

  • Creating 1 page PDF of iPad Screen view - How?

    - by user314695
    Hi all, I've asked this question on a couple of other forums and have had zero response, so I'm hoping someone here can point me in the right direction. I have a pretty simple one-screen application for my work. It's basically just a recreation of a one-page paper report that has a company logo, some labels, a few text boxes and a scrolling text box for the report. I need to be able to fill out the report and then click a button to save it in graphical form so I can fax, print or email it later. Currently, I'm just programmatically taking a screen capture and saving it to the photo library (the default location for screen captures); then I can just email it from Photos. This works OK, but is hacky at best. I've read through the new iPad 3.2 guide for creating PDFs (apparently it's supposed to be much easier than before) but I cannot get it to work, and I've spent countless hours on it now. I'm hoping someone has the answer for me. Alternatively, if anyone knows how I can redirect where the screen capture is stored (the default is the photo album), maybe I can make that approach work: if I could redirect the screen capture to my application's Documents folder, I could use MFMailCompose to attach it to an email. Lastly, on a side note, does anyone know of a good way to capture a digital signature via touch? For instance, I'd love to have my users be able to just sign their name via touch at the bottom of the document before I convert it to PDF or take a screen capture. Thanks in advance for your help. -Ray
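    For reference, the 3.2-era UIKit PDF functions can render a view's layer straight into a one-page PDF in the app's Documents folder, which can then be attached with MFMailCompose. A minimal sketch (error handling omitted; the file name is arbitrary):

        #import <QuartzCore/QuartzCore.h>

        - (NSString *)savePDFOfView:(UIView *)view
        {
            NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                NSUserDomainMask, YES);
            NSString *path = [[dirs objectAtIndex:0]
                                 stringByAppendingPathComponent:@"report.pdf"];

            // One PDF page the size of the view, drawn from its layer
            UIGraphicsBeginPDFContextToFile(path, view.bounds, nil);
            UIGraphicsBeginPDFPage();
            [view.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIGraphicsEndPDFContext();

            return path;
        }

    For the signature capture, a common approach is a small UIView subclass that collects points in touchesMoved: into a UIBezierPath and strokes it in drawRect:.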

    Read the article

  • C preprocessor problem in Microsoft Visual Studio 2010

    - by Remo.D
    I've encountered a problem with the new Visual C++ in VS 2010. I've got a header with the following defines:

        #define STC(y) #y
        #define STR(y) STC(\y)
        #define NUM(y) 0##y

    The intent is that you can have some constant around like

        #define TOKEN x5A

    and then you can have the token as a number or as a string:

        NUM(TOKEN) -> 0x5A
        STR(TOKEN) -> "\x5A"

    This is the expected behavior under the substitution rules for macro arguments, and so far it has worked well with gcc, Open Watcom, Pelles C (lcc), Digital Mars C and Visual C++ in VS2008 Express. Today I recompiled the library with VS2010 Express, only to discover that it doesn't work anymore! Using the new version I get:

        NUM(TOKEN) -> 0x5A
        STR(TOKEN) -> "\y"

    It seems that the new preprocessor treats \y as an escape sequence even within a macro body, which is nonsense, as escape sequences only have a meaning in literal strings. I suspect this is a gray area of the ANSI standard, but even if the original behavior was mandated by the standard, MS VC++ is not exactly famous for being 100% ANSI C compliant, so I guess I'll have to live with the new behavior of the MS compiler. Given that, does anybody have a suggestion on how to re-implement the original macro behavior with VS2010?
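    One possible workaround (a sketch, not a verified VS2010 fix, and only a partial one: it yields a char array rather than a string literal) is to sidestep stringizing the backslash entirely and build the string from the numeric value, relying only on the uncontroversial NUM macro:

        #define STC(y) #y
        #define NUM(y) 0##y
        #define TOKEN  x5A

        /* A one-character string holding the byte 0x5A, equivalent at
           runtime to "\x5A", without pasting '\' in the preprocessor. */
        static const char token_str[] = { NUM(TOKEN), '\0' };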

    Read the article

  • Filter design for audio signal.

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB, and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump its structure out to a text file. My questions are the following:

    1. Can I apply digital filters directly to this sampled data (i.e., can I directly do a convolution between my input samples and h(n) for the filter function that I choose)?
    2. How do I choose the number of coefficients for the window function?

    I have Octave, so if someone can point me to anything that gives me an idea of how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequency and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help would be great.
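    For what it's worth: yes, applying an FIR filter is exactly a convolution of the samples with h(n), and Octave can do the whole round trip. A minimal sketch (assumes the octave-forge signal package for fir1; older builds ship wavread/wavwrite, newer ones audioread/audiowrite; filenames, filter order and cutoff are placeholders):

        pkg load signal;                  % provides fir1
        [x, fs] = wavread('input.wav');   % samples and sample rate
        b = fir1(64, 1000 / (fs / 2));    % 65-tap low-pass FIR, 1 kHz cutoff
                                          % (Hamming window by default)
        y = filter(b, 1, x);              % convolve the samples with h(n)
        wavwrite(y, fs, 'filtered.wav');  % write out and listen

    Choosing the number of taps is a trade-off: more coefficients give a sharper transition band at the cost of computation, and window-design formulas (e.g. for a Kaiser window) let you compute the order from the desired transition width and stopband attenuation.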

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy, with stock controls supporting basic paging functionality. However, I recently built an MVC application, and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere, and as it turns out, the task of creating a semi-generic Pager control for MVC was fairly easy. Now I'm back to working in Web Forms and thought to myself that the way I created the pager in MVC would actually also work in ASP.NET – in fact quite a bit more easily, since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provide easier reuse in various pages and a more consistent pager display regardless of what kind of 'control' the pager is associated with.

    Why a Pager Control?
    At first blush it might sound silly to create a new pager control – after all, Web Forms has pretty decent paging support, doesn't it? Well, sort of. Yes, the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built-in ASP.NET paging has several issues though:

    Postback and JavaScript requirements
    If you look at paging links in ASP.NET, they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit (like the fact that paging saves changes to the page and posts them back), it's not very SEO friendly: if you use javascript-based navigation, no search engine will follow the paging links, which effectively cuts off list content after the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer).

    DataSource Controls required for Efficient Data Paging Retrieval
    The only way to get paging to work efficiently, where only the few records you display on the page are queried for and retrieved from the database, is to use a DataSource control - only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there's no way to just assign the page number and render the pager based on this custom subset. Other than that, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you're dealing with largish resultsets (although I'm a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn't fit an ObjectDataSource, you're SOL. That's a real shame too, because with LINQ-based querying it's real easy to retrieve just the subset of data you want to display, but the native pager functionality doesn't support simply setting properties to display just that subset, AFAIK.

    DataPager is not Free Standing
    The DataPager control is the closest thing to a decent pager implementation that ASP.NET has, but alas it's not a free-standing component – it works off a related control, and the only one it effectively supports from the stock ASP.NET controls is the ListView control. This means you can't use the same data pager formatting for a grid and a list view or vice versa, and you're always tied to the control.
    Paging Events
    In order to handle paging you have to deal with paging events. The events fire at specific times in the page pipeline, and because of this you often have to handle data binding in a way that works around the paging events, or else end up double-binding your data sources based on paging. Yuk.

    Styling
    The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layouts and it renders somewhat cleaner, but it too is not exactly easy to get a decent display out of.

    Not a Generic Solution
    The problem with the ASP.NET controls, too, is that they're not generic. GridView and DataGrid use their own internal paging, ListView can use a DataPager, and if you want to manually create a data layout – well, you're on your own. IOW, depending on what you use, you likely have very different looking paging experiences.

    So, I figured I've struggled with this one time too many and finally sat down and built a Pager control.

    The Pager Control
    My goal was to create a totally free-standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features:

    - Completely free standing Pager control – no control or data dependencies
    - Complete manual control – Pager can render without any data dependency
    - Easy to use: only need to set PageSize, ActivePage and TotalItems
    - Supports optional filtering of IQueryable for efficient queries and Pager rendering
    - Supports optional full-set filtering of IEnumerable<T> and DataTable
    - Page links are plain HTTP GET href links
    - Control automatically picks up page links on the URL and assigns them (automatic page detection, no page index changing events to hook up)
    - Full CSS styling support

    On the downside, there's no templating support for the control, so the layout of the pager is relatively fixed. All elements however are stylable, and there are options to control the text and layout, such as whether to display the first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS). The markup for these two pagers looks like this:

        <ww:Pager runat="server" id="ItemPager"
                  PageSize="5"
                  PageLinkCssClass="gridpagerbutton"
                  SelectedPageCssClass="gridpagerbutton-selected"
                  PagesTextCssClass="gridpagertext"
                  CssClass="gridpager"
                  RenderContainerDiv="true"
                  ContainerDivCssClass="gridpagercontainer"
                  MaxPagesToDisplay="6"
                  PagesText="Item Pages:"
                  NextText="next"
                  PreviousText="previous" />

        <ww:Pager runat="server" id="ItemPager2"
                  PageSize="5"
                  RenderContainerDiv="true"
                  MaxPagesToDisplay="6" />

    The latter example uses default style settings, so there's not much to set. The first example, on the other hand, explicitly assigns custom styles and overrides a few of the formatting options.

    Styling
    The styling is based on a number of CSS classes, of which the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style.
    The default styling shown for the red-outlined pager looks like this:

        .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; }
        .pager { float: right; font-size: 10pt; text-align: left; }
        .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; }
        .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; }
        .pagerbutton-first { margin-right: 12px; }
        .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; }
        .pagertext { border: none; margin-left: 30px; font-weight: bold; }
        .pagerbutton a { text-decoration: none; }
        .pagerbutton:hover { background-color: maroon; color: cornsilk; }
        .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; }
        .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; }

    Yup, that's a lot of styling settings, although not all of them are required. The key ones are pagerbutton, pager and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the 'special' buttons. In my apps I tend to have two kinds of pages: those that are associated with typical 'grid' displays of purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles.

    Configuring the Pager with Code
    Finally, let's look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own, it's as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup:

        <ww:Pager runat="server" id="ItemPagerManual"
                  PageSize="5"
                  MaxPagesToDisplay="6" />

    I can use code as simple as:

        ItemPagerManual.PageSize = 3;
        ItemPagerManual.ActivePage = 4;
        ItemPagerManual.TotalItems = 20;

    Note that ActivePage is not required – the control will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or assign manually as I did above.

    A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a user control here because there are different views the user can choose from, with each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control, I can put the pager on the host page instead of into each of the user controls. IOW, there's only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

        protected void Page_Load(object sender, EventArgs e)
        {
            Category = Request.Params["Category"] ?? string.Empty;

            IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

            // Update the page and filter the list down
            ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

            // Render user control with a list view
            Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
            ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;
            phItemList.Controls.Add(ulItemList); // placeholder
        }

    The code uses a business object to retrieve items by category as an IQueryable, which means that the result is only an expression tree that hasn't executed SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things:

    - Filters the IQueryable to retrieve only the data displayed on the active page
    - Sets the TotalItems property and calculates TotalPages on the Pager

    And that's it! When the Pager renders, it uses those values, plus the PageSize and ActivePage properties, to render the pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but those versions just filter the data by removing rows/items from the entire, already retrieved data.

    Output Generated and Paging Links
    The control generates pager links as plain href links. Here's what the output looks like:

        <div id="ItemPager" class="pagercontainer">
        <div class="pager">
        <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a>
        <span class="pagerbutton-selected">4</span>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div>
        <br clear="all" />
        </div>
        </div>

    The links point back to the current page and simply append a Page= value to the query string. When the page gets reloaded with the new page number, the pager automatically detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that's needed to handle paging.

    Note that HTTP GET based paging is different from the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using postback paging is that the URLs generated are plain HTML links that a search engine can follow, where __doPostback() links are not.

    Pager with a Grid
    The pager also works in combination with grid controls, so it's easy to bypass the grid control's paging features if desired. In the following example I use a GridView control and bind it to a DataTable result, which is also filterable by the Pager control.
    The very basic, plain-vanilla ASP.NET grid markup looks like this:

        <div style="width: 600px; margin: 0 auto; padding: 20px;">
            <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems"
                          CssClass="blackborder" style="width: 600px;">
                <AlternatingItemStyle CssClass="gridalternate" />
                <HeaderStyle CssClass="gridheader" />
            </asp:DataGrid>
            <ww:Pager runat="server" ID="Pager"
                      CssClass="gridpager"
                      ContainerDivCssClass="gridpagercontainer"
                      PageLinkCssClass="gridpagerbutton"
                      SelectedPageCssClass="gridpagerbutton-selected"
                      PageSize="8"
                      RenderContainerDiv="true"
                      MaxPagesToDisplay="6" />
        </div>

    and renders using a custom set of CSS styles. The code behind for this page is also very simple:

        protected void Page_Load(object sender, EventArgs e)
        {
            string category = Request.Params["category"] ?? "";
            busItem itemRep = WebStoreFactory.GetItem();
            var items = itemRep.GetItemsByCategory(category)
                               .Select(itm => new { Sku = itm.Sku, Description = itm.Description });

            // run query into a DataTable for demonstration
            DataTable dt = itemRep.Converter.ToDataTable(items, "TItems");

            // Remove all items not on the current page
            dt = Pager.FilterDataTable(dt, 0);

            // bind and display
            gdItems.DataSource = dt;
            gdItems.DataBind();
        }

    A little contrived, I suppose, since the list could already be bound from the list of elements, but this demonstrates that you can also bind against a DataTable if your business layer returns those. Unfortunately there's no way to filter a DataReader, as it's a one-way, forward-only reader, and the reader is required by the DataSource to perform the bindings. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query).

    Control Creation
    The control itself is a pretty brute-force ASP.NET control. There's nothing clever about it other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code in the West Wind Web Toolkit's repository (note there are a few dependencies).
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
!string.IsNullOrEmpty(NextText) && ActivePage < TotalPages) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(NextText); writer.WriteEndTag("a"); } writer.WriteEndTag("div"); if (RenderContainerDiv) { if (RenderContainerDivBreak) writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } } As I said pretty much brute force rendering based on the control’s property settings of which there are quite a few: You can also see the pager in the designer above. unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied in the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again I’m not using the designer anyway :-}. Filtering Data What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice as it violates separation of concerns, it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having it exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you can use that instead and assign the TotalItems property manually. There are three filter method types available for IQueryable, IEnumerable and for DataTable which tend to be the most common use cases in my apps old and new. The IQueryable version is pretty simple as it can simply rely on on .Skip() and .Take() with LINQ: /// <summary> /// <summary> /// Queries the database for the ActivePage applied manually /// or from the Request["page"] variable. This routine /// figures out and sets TotalPages, ActivePage and /// returns a filtered subset IQueryable that contains /// only the items from the ActivePage. /// </summary> /// <param name="query"></param> /// <param name="activePage"> /// The page you want to display. Sets the ActivePage property when passed. /// Pass 0 or smaller to use ActivePage setting. /// </param> /// <returns></returns> public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new() { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = query.Count(); if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return query; } int skip = ActivePage - 1; if (skip > 0) query = query.Skip(skip * PageSize); _TotalPages = CalculateTotalPagesFromTotalItems(); return query.Take(PageSize); } The IEnumerable<T> version simply  converts the IEnumerable to an IQuerable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn’t want to add the Linq DataSetExtensions assembly just for this): /// <summary> /// Filters a data table for an ActivePage. /// /// Note: Modifies the data set permanently by remove DataRows /// </summary> /// <param name="dt">Full result DataTable</param> /// <param name="activePage">Page to display. 
0 to use ActivePage property </param> /// <returns></returns> public DataTable FilterDataTable(DataTable dt, int activePage) { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = dt.Rows.Count; if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return dt; } int skip = ActivePage - 1; if (skip > 0) { for (int i = 0; i < skip * PageSize; i++ ) dt.Rows.RemoveAt(0); } while(dt.Rows.Count > PageSize) dt.Rows.RemoveAt(PageSize); return dt; } Using the Pager Control The pager as it is is a first cut I built a couple of weeks ago and since then have been tweaking a little as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues and have what now feels like a more consistent user interface where paging looks and feels the same across different controls. As a bonus I’m only loading the data from the database that I need to display a single page. With the preset class tags applied too adding a pager is now as easy as dropping the control and adding the style sheet for styling to be consistent – no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have or at least as a baseline to build ontop of… Resources The Pager is part of the West Wind Web & Ajax Toolkit Pager.cs Source Code (some toolkit dependencies) Westwind.css base stylesheet with .pager and .gridpager styles Pager Example Page © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Java Cloud Service Integration to REST Service

    - by Jani Rautiainen
    Oracle Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments, customers do not have the option of deploying custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples of how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with a REST service will be implemented. We will use the REST services provided by Taleo as an example; however, the same approach will work with any REST service. In this example the data from the REST service is used to populate a dynamic table.

    Pre-requisites

    Access to a Cloud instance
    In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register, simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details; see this video for an example of the registration. Once the request is processed you will be assigned two service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud.

    JDeveloper
    JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.

    Access to a local database
    The database associated with the JCS instance cannot be connected to with JDBC. Since creating ADFbc business components requires a JDBC connection, we will need access to a local database.

    3rd party libraries
    This example will use some 3rd party libraries for implementing the REST service call and processing the input / output content. Other libraries may also be used; however, these are tested to work.

    Jersey 1.x
    The Jersey library will be used as the client to make the call to the REST service. The JCS documentation for supported specifications states: Java API for RESTful Web Services (JAX-RS) 1.1. So Jersey 1.x will be used. Download the single-JAR Jersey bundle; in this example the Jersey 1.18 JAR bundle is used.

    Json-simple
    The json-simple library will be used to process the JSON objects. Download the JAR file; in this example json-simple-1.1.1.jar is used.

    Accessing data in Taleo
    Before implementing the application it is beneficial to familiarize oneself with the data in Taleo.
    The easiest way to do this is by using a REST client in your browser. Once added to the browser, the client can be used to call the REST services to test the URLs and data before adding them to the application.

    First, derive the base URL for the service. This can be done with:

        Method: GET
        URL:    https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/<company name>

    The response will contain the base URL to be used for the service calls for the company. Next, obtain an authentication token with:

        Method: POST
        URL:    https://ch.tbe.taleo.net/CH07/ats/api/v1/login?orgCode=<company>&userName=<user name>&password=<password>

    The response includes an authentication token that can be used for a few hours to authenticate with the service:

        {
          "response": {
            "authToken": "webapi26419680747505890557"
          },
          "status": {
            "detail": {},
            "success": true
          }
        }

    To authenticate the service calls, navigate to "Headers -> Custom Header" and add a new request header with:

        Name:  Cookie
        Value: authToken=webapi26419680747505890557

    Once the authentication token is defined, the tool can be used to invoke REST services; for example:

        Method: GET
        URL:    https://ch.tbe.taleo.net/CH07/ats/api/v1/object/candidate/search.xml?status=16

    This data will be used in the application to be created. For details on the Taleo REST services refer to the Taleo Business Edition REST API Guide.

    Create Application
    First, a Fusion Web Application is created and configured. Start JDeveloper and click "New Application":

        Application Name:           JcsRestDemo
        Application Package Prefix: oracle.apps.jcs.test
        Application Template:       Fusion Web Application (ADF)

    Configure Local Cloud Connection
    Follow the steps documented in the "Java Cloud Service ADF Web Application" article to configure a local database connection needed to create the ADFbc objects.

    Configure Libraries
    Add the 3rd party libraries to the class path. Create the following directory and copy the jar files into it: <JDEV_USER_HOME>/JcsRestDemo/lib. Select the "Model" project, navigate "Application -> Project Properties -> Libraries and Classpath -> Add JAR / Directory" and add the two 3rd party libraries.

    Accessing Data from Taleo
    To access data from Taleo using the REST service, the 3rd party libraries will be used. Two Java classes are implemented: one representing the Candidate object and another for accessing the Taleo repository.

    Candidate
    The Candidate object is a POJO used to represent the candidate data obtained from the Taleo repository. The data obtained will be used to populate the ADFbc object used to display the data on the UI. The Candidate object simply contains the variables we obtain using the REST services and the getters / setters for them. Navigate "New -> General -> Java -> Java Class", enter "Candidate" as the name and create it in the package "oracle.apps.jcs.test.model".
Copy / paste the following as the content: import oracle.jbo.domain.Number; public class Candidate { private Number candId; private String firstName; private String lastName; public Candidate() { super(); } public Candidate(Number candId, String firstName, String lastName) { super(); this.candId = candId; this.firstName = firstName; this.lastName = lastName; } public void setCandId(Number candId) { this.candId = candId; } public Number getCandId() { return candId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Taleo Repository Taleo repository class will interact with the Taleo REST services. The logic will query data from Taleo and populate Candidate objects with the data. The Candidate object will then be used to populate the ADFbc object used to display data on the UI. Navigate "New -> General -> Java -> Java Class", enter "TaleoRepository" as the name and create it in the package "oracle.apps.jcs.test.model".  Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; import com.sun.jersey.core.util.MultivaluedMapImpl; import java.io.StringReader; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import oracle.jbo.domain.Number; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * This class interacts with the Taleo REST services */ public class TaleoRepository { /** * Connection information needed to access the Taleo services */ String _company = null; String _userName = null; String _password = null; /** * Jersey client used to access the REST services */ Client _client = null; /** * Parser for processing the JSON objects used as * input / output for the services */ JSONParser _parser = null; /** * The base url for constructing the REST URLs. This is obtained * from Taleo with a service call */ String _baseUrl = null; /** * Authentication token obtained from Taleo using a service call. * The token can be used to authenticate on subsequent * service calls. The token will expire in 4 hours */ String _authToken = null; /** * Static url that can be used to obtain the url used to construct * service calls for a given company */ private static String _taleoUrl = "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/"; /** * Default constructor for the repository * Authentication details are passed as parameters and used to generate * authentication token. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * * @param company the company for which the service calls are made * @param userName the user name to authenticate with * @param password the password to authenticate with. 
*/ public TaleoRepository(String company, String userName, String password) { super(); _company = company; _userName = userName; _password = password; _client = Client.create(); _parser = new JSONParser(); _baseUrl = getBaseUrl(); } /** * This obtains the base url for a company to be used * to construct the urls for service calls * @return base url for the service calls */ private String getBaseUrl() { String result = null; if (null != _baseUrl) { result = _baseUrl; } else { try { String company = _company; WebResource resource = _client.resource(_taleoUrl + company); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("URL"); } catch (Exception ex) { ex.printStackTrace(); } } return result; } /** * Generates authentication token, that can be used to authenticate on * subsequent service calls. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * @return authentication token that can be used to authenticate on * subsequent service calls */ private String login() { String result = null; try { MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("orgCode", _company); formData.add("userName", _userName); formData.add("password", _password); WebResource resource = _client.resource(_baseUrl + "login"); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).post(ClientResponse.class, formData); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("authToken"); } catch (Exception ex) { throw new RuntimeException("Unable to login ", ex); } if (null == result) throw new RuntimeException("Unable to login "); return result; } /** * Releases a authentication token. Each call to login must be followed * by call to logout after the processing is done. This is required as * the tokens are limited to 20 per user and if not released the tokens * will only expire after 4 hours. * @param authToken */ private void logout(String authToken) { WebResource resource = _client.resource(_baseUrl + "logout"); resource.header("cookie", "authToken=" + authToken).post(ClientResponse.class); } /** * This method is used to obtain a list of candidates using a REST * service call. At this example the query is hard coded to query * based on status. 
The url constructed to access the service is: * <_baseUrl>/object/candidate/search.xml?status=16 * @return List of candidates obtained with the service call */ public List<Candidate> getCandidates() { List<Candidate> result = new ArrayList<Candidate>(); try { // First login, note that in finally block we must have logout _authToken = "authToken=" + login(); /** * Construct the URL, the resulting url will be: * <_baseUrl>/object/candidate/search.xml?status=16 */ MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("status", "16"); JSONArray searchResults = (JSONArray)getTaleoResource("object/candidate/search", "searchResults", formData); /** * Process the results, the resulting JSON object is something like * this (simplified for readability): * * { * "response": * { * "searchResults": * [ * { * "candidate": * { * "candId": 211, * "firstName": "Mary", * "lastName": "Stochi", * logic here will find the candidate object(s), obtain the desired * data from them, construct a Candidate object based on the data * and add it to the results. */ for (Object object : searchResults) { JSONObject temp = (JSONObject)object; JSONObject candidate = (JSONObject)findObject(temp, "candidate"); Long candIdTemp = (Long)candidate.get("candId"); Number candId = (null == candIdTemp ? null : new Number(candIdTemp)); String firstName = (String)candidate.get("firstName"); String lastName = (String)candidate.get("lastName"); result.add(new Candidate(candId, firstName, lastName)); } } catch (Exception ex) { ex.printStackTrace(); } finally { if (null != _authToken) logout(_authToken); } return result; } /** * Convenience method to construct url for the service call, invoke the * service and obtain a resource from the response * @param path the path for the service to be invoked. This is combined * with the base url to construct a url for the service * @param resource the key for the object in the response that will be * obtained * @param parameters any parameters used for the service call. The call * is slightly different depending whether parameters exist or not. * @return the resource from the response for the service call */ private Object getTaleoResource(String path, String resource, MultivaluedMap<String, String> parameters) { Object result = null; try { WebResource webResource = _client.resource(_baseUrl + path); ClientResponse response = null; if (null == parameters) response = webResource.header("cookie", _authToken).get(ClientResponse.class); else response = webResource.queryParams(parameters).header("cookie", _authToken).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); result = findObject(jsonObject, resource); } catch (Exception ex) { ex.printStackTrace(); } return result; } /** * Convenience method to recursively find a object with an key * traversing down from a given root object. This will traverse a * JSONObject / JSONArray recursively to find a matching key, if found * the object with the key is returned. 
* @param root root object which contains the key searched for * @param key the key for the object to search for * @return the object matching the key */ private Object findObject(Object root, String key) { Object result = null; if (root instanceof JSONObject) { JSONObject rootJSON = (JSONObject)root; if (rootJSON.containsKey(key)) { result = rootJSON.get(key); } else { Iterator children = rootJSON.entrySet().iterator(); while (children.hasNext()) { Map.Entry entry = (Map.Entry)children.next(); Object child = entry.getValue(); if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } } else if (root instanceof JSONArray) { JSONArray rootJSON = (JSONArray)root; for (Object child : rootJSON) { if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } return result; } }   Creating Business Objects While JCS application can be created without a local database, the local database is required when using ADFbc objects even if database objects are not referred. For this example we will create a "Transient" view object that will be programmatically populated based the data obtained from Taleo REST services. Creating ADFbc objects Choose the "Model" project and navigate "New -> Business Tier : ADF Business Components : View Object". On the "Initialize Business Components Project" choose the local database connection created in previous step. On Step 1 enter "JcsRestDemoVO" on the "Name" and choose "Rows populated programmatically, not based on query": On step 2 create the following attributes: CandId Type: Number Updatable: Always Key Attribute: checked Name Type: String Updatable: Always On steps 3 and 4 accept defaults and click "Next".  On step 5 check the "Application Module" checkbox and enter "JcsRestDemoAM" as the name: Click "Finish" to generate the objects. Populating the VO To display the data on the UI the "transient VO" is populated programmatically based on the data obtained from the Taleo REST services. Open the "JcsRestDemoVOImpl.java". Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import java.sql.ResultSet; import java.util.List; import java.util.ListIterator; import oracle.jbo.server.ViewObjectImpl; import oracle.jbo.server.ViewRowImpl; import oracle.jbo.server.ViewRowSetImpl; // --------------------------------------------------------------------- // --- File generated by Oracle ADF Business Components Design Time. // --- Tue Feb 18 09:40:25 PST 2014 // --- Custom code may be added to this class. // --- Warning: Do not modify method signatures of generated methods. // --------------------------------------------------------------------- public class JcsRestDemoVOImpl extends ViewObjectImpl { /** * This is the default constructor (do not remove). */ public JcsRestDemoVOImpl() { } @Override public void executeQuery() { /** * For some reason we need to reset everything, otherwise * 2nd entry to the UI screen may fail with * "java.util.NoSuchElementException" in createRowFromResultSet * call to "candidates.next()". I am not sure why this is happening * as the Iterator is new and "hasNext" is true at the point * of the execution. My theory is that since the iterator object is * exactly the same the VO cache somehow reuses the iterator including * the pointer that has already exhausted the iterable elements on the * previous run. 
         * Working around the issue here by cleaning out everything on the VO
         * every time before the query is executed on the VO.
         */
        getViewDef().setQuery(null);
        getViewDef().setSelectClause(null);
        setQuery(null);
        this.reset();
        this.clearCache();
        super.executeQuery();
    }

    /**
     * executeQueryForCollection - overridden for custom java data source support.
     */
    protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) {
        /**
         * Integrate with the Taleo REST services using the TaleoRepository class.
         * A list of candidates matching a hard-coded query is obtained.
         */
        TaleoRepository repository = new TaleoRepository(<company>, <username>, <password>);
        List<Candidate> candidates = repository.getCandidates();
        /**
         * Store an iterator for the candidates as user data on the collection.
         * This will be used in createRowFromResultSet to create rows based on
         * the custom iterator.
         */
        ListIterator<Candidate> candidatesIterator = candidates.listIterator();
        setUserDataForCollection(qc, candidatesIterator);
        super.executeQueryForCollection(qc, params, noUserParams);
    }

    /**
     * hasNextForCollection - overridden for custom java data source support.
     */
    protected boolean hasNextForCollection(Object qc) {
        boolean result = false;
        /**
         * Determines whether there are candidates for which to create a row
         */
        ListIterator<Candidate> candidates = (ListIterator<Candidate>) getUserDataForCollection(qc);
        result = candidates.hasNext();
        /**
         * If all candidates have been created, indicate that processing is done
         */
        if (!result) {
            setFetchCompleteForCollection(qc, true);
        }
        return result;
    }

    /**
     * createRowFromResultSet - overridden for custom java data source support.
     */
    protected ViewRowImpl createRowFromResultSet(Object qc, ResultSet resultSet) {
        /**
         * Obtain the next candidate from the collection and create a row
         * for it.
         */
        ListIterator<Candidate> candidates = (ListIterator<Candidate>) getUserDataForCollection(qc);
        ViewRowImpl row = createNewRowForCollection(qc);
        try {
            Candidate candidate = candidates.next();
            row.setAttribute("CandId", candidate.getCandId());
            row.setAttribute("Name", candidate.getFirstName() + " " + candidate.getLastName());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return row;
    }

    /**
     * getQueryHitCount - overridden for custom java data source support.
     */
    public long getQueryHitCount(ViewRowSetImpl viewRowSet) {
        /**
         * For this example this is not implemented; we always return 0.
         */
        return 0;
    }
}

Creating UI

Choose the "ViewController" project and navigate "New -> Web Tier : JSF : JSF Page". In the "Create JSF Page" dialog enter "JcsRestDemo" as the name and ensure that "Create as XML document (*.jspx)" is checked. Open "JcsRestDemo.jspx", navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1" and drag & drop the VO onto the "<af:form>" as an "ADF Read-only Table". Accept the defaults in "Edit Table Columns". To execute the query, navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1 -> Operations -> Execute" and drag & drop the operation onto the "<af:form>" as a "Button".

Deploying to JCS

Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed, the application can be accessed at the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsRestDemo-ViewController-context-root/faces/JcsRestDemo.jspx

The UI displays a list of candidates obtained from the Taleo REST services.

Summary

In this article we learned how to integrate with REST services using the Jersey library in JCS.
In future articles various other integration techniques will be covered.
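
For completeness, the repository methods above rely on fields whose initialization falls outside this excerpt (_client, _baseUrl, _authToken, _parser, and the login/logout calls). A minimal sketch of that state, assuming Jersey 1.x and json-simple (the types the code above uses); the constructor matches the new TaleoRepository(<company>, <username>, <password>) call seen in JcsRestDemoVOImpl, while the base URL shown and the login/logout bodies are assumptions, not code from the article:

import com.sun.jersey.api.client.Client;
import org.json.simple.parser.JSONParser;

public class TaleoRepository {
    private final Client _client = Client.create();      // Jersey 1.x HTTP client
    private final JSONParser _parser = new JSONParser(); // json-simple parser
    private final String _baseUrl;                       // service base URL, ends with "/"
    private String _authToken = null;                    // set on login, sent as a cookie

    // Hypothetical constructor; company/userName/password would be used by login()
    public TaleoRepository(String company, String userName, String password) {
        // Placeholder; the real dispatcher URL is obtained from Taleo for the company
        this._baseUrl = "https://<taleo host>/" + company + "/";
    }

    private String login() throws Exception {
        // POST to the login service and return the auth token (omitted in this excerpt)
        throw new UnsupportedOperationException("login omitted in this excerpt");
    }

    private void logout(String authToken) {
        // POST to the logout service with the auth token cookie (omitted in this excerpt)
    }
}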

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc.. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application.

We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:

1. Delete the table named "resource".
2. Create a new table named "resource". These 2 steps ensure that we have an empty table.
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account name and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable storing the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

Once we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion operation asynchronously in the POSIX async thread pool, and once it's done the callback function is performed. This is the reason we need to nest the table creation code inside the deletion function. If we placed the table creation code after the deletion code, they would be invoked in parallel. Next, for each record in WASD I created an entity and then inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                    for (var i = 0; i < results.rows.length; i++) {
                                        var entity = {
                                            "PartitionKey": results.rows[i][1],
                                            "RowKey": results.rows[i][0],
                                            "Value": results.rows[i][2]
                                        };
                                        client.insertEntity(tableName, entity, function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("entity inserted");
                                            }
                                        });
                                    }
                                    // send the response
                                    console.log("all done");
                                    res.send(200, "All done!");
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Now we can publish it to the cloud and have a try, but normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code changes as below.

// windows azure sql database
//var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
// sql server
var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
// windows azure storage
//var client = azure.createTableService(storageAccountName, storageAccountKey);
// local storage emulator
var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see it transformed the data from my local database to the local table service. Everything looks fine, but there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send an error message to the browser. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform for each item in the array. The third argument will be invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method from the table service client, providing the partition key and row key. We can also provide more complex query criteria; see, for example, the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

I then tested it in the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. We can also use the endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js runs inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and is able to read the configurations.
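
For reference, the CSDEF change just described looks roughly like this; a sketch assuming the role is named "WorkerRole1" and uses the "nodejs" endpoint that the code above reads (exact attributes may differ by SDK version):

<WorkerRole name="WorkerRole1">
  <Runtime>
    <EntryPoint>
      <!-- Make node.exe the entry point so the service runtime opens its
           named pipe to the Node.js process instead of the worker role class -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <Endpoints>
    <InputEndpoint name="nodejs" protocol="tcp" port="80" />
  </Endpoints>
</WorkerRole>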
Starting Node.js in the local emulator, we can see it retrieved the connection settings and storage account for the local environment. And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL Server and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • I need help on my C++ assignment using MS Visual C++

    - by krayzwytie
    Ok, so I don't want you to do my homework for me, but I'm a little lost with this final assignment and need all the help I can get. Learning about programming is tough enough, but doing it online is next to impossible for me... Now, to get to the program, I am going to paste what I have so far. This includes mostly //comments and what I have written so far. If you can help me figure out where all the errors are and how to complete the assignment, I will really appreciate it. Like I said, I don't want you to do my homework for me (it's my final), but any constructive criticism is welcome. This is my final assignment for this class and it is due tomorrow (Sunday before midnight, Arizona time). This is the assignment: Examine the following situation: o Your company, Datamax, Inc., is in the process of automating its payroll systems. Your manager has asked you to create a program that calculates overtime pay for all employees. Your program must take into account the employee’s salary, total hours worked, and hours worked more than 40 in a week, and then provide an output that is useful and easily understood by company management. • Compile your program utilizing the following background information and the code outline in Appendix D (included in the code section). • Submit your project as an attachment including the code and the output. Company Background: o Three employees: Mark, John, and Mary o The end user needs to be prompted for three specific pieces of input—name, hours worked, and hourly wage. o Calculate overtime if input is greater than 40 hours per week. o Provide six test plans to verify the logic within the program. o Plan 1 must display the proper information for employee #1 with overtime pay. o Plan 2 must display the proper information for employee #1 with no overtime pay. o Plans 3-6 are duplicates of plan 1 and 2 but for the other employees. Program Requirements: o Define a base class to use for the entire program. o The class holds the function calls and the variables related to the overtime pay calculations. o Define one object per employee. Note there will be three employees. o Your program must take the objects created and implement calculations based on total salaries, total hours, and the total number of overtime hours. See the Employee Summary Data section of the sample output. Logic Steps to Complete Your Program: o Define your base class. o Define your objects from your base class. o Prompt for user input, updating your object classes for all three users. o Implement your overtime pay calculations. o Display overtime or regular time pay calculations. See the sample output below. o Implement object calculations by summarizing your employee objects and display the summary information in the example below. And this is the code: // Final_Project.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include <iostream> #include <string> #include <iomanip> using namespace std; // //CLASS DECLARATION SECTION // class CEmployee { public: void ImplementCalculations(string EmployeeName, double hours, double wage); void DisplayEmployInformation(void); void Addsomethingup (CEmployee, CEmployee, CEmployee); string EmployeeName ; int hours ; int overtime_hours ; int iTotal_hours ; int iTotal_OvertimeHours ; float wage ; float basepay ; float overtime_pay ; float overtime_extra ; float iTotal_salaries ; float iIndividualSalary ; }; int main() { system("cls"); cout << "Welcome to the Employee Pay Center"; /* Use this section to define your objects. 
You will have one object per employee. You have only three employees. The format is your class name and your object name. */ std::cout << "Please enter Employee's Name: "; std::cin >> EmployeeName; std::cout << "Please enter Total Hours for (EmployeeName): "; std::cin >> hours; std::cout << "Please enter Base Pay for(EmployeeName): "; std::cin >> basepay; /* Here you will prompt for the first employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Example of Prompts Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the second employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the third employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will implement a function call to implement the employ calcuations for each object defined above. You will do this for each of the three employees or objects. The format for this step is the following: [(object name.function name(objectname.name, objectname.hours, objectname.wage)] ; */ /* This section you will send all three objects to a function that will add up the the following information: - Total Employee Salaries - Total Employee Hours - Total Overtime Hours The format for this function is the following: - Define a new object. - Implement function call [objectname.functionname(object name 1, object name 2, object name 3)] /* } //End of Main Function void CEmployee::ImplementCalculations (string EmployeeName, double hours, double wage){ //Initialize overtime variables overtime_hours=0; overtime_pay=0; overtime_extra=0; if (hours > 40) { /* This section is for the basic calculations for calculating overtime pay. - base pay = 40 hours times the hourly wage - overtime hours = hours worked – 40 - overtime pay = hourly wage * 1.5 - overtime extra pay over 40 = overtime hours * overtime pay - salary = overtime money over 40 hours + your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // if (hours > 40) else { /* Here you are going to calculate the hours less than 40 hours. - Your base pay is = your hours worked times your wage - Salary = your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // End of the else } //End of Primary Function void CEmployee::DisplayEmployInformation(); { // This function displays all the employee output information. /* This is your cout statements to display the employee information: Employee Name ............. = Base Pay .................. = Hours in Overtime ......... = Overtime Pay Amount........ = Total Pay ................. = */ } // END OF Display Employee Information void CEmployee::Addsomethingup (CEmployee Employ1, CEmployee Employ2) { // Adds two objects of class Employee passed as // function arguments and saves them as the calling object's data member values. /* Add the total hours for objects 1, 2, and 3. Add the salaries for each object. Add the total overtime hours. 
*/ /* Then display the information below. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% EMPLOYEE SUMMARY DATA%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% Total Employee Salaries ..... = 576.43 %%%% Total Employee Hours ........ = 108 %%%% Total Overtime Hours......... = 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% */ } // End of function
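
For orientation only, here is a minimal sketch of just the overtime arithmetic spelled out in the comments above (base pay = 40 hours x wage; overtime rate = wage x 1.5); the names and sample inputs are illustrative, not the assignment's required class design:

// Minimal sketch of the pay calculation described in the assignment comments.
#include <iostream>

int main() {
    double hours = 45.0;   // sample input: total hours worked
    double wage  = 10.0;   // sample input: hourly wage

    double basepay = 0.0, overtime_hours = 0.0, overtime_extra = 0.0;
    if (hours > 40.0) {
        basepay        = 40.0 * wage;        // base pay = 40 hours * hourly wage
        overtime_hours = hours - 40.0;       // hours worked beyond 40
        double overtime_pay = wage * 1.5;    // time-and-a-half rate
        overtime_extra = overtime_hours * overtime_pay;
    } else {
        basepay = hours * wage;              // no overtime
    }
    double salary = basepay + overtime_extra;
    std::cout << "Total pay = " << salary << '\n'; // 400 + 5 * 15 = 475 for the sample
    return 0;
}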

    Read the article

  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
I found one issue with InternalsVisibleTo attribute usage. The idea of the InternalsVisibleTo attribute is to allow some other assemblies to use the internal classes\methods of this assembly. To make it work you need to sign your assemblies. So, if other assemblies aren't specified in the main assembly, or if they have an incorrect public key, then they can't use the internal members. But there is an issue with Reflection.Emit type generation. For example, we have a CorpLibrary1 assembly and it has such a class:

public class TestApi
{
    internal virtual void DoSomething()
    {
        Console.WriteLine("Base DoSomething");
    }

    public void DoApiTest()
    {
        // some internal logic
        // ...
        // call internal method
        DoSomething();
    }
}

This assembly is marked with the following attribute to allow another assembly, CorpLibrary2, to inherit from TestApi and override the behaviour of the DoSomething method.

[assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")]

The issue is that I'm able to override this internal DoSomething method and break the class logic. My steps to do it:

- Generate a new assembly at runtime via AssemblyBuilder
- Get the AssemblyName from CorpLibrary1 and copy its PublicKey to the new assembly
- Generate a new type in that assembly that inherits from the TestApi class
- As the PublicKey and name of the generated assembly are the same as in InternalsVisibleTo, we can generate a new DoSomething method that overrides the internal method in the TestApi assembly

Then we have another assembly that isn't related to CorpLibrary1 and can't use its internal members. We have such test code in it:

class Program
{
    static void Main(string[] args)
    {
        var builder = new FakeBuilder<TestApi>(InjectBadCode, "DoSomething", true);
        TestApi fakeType = builder.CreateFake();
        fakeType.DoApiTest();
        // it will display:
        // Inject bad code
        // Base DoSomething
        Console.ReadLine();
    }

    public static void InjectBadCode()
    {
        Console.WriteLine("Inject bad code");
    }
}

And this FakeBuilder class has such code:

/// <summary>
/// Builder that will generate an inheritor for the specified assembly and will overload the specified internal virtual method
/// </summary>
/// <typeparam name="TFakeType">Target type</typeparam>
public class FakeBuilder<TFakeType>
{
    private readonly Action _callback;
    private readonly Type _targetType;
    private readonly string _targetMethodName;
    private readonly string _slotName;
    private readonly bool _callBaseMethod;

    public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod)
    {
        int randomId = new Random((int)DateTime.Now.Ticks).Next();
        _slotName = string.Format("FakeSlot_{0}", randomId);
        _callback = callback;
        _targetType = typeof(TFakeType);
        _targetMethodName = targetMethodName;
        _callBaseMethod = callBaseMethod;
    }

    public TFakeType CreateFake()
    {
        // as CorpLibrary1 can't use code from unreferenced assemblies, we need to store this Action somewhere.
        // And Thread is not a bad place for that. It's not the best place, as it won't work in a multithreaded
        // application, but it's just a sample.
        LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName);
        Thread.SetData(slot, _callback);

        // then we generate a new assembly with the same name and public key as the target assembly trusts
        // via its InternalsVisibleTo attribute
        var newTypeName = _targetType.Name + "Fake";
        var targetAssembly = Assembly.GetAssembly(_targetType);
        AssemblyName an = new AssemblyName();
        an.Name = GetFakeAssemblyName(targetAssembly);

        // copying the public key to the newly generated assembly
        var assemblyName = targetAssembly.GetName();
        an.SetPublicKey(assemblyName.GetPublicKey());
        an.SetPublicKeyToken(assemblyName.GetPublicKeyToken());
        AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave);
        ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true);

        // create an inheritor for the specified type
        TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType);

        // LambdaExpression.CompileToMethod can be used only with static methods, so we need to create another
        // method that will call our Inject method. We could do the same via ILGenerator, but expression trees
        // are easier to use.
        MethodInfo methodInfo = CreateMethodInfo(moduleBuilder);
        MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual);
        ILGenerator ilGenerator = methodBuilder.GetILGenerator();

        // call our static method that will call the inject method
        ilGenerator.EmitCall(OpCodes.Call, methodInfo, null);

        // in case we need to, put a call to the base method
        if (_callBaseMethod)
        {
            var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance);
            // place this on the stack
            ilGenerator.Emit(OpCodes.Ldarg_0);
            // call the base method
            ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]);
            // return
            ilGenerator.Emit(OpCodes.Ret);
        }

        // generate the type, create an instance and return it to the caller
        Type cheatType = typeBuilder.CreateType();
        object type = Activator.CreateInstance(cheatType);
        return (TFakeType)type;
    }

    /// <summary>
    /// Get the name of the assembly from the InternalsVisibleTo AssemblyName
    /// </summary>
    private static string GetFakeAssemblyName(Assembly assembly)
    {
        var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute;
        if (internalsVisibleAttr == null)
        {
            throw new InvalidOperationException("Assembly hasn't InternalVisibleTo attribute");
        }
        var ind = internalsVisibleAttr.AssemblyName.IndexOf(",");
        var name = internalsVisibleAttr.AssemblyName.Substring(0, ind);
        return name;
    }

    /// <summary>
    /// Generate such code:
    /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke();
    /// </summary>
    private LambdaExpression MakeStaticExpressionMethod()
    {
        var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public);
        var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public);
        var call = Expression.Call(allocateMethod, Expression.Constant(_slotName));
        var getCall = Expression.Call(getDataMethod, call);
        var convCall = Expression.Convert(getCall, typeof(Action));
        var invokExpr = Expression.Invoke(convCall);
        var lambda = Expression.Lambda(invokExpr);
        return lambda;
    }

    /// <summary>
    /// Generate a static class with one static function that will execute the Action from the Thread NamedDataSlot
    /// </summary>
    private MethodInfo CreateMethodInfo(ModuleBuilder moduleBuilder)
    {
        var methodName = "_StaticTestMethod_" + _slotName;
        var className = "_StaticClass_" + _slotName;
        TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class);
        MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public);
        LambdaExpression expression = MakeStaticExpressionMethod();
        expression.CompileToMethod(methodBuilder);
        var type = typeBuilder.CreateType();
        return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public);
    }
}

Remarks about the sample: as we need to execute code from another assembly that CorpLibrary1 has no access to, we need to store this delegate somewhere, and just for testing I stored it in a Thread NamedDataSlot. It won't work in multithreaded applications, but it's just a sample.

I know that we can use Reflection to read the private\internal members of any class, but with reflection we can't override them. This issue, however, allows anyone to override an internal class\method if the assembly has an InternalsVisibleTo attribute. I tested it on .Net 3.5\4 and it works on both of them. How is it possible to just copy the PublicKey, without the private key, and use it at runtime?

The whole sample can be found here - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo

UPDATE1: The test code in the Program and FakeBuilder classes has no access to the key.sn file, and that library isn't signed, so it has no public key at all. It just copies the key from CorpLibrary1 using Reflection.Emit.

    Read the article

  • Ruby on Rails: url_for :back leads to NoMethodError for back_url

    - by Platinum Azure
Hi all, I'm trying to use url_for(:back) to create a redirect leading back to a previous page upon a user's logging in. I've had it working successfully for when the user just goes to the login page on his or her own. However, when the user is redirected to the login page due to accessing a page requiring that the user be authenticated, the redirect sends the user back to the page before the one s/he had tried to access with insufficient permissions.

I'm trying to modify my login controller action to deal with the redirect properly. My plan is to have a query string parameter "redirect" that is used when a forced redirect occurs. In the controller, if that parameter exists, that URL is used; otherwise, url_for(:back) is used, or if that doesn't work (due to lack of HTTP_REFERER), the user is redirected to the site's home page. Here is the code snippet which is supposed to implement this logic:

if params[:redirect]
  @url = params[:redirect]
else
  @url = url_for :back
  @url ||= url_for :controller => "home", :action => "index"
end

The error I get is:

NoMethodError in UsersController#login
undefined method `back_url' for #

RAILS_ROOT: [obscured]

Application Trace | Framework Trace | Full Trace

vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `__send__'
vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `polymorphic_url'
vendor/rails/actionpack/lib/action_controller/base.rb:628:in `url_for'
app/controllers/users_controller.rb:16:in `login'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:76:in `process'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `synchronize'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `process'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `process_client'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `initialize'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `new'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `initialize'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `new'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:282:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `each'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/command.rb:212:in `run'
/var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281
vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `__send__'
vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `polymorphic_url'
vendor/rails/actionpack/lib/action_controller/base.rb:628:in `url_for'
vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `send'
vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `perform_action_without_filters'
vendor/rails/actionpack/lib/action_controller/filters.rb:617:in `call_filters'
vendor/rails/actionpack/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark'
vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
/usr/lib/ruby/1.8/benchmark.rb:293:in `measure'
vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
vendor/rails/actionpack/lib/action_controller/rescue.rb:136:in `perform_action_without_caching'
vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:13:in `perform_action'
vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache'
vendor/rails/activerecord/lib/active_record/query_cache.rb:8:in `cache'
vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:12:in `perform_action'
vendor/rails/actionpack/lib/action_controller/base.rb:524:in `send'
vendor/rails/actionpack/lib/action_controller/base.rb:524:in `process_without_filters'
vendor/rails/actionpack/lib/action_controller/filters.rb:606:in `process_without_session_management_support'
vendor/rails/actionpack/lib/action_controller/session_management.rb:134:in `process'
vendor/rails/actionpack/lib/action_controller/base.rb:392:in `process'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:184:in `handle_request'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:112:in `dispatch_unlocked'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:125:in `dispatch'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `synchronize'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `dispatch'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:134:in `dispatch_cgi'
vendor/rails/actionpack/lib/action_controller/dispatcher.rb:41:in `dispatch'
vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load_without_new_constant_marking'
vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load'
vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load'
vendor/rails/railties/lib/commands/servers/mongrel.rb:64
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
vendor/rails/railties/lib/commands/server.rb:49
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
script/server:3

Request Parameters: None

Session dump:
---
:user:
:csrf_id: 2927cca61bbbe97218362b5bcdb74c0f
flash: !map:ActionController::Flash::FlashHash {}

Response Headers: {"Content-Type"=>"", "cookie"=>[], "Cache-Control"=>"no-cache"}

Bear in mind that I had it working earlier -- url_for(:back) knew how to operate properly before I added this logic. Thanks in advance for any ideas!
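
A note on what the trace shows: url_for in this Rails 2.x controller does not special-case the :back symbol (redirect_to does), so :back falls through to polymorphic_url, which tries to send back_url - hence the NoMethodError. A sketch of one way around it, reading the referer directly (assumes the Rails 2.x request API; untested against this app):

if params[:redirect]
  @url = params[:redirect]
else
  # request.env["HTTP_REFERER"] is nil when absent, so ||= can supply the fallback
  @url = request.env["HTTP_REFERER"]
  @url ||= url_for :controller => "home", :action => "index"
end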

    Read the article

  • Win7 playback of dvr-ms files stutters

    - by Jim Lynn
I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from dvr-ms format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth.

As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: dvr-ms (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-ms file stutters. The last piece of data I have is that two other Windows 7 machines can play back dvr-ms files smoothly with no stuttering. One is a netbook, with less grunt than the media centre. So there must be something specific about my Media Center machine that's causing the problem.

Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem?

Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-ms looks identical to the other machines, and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for WMP but GraphStudio won't show it. It looks like WMP and WMC are using a different decoding path to GraphStudio.

Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed the problem. It might also explain why nobody else has had this problem. I doubt many people use Media Centre with a 14in portable TV.

    Read the article

  • free open-source linux screenshot & ocr tool

    - by Gryllida
I'm looking for a tool which would be able to capture a screen region, pass it to OCR and put the result into the clipboard.

"import ppm:- | gocr -i - | xclip -selection c" works, but gocr is unreliable: simple text on a webpage has errors. It is a clear font, but the OCR tool always misses "r" and replaces it with an underscore.

"import ppm:- | ocrad -i - | xclip -selection c" says "ocrad: maxval 255 in ppm "P6" file."

tesseract needs an image file and does not accept piped input.

xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch, but my data is confidential and better not put online.

Graphical interface solutions would be acceptable for this question, too.

There are a number of existing SuperUser questions about OCR. They fall into several categories.

Questions just about OCR without the "screenshot taking" part:
- Open Source OCR for linux
- Free OCR for Arabic text
- Looking for recommendations on OCR problem - tabular numeric data
- Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows?
- How can I preform OCR from the command line?
- OCR solution on linux machine from command line (duplicate)
- Free OCR software
- OCR for Sanskrit ( OR devanagari)
- Copy image and paste to OCR (windows)

File processing OCR instead of screenshot:
- Online OCR website for processing an entire pdf file at one time?
- Practical OCR solution for converting a large book to a digital format?
- How to extract text with OCR from a PDF on Linux?
- Batch-OCR many PDFs
- OCR Image based PDF
- Copy image and paste to OCR
- Extract OCR text from Evernote
- OCR in Word 2013
- Replace (OCR) garbled text in PDF?

Process files prior to running OCR:
- How can I make OCR recognize my documents' text better?
- Tesseract OCR recognition bilingual document. mistakes tolerance level setup
- OCR for low quality images
- How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots?
- OCR training. Training Tesseract-OCR for english language fonts

None of them answer this question.
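
One possible workaround for the tesseract limitation noted above is to write the capture to a temporary file first, for example:

import png:/tmp/ocr-shot.png && tesseract /tmp/ocr-shot.png stdout | xclip -selection c

(the "stdout" output name assumes tesseract 3.03 or later; older versions require an output base name and write the result to a .txt file instead).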

    Read the article

  • New Computer Build Questions

    - by MJ
    I'm in the process of gathering parts and specs for a new machine. I wear many hats, so the machine needs to do a lot. I need support for at least two monitors, if not three. I also play many online MMOs (WoW, Aion, Warhammer, etc.), along with some freelance programming projects. I already have a case which is very large, so it will fit anything. I have two other SATA HDs; they are more for storage and basic programs. I feel that the biggest improvement could come from a solid-state drive - true or not? I'm more of a software/programming guy, so ANY input at all on improving this system build would be appreciated. I have a few questions with this list. AMD or Intel? I don't know enough about either to choose what would best fit me. Thanks!

    EDIT: Thanks for the input everyone! Here are some answers: I do a lot of programming and gaming, so I do need things for both. The newer video card covers the gaming aspect, as well as allowing me to have many monitors (hopefully upgrading to dual 30" or more). I don't need any additional HDs at this time. I have a SATA 160GB and 120GB from my previous computer, and a NAS system with over 2TB of storage on the home network. I just want a fast HD for the OS, programs and games. On the memory: I have used G.SKILL before in two system builds, and it's been excellent for me - very stable.

    EDIT 2: Made some additional changes. Lowered the power supply down to 750W, which saves me more money. Also changed the SSD to two WD 640GB HDs. Thinking of a CPU upgrade to the AMD Phenom II X4 965 Black Edition Deneb 3.4GHz.

    System Specs - Budget: $1500
    - CPU: AMD Phenom II X4 955 Black Edition Deneb 3.2GHz
    - MB: GIGABYTE GA-MA790GPT-UD3H AM3 AMD 790GX HDMI ATX
    - Memory: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666)
    - Video: DIAMOND 5870PE51G Radeon HD 5870 (Cypress XT) 1GB 256-bit GDDR5
    - Power Supply: XCLIO GREATPOWER 1000W ATX12V SLI Ready CrossFire Ready
    - HD: Intel X25-M Mainstream SSDSA2MH080G2C1 2.5" 80GB SATA II MLC

    Changes:
    - Power Supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V
    - HD: 2x Western Digital Caviar Blue WD6400AAKS 640GB
    - CPU: AMD Phenom II X4 965 Black Edition Deneb 3.4GHz

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose anything. However, I'm always accessing the files from other computers on the home network using an SMB share, and this incurs a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example.

    I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up file access. The client should preferably not keep a copy of all data on the server, as the server holds a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer if they are handled properly (with a notification to the user).

    I've come across the following options so far:
    - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
    - A distributed file system such as OpenAFS. I'm not sure this will speed up file access, and it is probably overkill for what I need. Maybe NFS or even Samba offer these possibilities.
    - Windows' Offline Files, which I read a bit about, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money; a free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
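    One avenue worth noting for the Linux-client case (it does nothing for Windows clients): NFS can be combined with the kernel's FS-Cache facility to keep a persistent local cache of accessed files. A rough sketch, assuming a Debian/Ubuntu client and a hypothetical export named server:/srv/documents:

        # Install and enable the userspace daemon that backs FS-Cache.
        sudo apt-get install cachefilesd
        sudo sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
        sudo service cachefilesd start
        # The 'fsc' mount option turns on caching for this share.
        sudo mount -t nfs -o fsc server:/srv/documents /mnt/documents

    Note that FS-Cache only speeds up repeated reads; it is not the background write-sync solution described above, so at best it covers part of the wish list.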

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (I couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always just temporarily move a disc image there.

    So, figuring that I have games installed like Borderlands and Mass Effect, as well as large files such as Linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to set the install directory of any program installation to the alternative install directory, which I can't seem to ever get to work right with Steam.

    With that in mind, is there a way, not too drastic, to let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it. I have also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported as elsewhere. An example a friend mentioned was that if you have a symlink in Windows on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without these issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
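    For what it's worth, a sketch of the usual junction-based approach - the folder names here are made up for illustration, Steam in particular has its own quirks, and this needs an elevated command prompt:

        :: Move an install folder to the big drive, then leave an NTFS junction
        :: at the old path so programs still find their files there.
        robocopy "C:\Program Files (x86)\SomeGame" "D:\Games\SomeGame" /E /MOVE
        :: robocopy can leave the (now empty) source folder behind; clear it.
        if exist "C:\Program Files (x86)\SomeGame" rmdir "C:\Program Files (x86)\SomeGame"
        mklink /J "C:\Program Files (x86)\SomeGame" "D:\Games\SomeGame"

    Junctions resolve transparently on the local machine; whether Explorer's size calculations get confused in the way your friend described is worth testing on a small folder before committing to this layout.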

    Read the article

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update, which I can't seem to fix. The hotfixes don't work, nor does the Windows Update readiness tool or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything; it's so corrupt that only a complete replacement will fix it. According to various sources (including the MSKB), one can perform a repair by running an in-place upgrade.

    I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

    1. Install now
    2. Go online to get latest updates (I've also tried not getting updates)
    3. Wait for updates to be downloaded
    4. Select Windows 7 Ultimate (x64 architecture) and click next
    5. Accept the T&Cs, click next
    6. Click Upgrade

    At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

    "The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue. You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings. 32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc."

    It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely a combined x86/x64 Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer, but can't spot anything related in there. Any idea how I can get this working?

    P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:
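    Not an answer to the 32-bit/64-bit confusion itself, but for anyone retracing the corruption diagnostics above, the usual checks and their log locations look something like this (elevated command prompt; the readiness tool writes its findings to C:\Windows\Logs\CBS\CheckSUR.log):

        :: Scan protected system files, then pull the relevant lines from the log.
        sfc /scannow
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"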

    Read the article

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation. I own a Lenovo Thinkpad Edge E320 with 2nd-gen integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4 and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096x2160; however, according to c't (German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output - dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution.

    Now what about other laptops? According to the c't article above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards apparently only offer resolutions up to 1920x1200 over HDMI out of the box, but it's possible, when using Linux at least, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OS X? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI.

    Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible).

    The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts analog to digital signals. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved?

    Please correct me if I'm wrong on any points. In summary, what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that supports dual-link DVI-D only, and what tools and adapters can be used to lower the bar?
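    As an illustration of the Linux driver-tricking mentioned above, the usual xrandr incantation looks roughly like this - a sketch only: the output name HDMI1 is a guess (check xrandr -q first), the modeline comes from cvt 2560 1440 60, and whether the hardware actually accepts the 312.25 MHz pixel clock is exactly the open question:

        # Define a 2560x1440@60Hz CVT mode and assign it to the HDMI output.
        xrandr --newmode "2560x1440_60" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
        xrandr --addmode HDMI1 "2560x1440_60"
        xrandr --output HDMI1 --mode "2560x1440_60"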

    Read the article

  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and whether it can be done. I currently have the following "equipment": a JVC VCR (quite old) which has built-in surround sound - that is, it has several speaker outputs, which I believe are 5.1, connected to speakers in every corner of the room - a computer with an SPDIF optical output, and a new flat-screen TV (with built-in HDMI). I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video, always through all the speakers (5.1) and with the maximum sound quality. Currently, the computer plays sound only through the front speaker (I connect one output to the onboard PC audio input) and the quality is really bad. As a side note, the computer video runs over S-Video (old school), and the picture quality, as you would imagine, is really bad on the new big LCD screen.

    My main goals are:
    - to upgrade the picture with a new video card which supports HDMI (my TV has HDMI);
    - to buy an SPDIF optical cable, connect one end to the VCR's SPDIF input and the other end to the PC output.

    This is what I've researched so far in theory, and I came out with several questions:
    - In this case, with the SPDIF cable connected and all the configuration done in Windows to allow 5.1, will every piece of content I play be "converted", or played through all of my speakers? (I read this forum post.) I already know that for this setup to play from all the speakers, the content/audio source has to be 5.1; my question is whether there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said conversion there).
    - I already know that HDMI cables carry digital sound. Is there a way I can use only the HDMI cord to the TV and still get sound through the VCR? (I'm not too sure about this; I would have to disable the TV's speakers and use the VCR surround as default, but I have no clue whether this can be done or not.)

    Update: The ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers, no matter what content I play? (Do I require a newer sound card, like a Creative Sound Blaster with said technology?) Thanks!

    Read the article

< Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >