Search Results

Search found 86615 results on 3465 pages for 'page viewer web part'.

Page 135/3465 | < Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I've been playing around with the new Async CTP preview available for download from Microsoft. It's amazing how language trends are influencing the evolution of Microsoft's development platform. Much more effort is being put into the language level today than in previous versions of .NET. In this post series I'll review some major features contained in this release: asynchronous functions, TPL Dataflow, and the Task-based Asynchronous Pattern.

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. These functions must return void or Task/Task<> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

    async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions should include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.

    await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the async keyword (that it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how it actually works: the flow control of the method is reconstructed after any awaited asynchronous operation completes. This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the regular existing async patterns and we've defined the method very much like a synchronous one.

    Now, compare the following snippet, a traditional event-based version, with the previous async/await one:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    This implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application; this is just an example).

    How does it work? Using a state workflow (a jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler then creates the necessary workflow to execute operations as they happen in time.
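    To make the "reconstruction of flow control" a bit more concrete, here is a rough, hand-written approximation of what a single await amounts to, expressed with Task.ContinueWith and the article's own Service1Client proxy. This is a minimal sketch, not the compiler's actual output (the generated state machine also handles the loop, locals, exceptions and the SynchronizationContext):

        // Hand-written sketch only: roughly what one await turns into.
        void ShowDateTimeOnce()
        {
            var client = new ServiceReference1.Service1Client();

            // Everything that followed the await becomes a continuation,
            // scheduled to run when the proxy's task completes.
            client.GetDateTimeTaskAsync().ContinueWith(t =>
            {
                Console.WriteLine("Current DateTime is: {0}", t.Result);
            });
        }

    The generated state machine does this kind of transformation for every await, which is why the loop, the local variables and any exception handling in ShowDateTimeAsync survive across asynchronous boundaries.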

    Read the article

  • The gestures of Windows 8 (Consumer preview): part 2, More about Search

    - by Laurent Bugnion
    This is part 2 of a multipart blog post about the gestures and shortcuts in Windows 8 Consumer Preview. Part 1 can be found here!

    More about the Search charm

    In the first installment of this series, we talked about the charms and mentioned a few gestures to display the Search charm. Search is a very central and powerful feature in Windows 8, and allows you to search in Apps, Settings, Files and within Metro applications that support the Search contract. There are a few cool features around Search, and especially the applications associated with it. I already mentioned the keyboard shortcuts you can use:

    Win-C shows the Charms bar (same as swiping from the right bezel towards the center of the screen).
    Win-Q opens the Search flyout with Apps preselected.
    Win-W opens the Search flyout with Settings preselected.
    Win-F opens the Search flyout with Files preselected.

    Searching in Metro apps

    In addition to these three search domains, you can also search a Metro app, as long as it supports the Search contract (check this Build video to learn more about the Search contract). These apps show up in the Search flyout as shown here: notice the list of apps below the Files button? That's what we are talking about. First of all, the list order changes when you search in some applications. For instance, in the image above, I had used the Store with the Search charm. This is why the Store shows up as the first app. I am not 100% sure what algorithm is used here (sorting according to number of searches is my guess), but try it out and try to figure it out. Applications that have never been searched are sorted alphabetically. Does it mean we will see cool app names like ___AAA_MyCoolApp? I certainly hope not!!

    Pinning

    You can also pin often-used apps to the Search flyout. To pin an app with the mouse, right click on it in the Search flyout and select Pin from the context menu. With the keyboard, use the arrow keys to go down to the selected app, and then open the context menu. With the finger, simply tap and hold until you see a semi-transparent rectangle indicating that the context menu will be shown, then release. The context menu opens up and you can select Pin. (Screenshots: the Pin context menu; pinned apps.)

    Unpinning, Hiding

    Using the same technique as for pinning above, you can also unpin a pinned application. Finally, you can also choose to hide an app from the Search flyout altogether. This is a convenient way to clean up and make it easy to find stuff. Note: at this point, I am not sure how to re-add a hidden app to the Search flyout. If anyone knows, please mention it in the comments, thanks!

    Reordering

    You can also reorder pinned apps. To do this, with the finger, tap, hold and pull the app to the side, then pull it vertically to reorder it. You can also reorder with the mouse, simply by clicking on an app and pulling it vertically to the place you want to put it. I don't think there is a way to do that with the keyboard though.

    That's it for now

    More gestures will follow in a next installment! Have fun with Windows 8

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. One of the key capabilities, among many, that differentiates our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach is very different from the classic simple keyword search found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore.

    The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key search features include type-aheads, automatic alphanumeric spell corrections, positional search, Booleans, wildcarding, natural language and category search, and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery. Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple "search boxes" in the following ways:

    The Endeca Server supports exploratory search. Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn't assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary "black box" formula. But how can a business user, searching, act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular e-commerce sites, where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery. In fact, the Endeca Server powers many of the most popular e-commerce sites in the world.

    The Endeca Server supports cascading relevance. Traditional approaches to search reduce many relevance weights to a single score. This means that if a result with a good title match gets a similar score to one with an exact phrase match, they'll appear next to each other in a list. But a user can't deduce from their score why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach. The Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get the explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance.

    Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as an understanding of faceted search mechanisms that include queries and filters.

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    Open the project from the last lesson. Double click on NotesUI.fxml; this will open the JavaFX Scene Builder.

    On the left side you have an area called Hierarchy; from there press Del (or Shift+Backspace on Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore. Resize the AnchorPane to have enough room for our design, e.g. 820x550 px. From the top left, pick the container called Accordion and drag it over the AnchorPane design. Then choose a List View from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names. I'll leave you the pleasure of doing the rest in order to get the following result. Here is the list of objects used.

    Save it and then return to NetBeans. Run the application and it should run without any issue. If you click on the buttons they are all functional, but nothing happens, as we didn't link them to any action. We'll see this in the next episode.

    Now, let's play a little bit with the application and try to resize it… Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's for sure something that your users won't like, and you as a programmer have to care about this.

    From NetBeans double click NotesUI.fxml to return to the JavaFX Scene Builder. Select the TextField from the bottom left of Notes, the one where I put the text Category, and then on the right side of JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout and then click on the dotted lines on the left and bottom of the square, like you see in the image below. This will make the text field always keep the same distance from the left and bottom, no matter the size of the form. Save and run the application. Note that whenever the form changes its height, the Category TextField keeps the same distance from the bottom.

    Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the buttons stay in place.

    The last step is to make sure our application does not get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane and from Inspector go to Layout and note down the Width and Height. Go back to NetBeans, open the file Main.java and add the following code just after stage.setScene(scene); (around line 26):

        stage.setMinWidth(820);
        stage.setMinHeight(550);

    Use your own width and height. This will prevent the user from reducing the width or height of your application to a value that would hide parts of your layout. So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application… Note: in case you missed something, here are the source files of the project up to this point.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let's continue with TPL Dataflow (the series covers asynchronous functions, TPL Dataflow, and the Task-based Asynchronous Pattern).

    Part II: TPL Dataflow

    Definition (quoting the Async CTP documentation): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#."

    This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications." Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? Etc.

    Dataflow Blocks

    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :)

    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item).

    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.

    3. WriteOnceBlock<T>: it stores only one value and, once it has been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.

    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).

    5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.

    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.

    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).

    8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs can be of any type you want (T1, T2, etc.).

    9. BatchedJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).

    Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft's Async CTP documentation.
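    While the examples are promised for the next installment, here is a minimal sketch (my own, not from the article) of how two of these blocks compose into a small pipeline. It assumes the System.Threading.Tasks.Dataflow namespace that ships with the CTP (later it moved to a NuGet package), and it uses only Post, LinkTo, Complete and Completion:

        using System;
        using System.Threading.Tasks.Dataflow;

        class DataflowSketch
        {
            static void Main()
            {
                // Transform each posted integer, then act on the result.
                var square = new TransformBlock<int, int>(n => n * n);
                var print  = new ActionBlock<int>(n => Console.WriteLine("Got {0}", n));

                // Wire the pipeline: the transform's output feeds the action block.
                square.LinkTo(print);

                for (int i = 1; i <= 5; i++)
                    square.Post(i);          // Post is the synchronous way to feed a block

                square.Complete();           // no more input for the transform
                square.Completion.Wait();    // wait until the transform has drained
                print.Complete();            // then complete the action block
                print.Completion.Wait();     // and wait for the last prints
            }
        }

    The manual Complete/Completion chaining at the end is the CTP-friendly way to propagate completion down the pipeline; newer releases offer link options that do this automatically.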

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background

    I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies who are sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels available on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where Javascript is needed for functionality built into the page, but is unable to initialize until a 404 is returned on the external tracking pixel (sometimes up to 30 seconds later). I have spent some time attempting to research how other firms deal with this sort of instability with third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites.

    Question

    Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - The daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly or yearly.

    3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code - both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • web design PSD to html -> more direct ways?

    - by Assembler
    At work I see one colleague designing a site in Photoshop/Fireworks, and I see another taking this data, slicing it up and using Dreamweaver to rebuild the same thing from scratch. It seems like too much mucking around! I know that Photoshop can output tables-based HTML, and Fireworks will create divs with absolute positioning; neither appears to be very helpful. Admittedly, I haven't tried much of (DW/FW) (CS4/CS3) since becoming a programmer, so I don't know if new versions are addressing this workflow issue, but are we still double handling things? Can we attach some sort of layout metadata (this is a rollover button, this will be a SWF, this will be text, this logo will hide "xyz" <h1> text, etc.) to slices to aid in layout generation? Are there some secret tools which assist in this conversion process? Or are we still restricted to doing things by hand? The frustration continues when said hand-built page needs to be reworked again to fit Smarty Templates/Wordpress/generic CMS. I acknowledge that designers need to be free of systems to be able to do whatever, but most conventional sites have: a header with navigation, a sidebar with more links, the main content part, maybe another sidebar, and a footer. Given the similarity of a lot of components, shouldn't there be a more systematic approach to going from sliced designs to functional HTML? Or am I over-simplifying things?

    -edit- Mmmmm.... I suppose I will accept an answer, but they weren't really what I was looking for. It just seems like designing the DOM is a bit of a holy grail ("It's only a model!"), and maybe with all the "groovy" things you can do with HTML and Javascript it would be mighty hard work, but with a set of constraints (that 960 stuff looks interesting), some well designed reset style sheets and a bit of... fairy dust? we should be able to improve the workflow. Photoshop's tables by themselves are pretty much useless, I agree, but surely we can take this data, and then select a group of cells and say "right, this is a text div, overflow:auto" or "these cells are an image block, style it with the same height/width as the selected area". Admittedly here at work there are other elephants in the room that need to make their formal introductions to management, but some parts of the design-to-page workflow seem... uneducated at best.

    Read the article

  • Deselect dates in Web Calendar c#

    - by yomismo
    Hello, I'm trying to select and de-select dates on a C# Web Calendar control. The problem I have is that I can select or deselect dates except when there is only a single date selected. Clicking on it does not trigger the SelectionChanged event, so I need to do something in the DayRender event, but I'm not sure what or how. Any ideas? TIA

    Code so far:

        public static List<DateTime> list = new List<DateTime>();

        protected void Calendar1_DayRender(object sender, DayRenderEventArgs e)
        {
            if (e.Day.IsSelected == true)
            {
                list.Add(e.Day.Date);
            }
            Session["SelectedDates"] = list;
        }

        protected void Calendar1_SelectionChanged(object sender, EventArgs e)
        {
            DateTime selection = Calendar1.SelectedDate;
            if (Session["SelectedDates"] != null)
            {
                List<DateTime> newList = (List<DateTime>)Session["SelectedDates"];
                foreach (DateTime dt in newList)
                {
                    Calendar1.SelectedDates.Add(dt);
                }
                if (searchdate(selection, newList))
                {
                    Calendar1.SelectedDates.Remove(selection);
                }
                list.Clear();
            }
        }

        public bool searchdate(DateTime date, List<DateTime> dates)
        {
            var query = from o in dates
                        where o.Date == date
                        select o;
            if (query.ToList().Count == 0)
            {
                return false;
            }
            else
            {
                return true;
            }
        }

    Read the article

  • Failed sending bytes array to JAX-WS web service on Axis

    - by user304309
    Hi, I have made a small example to show my problem. Here is my web service:

        package service;

        import javax.jws.WebMethod;
        import javax.jws.WebService;

        @WebService
        public class BytesService {

            @WebMethod
            public String redirectString(String string) {
                return string + " - is what you sended";
            }

            @WebMethod
            public byte[] redirectBytes(byte[] bytes) {
                System.out.println("### redirectBytes");
                System.out.println("### bytes lenght:" + bytes.length);
                System.out.println("### message" + new String(bytes));
                return bytes;
            }

            @WebMethod
            public byte[] genBytes() {
                byte[] bytes = "Hello".getBytes();
                return bytes;
            }
        }

    I pack it in a jar file and store it in the "axis2-1.5.1/repository/servicejars" folder. Then I generate a client proxy using the Eclipse for EE default utilities, and use it in my code in the following way:

        BytesService service = new BytesServiceProxy();

        System.out.println("Redirect string");
        System.out.println(service.redirectString("Hello"));

        System.out.println("Redirect bytes");
        byte[] param = { (byte)21, (byte)22, (byte)23 };
        System.out.println(param.length);
        param = service.redirectBytes(param);
        System.out.println(param.length);

        System.out.println("Gen bytes");
        param = service.genBytes();
        System.out.println(param.length);

    And here is what my client prints:

        Redirect string
        Hello - is what you sended
        Redirect bytes
        3
        0
        Gen bytes
        5

    And on the server I have:

        ### redirectBytes
        ### bytes lenght:0
        ### message

    So a byte array can be transferred from the service normally, but it is not accepted when sent from the client. And it works fine with strings. Now I use a Base64Encoder, but I dislike this solution.

    Read the article

  • Parsing with BeautifulSoup, error message TypeError: coercing to Unicode: need string or buffer, NoneType found

    - by Samsun Knight
    so I'm trying to scrape an Amazon page for data, and I'm getting an error when I try to parse for where the seller is located. Here's my code:

        #getting the html
        request = urllib2.Request('http://www.amazon.com/gp/offer-listing/0393934241/')
        opener = urllib2.build_opener()
        #hiding that I'm a webscraper
        request.add_header('User-Agent', 'Mozilla/5 (Solaris 10) Gecko')
        #opening it up, putting into soup form
        html = opener.open(request).read()
        soup = BeautifulSoup(html, "html5lib")
        #parsing for the seller info
        sellers = soup.findAll('div', {'class' : 'a-row a-spacing-medium olpOffer'})
        for eachseller in sellers:
            #parsing for price
            price = eachseller.find('span', {'class' : 'a-size-large a-color-price olpOfferPrice a-text-bold'})
            #parsing for shipping costs
            shippingprice = eachseller.find('span' , {'class' : 'olpShippingPrice'})
            #parsing for condition
            condition = eachseller.find('span', {'class' : 'a-size-medium'})
            #parsing for seller name
            sellername = eachseller.find('b')
            #parsing for seller location
            location = eachseller.find('div', {'class' : 'olpAvailability'})
            #printing it all out
            print "price, " + price.string + ", shipping price, " + shippingprice.string + ", condition," + condition.string + ", seller name, " + sellername.string + ", location, " + location.string

    I get the error message, pertaining to the 'print' command at the end, "TypeError: coercing to Unicode: need string or buffer, NoneType found". I know that it's coming from this line - location = eachseller.find('div', {'class' : 'olpAvailability'}) - because the code works fine without that line, and I know that I'm getting NoneType because the line isn't finding anything. Here's the html from the section I'm looking to parse:

        <*div class="olpAvailability">
        In Stock. Ships from WI, United States.
        <*br/><*a href="/gp/aag/details/ref=olp_merch_ship_9/175-0430757-3801038?ie=UTF8&amp;asin=0393934241&amp;seller=A1W2IX7T37FAMZ&amp;sshmPath=shipping-rates#aag_shipping">Domestic shipping rates</a> and <*a href="/gp/aag/details/ref=olp_merch_return_9/175-0430757-3801038?ie=UTF8&amp;asin=0393934241&amp;seller=A1W2IX7T37FAMZ&amp;sshmPath=returns#aag_returns">return policy</a>.
        <*/div>

    (but without the stars - just making sure the HTML doesn't compile out of code form)

    I don't see what's the problem with the 'location' line of code, or why it's not pulling the data I want. Help?

    Read the article

  • Google appengine authentication on iPhone web app on the home screen

    - by Rakesh Pai
    I'm using Google App Engine for developing a web application that is meant to be used in both the browser and on the iPhone. I have purchased a domain name for this application so that I have a pretty URL. I've used the Users API for authentication. This works just fine in desktop browsers and iPhone Safari. The user can add the application to the home screen (by tapping the "+" in the bottom toolbar). However, when that's done, it seems like the cookies set by Google are not in effect within this "application", and the user is effectively logged out. To make matters worse, when the user clicks on the login link (as generated by GAE), the app closes and opens Safari to complete the login. Since the session is apparently not shared between the two, the login process is futile, and the "home-screen" version of the app continues to be logged out. It seems that the cookies are not shared between a "home-screen" app and Safari. It also seems that the "home-screen" app will only work within its own domain, and any redirect to any other domain will open Safari. Any idea how I can go about fixing this?

    Read the article

  • Request for the permission of type 'System.Web.AspNetHostingPermission' failed when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is set up as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it, I get the following compiler error:

        Request for the permission of type 'System.Web.AspNetHostingPermission' failed

    To resolve this issue on Windows Vista machines, I would change the machine's .NET Security Configuration trust level to full for the local intranet (fix outlined here). I believe this configuration utility relied upon mscorcfg.msc, which from some cursory research appears to be a part of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article, running the command below to no avail.

        Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    Presently, I have the following .NET and ASP.NET components installed on my machine:

        Microsoft .NET Compact Framework 2.0 SP2
        Microsoft .NET Compact Framework 3.5
        Microsoft .NET Framework 4 Client Profile
        Microsoft .NET Framework 4 Extended
        Microsoft .NET Framework 4 Multi-Targeting Pack
        Microsoft ASP.NET MVC 1.0
        Microsoft ASP.NET MVC 2
        Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools
        Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools

    Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

    Read the article

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    Building a solution containing a web application project using MSBuild from PowerShell like this:

        msbuild "/p:OutDir=$build_dir\" $solution_file

    works fine for me on 32-bit, but on a 64-bit machine I am running into this error:

        error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

    I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets needed by MSBuild is in the Program Files (x86) dir, not the Program Files dir, but MSBuild doesn't recognise this and so looks in the wrong place. The two solutions are not ideal:

    Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution - every dev will have to do this manually.

    Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have to get everyone on 64-bit to manually edit csproj files on every new project. e.g.

        <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" />

    Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that. Any solutions?

    Read the article

  • How to check for server side update in iPhone application Web service request

    - by Nirmal
    Hello All.... I have a kind of listing application for the iPhone which always calls a web service on my PHP server, fetches the data and displays it on the iPhone screen. Now, the thing to consider in this scenario is that my iPhone application requests the server and fetches the data every time. But now my requirement is to replace that set of actions with the following:

    Every time my application is launched on the iPhone, it should check for new data at the server.

    And only if the server replies "true" will my iPhone application make a request to fetch the data.

    In the case of "false", my iPhone application will display the data which is already cached in local phone memory.

    Now, to implement this scenario at the server side (which has PHP and MySQL), I am planning the following solution:

        Table: tblNewerData

        id    newDataFlag
        ==    ===========
        1     true

        Trigger: tgrUpdateNewData

    The above trigger will update the tblNewerData.newDataFlag field on insert into my main table. Every time, my iPhone app will request the tblNewerData.newDataFlag field, and only if it finds true will it create a new data request; if it finds false, the cached version of the data will be displayed.

    So, I want to know: is this the correct way to do it? Or is there any other smart option available? Thanks in advance.

    Read the article

  • sproutcore vs javascriptMVC for web app development

    - by swami
    Hi, I want to use a javascript framework with MVC for a complex web application (which will be one of a set of related apps and pages) for an intranet in a digital archives. I have been looking at SproutCore and JavascriptMVC. I want to choose one framework and stick with it. Does anybody know what the distinguishing features are when comparing these two? I want something that is simple, straightforward, that I can customize/hack easily, and that doesn't get in my way too much, but that at the same time gives me a basis for keeping my code nicely organized and event-driven. I also plan on using jquery substantially. I know sproutcore is backed by Apple, and looks like it is getting more popular by the day, and it has a nice green website :), whereas JavascriptMVC looks less professional, with less of a following and less momentum behind it. I've done the tutorials for both and I was impressed by SproutCore more (in the JMVC tutorial you don't really do anything substantial) - but somewhere in the back of my mind I feel that JMVC might just be better because it doesn't try and do too much - it just gives you MVC functionality based on a couple of jquery plugins, and you can use jquery for everything else, so it's flexible. Whereas SproutCore seems to have more of its own API etc... which is also nice in a way... but then you're kind of stuck within that.... hmmm I'm confused :). Any thoughts would be much appreciated.

    Read the article

  • Data view web part throwing error

    - by Ashutosh Singh
    Hi, I'm using an XSLT-based data view web part. The steps I have taken to create the data view web part are:

    1. Added a list view web part on the page.
    2. Modified the toolbar property to show the full toolbar.
    3. Opened the web page containing the above list view web part in SharePoint Designer and converted it to an XSLT-based web part (to make further changes in the UI).
    4. Saved the page and previewed it in the browser.

    In the browser the web part was throwing the error below, while I was able to see it properly in Designer without any error. The error message shown in the web part was:

    ** Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Windows SharePoint Services-compatible HTML editor such as Microsoft Office SharePoint Designer. If the problem persists, contact your Web server administrator. **

    The error message provided in the SharePoint log file was:

        05/12/2010 17:56:29.54 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()
        05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88wy Medium SPDataSourceView.ExecuteSelect() - selectArguments: IsEmpty=True, MaximumRows=0, RetrieveTotalRowCount=False, SortExpression=, StartRowIndex=0, TotalRowCount=-1
        05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88x2 Medium SPDataSourceView.ExecuteSelect() - formattedQuery = 1
        05/12/2010 17:56:29.64 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    Read the article

  • Unable to call RESTful web services methods

    - by Alessandro
    Hello, I'm trying to dive into the RESTful web services world and have started with the following template:

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class Test
        {
            // TODO: Implement the collection resource that will contain the SampleItem instances
            [WebGet(UriTemplate = ""), OperationContract]
            public List<SampleItem> GetCollection()
            {
                // TODO: Replace the current implementation to return a collection of SampleItem instances
                return new List<SampleItem>() { new SampleItem() { Id = 1, StringValue = "Hello" } };
            }

            [WebInvoke(UriTemplate = "", Method = "POST"), OperationContract]
            public SampleItem Create(SampleItem instance)
            {
                // TODO: Add the new instance of SampleItem to the collection
                throw new NotImplementedException();
            }

            [WebGet(UriTemplate = "{id}"), OperationContract]
            public SampleItem Get(string id)
            {
                // TODO: Return the instance of SampleItem with the given id
                throw new NotImplementedException();
            }

            [WebInvoke(UriTemplate = "{id}", Method = "PUT"), OperationContract]
            public SampleItem Update(string id, SampleItem instance)
            {
                return new SampleItem { Id = 99, StringValue = "Done" };
            }

            [WebInvoke(UriTemplate = "{id}", Method = "DELETE"), OperationContract]
            public void Delete(string id)
            {
                // TODO: Remove the instance of SampleItem with the given id from the collection
                throw new NotImplementedException();
            }
        }

    I am able to perform the GET operation, but I am unable to perform PUT, POST or DELETE requests. Can anyone explain how to perform these operations and how to create the correct URLs?

    Best regards
    Alessandro
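    One way to exercise the non-GET operations is to issue them by hand with HttpWebRequest. The following is a minimal client-side sketch, not part of the original question: it assumes the service is hosted at http://localhost/Test.svc with webHttpBinding and the <webHttp/> behavior, and that SampleItem serializes to the XML shown (the data contract namespace "YourNamespace" is a placeholder you would replace with your own):

        // Hedged sketch: hypothetical host URL and data contract namespace.
        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class RestClientSketch
        {
            static void Main()
            {
                string baseUrl = "http://localhost/Test.svc";

                // DELETE: the "{id}" UriTemplate means the id is simply part of the path.
                var delete = (HttpWebRequest)WebRequest.Create(baseUrl + "/1");
                delete.Method = "DELETE";
                using (var response = (HttpWebResponse)delete.GetResponse())
                    Console.WriteLine("DELETE returned {0}", response.StatusCode);

                // PUT: same URL shape, plus an XML body describing the SampleItem.
                string body =
                    "<SampleItem xmlns=\"http://schemas.datacontract.org/2004/07/YourNamespace\">" +
                    "<Id>1</Id><StringValue>Hello</StringValue></SampleItem>";

                var put = (HttpWebRequest)WebRequest.Create(baseUrl + "/1");
                put.Method = "PUT";
                put.ContentType = "application/xml";
                byte[] bytes = Encoding.UTF8.GetBytes(body);
                put.ContentLength = bytes.Length;
                using (Stream s = put.GetRequestStream())
                    s.Write(bytes, 0, bytes.Length);
                using (var response = (HttpWebResponse)put.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine("PUT returned: {0}", reader.ReadToEnd());
            }
        }

    The key points are that the id travels in the URL path (…/Test.svc/1) and the verb is set via the Method property; note that in the template above Create and Delete still throw NotImplementedException, so expect errors from those until they are implemented. Tools like Fiddler or curl work just as well for trying these calls.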

    Read the article

  • web service - client classes

    - by Noona
    The web service that I implemented is up and running. When I try to run the client I get the following error with regard to the classes that were generated using wsimport:

        Caused by: java.security.PrivilegedActionException: com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 4 counts of IllegalAnnotationExceptions
        Two classes have the same XML type name "{http://server.agency.hw2/}userJoined". Use @XmlType.name and @XmlType.namespace to assign different names to them.
        this problem is related to the following location:
            at hw2.chat.backend.main.generatedfromserver.UserJoined
            at public javax.xml.bind.JAXBElement hw2.chat.backend.main.generatedfromserver.ObjectFactory.createUserJoined(hw2.chat.backend.main.generatedfromserver.UserJoined)
            at hw2.chat.backend.main.generatedfromserver.ObjectFactory
        this problem is related to the following location:
            at ChatCompany.BackendChatServer.hw2.chat.backend.main.generatedfromserver.UserJoined
        Two classes have the same XML type name "{http://server.agency.hw2/}userJoinedResponse". Use @XmlType.name and @XmlType.namespace to assign different names to them.
        this problem is related to the following location:
            at hw2.chat.backend.main.generatedfromserver.UserJoinedResponse
            at public javax.xml.bind.JAXBElement hw2.chat.backend.main.generatedfromserver.ObjectFactory.createUserJoinedResponse(hw2.chat.backend.main.generatedfromserver.UserJoinedResponse)

    But I can't figure out what exactly is meant by the error. I am assuming I need to change something in the annotations in this class, as pointed out by the compiler:

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "userJoinedResponse")
        public class UserJoinedResponse {
        }

    Could someone please point out why there's a name collision and what annotations I need to change? Thanks

    Read the article

  • emacs frustration with web development

    - by Tony Cruise
    I really liked the flexibility of emacs, but it is really annoying to make it work. I want to use it for web development: html, css, javascript, php. I first tried emacs-starter-kit. It didn't include nXhtml. Also, the C-g key binding does not work (they call it a starter kit, but a basic key command does not work). I think it is mapped for git control. That's a frustration for a beginner. Then I replaced emacs-starter-kit with nXhtml. At least C-g is working. But code completion sucks, and M-tab does not work. I tried code completion from the nXhtml menu with no success. Also, nXhtml mode didn't colorize my file when css is mixed with html. Isn't it recommended for mixed html, css, php files? So why doesn't it work? Why aren't the Emacs folks aware of convention over configuration? Damn! Ship something that works! Please help me before I go crazy. I use Ubuntu 10.04 and emacs-snapshot-gtk 23.1.50-1. Please guide me step by step with your working dotfile URL. Even if I accept that I am a dummy, it is really annoying and frustrating to use emacs.

    Read the article

  • Single Sign On for WebServices, WCS, WFS, WMS (Geoserver), etc.

    - by lajuette
    I'm trying to implement Single Sign On (SSO) for a web application. Maybe you can help me find a proper solution, give me a direction, or tell me that solutions already exist.

    The scenario: a GeoExt (ExtJS for geodata/map based apps) webapp (JavaScript only) has a login form to let the user authenticate himself. The authentication is not implemented yet, so I'm flexible with respect to that. After authenticating to the main webapp, the user indirectly accesses multiple 3rd-party services like web services, GeoServer WFS, Google Maps, etc. These services might require additional authentication like credentials or keys. We are (so far) not talking about additional login screens, just some kind of "machine to machine" authentication.

    The main problem: I'm unable to modify the 3rd-party services (e.g. Google) to add an SSO mechanism. I'd like to have a solution that allows the user to log in once to have access to all the services required. My first idea was a repository that stores all the required credentials or keys. The user logs in and can retrieve all the required information to access the other services. Does anybody know of existing implementations, papers, or maybe existing services of this kind?

    Other requirements: the communication between the JS application and the repository must be secure. The credentials must be stored in a secure manner. But the JS app must be able to use them to access the services (no chance to store a decryption key in a JS app securely, eh? *g).

    Read the article

  • How to make Class for DataObjectAttribute visible in ObjectDataSource in Web Application

    - by nCdy
    Here is the class code:

        [DataObjectAttribute]
        public class Report
        {
            public this() {}

            [DataObjectMethodAttribute(DataObjectMethodType.Select, true)]
            public static GetAllEmployees() : DataTable
            {
                null
            }

            [DataObjectMethodAttribute(DataObjectMethodType.Delete, true)]
            public DeleteEmployeeByID(employeeID : int) : void
            {
                throw Exception("The value passed to the delete method is " + employeeID.ToString());
            }
        }

    But I still can't find where, how and what I must configure to access it:

        <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod=" ?????????? ">
        </asp:ObjectDataSource>

    A Web Application doesn't support App_Code, but I can use a compiled Bin somehow; the question is how? The text from this link only confused me more :( Thank you.

    Read the article

  • Permission denied (maybe missing INTERNET permission) when calling web service

    - by Maxim
    I'm trying to use a .NET SOAP web service with the ksoap2 lib. The example from http://www.vimeo.com/9633556 shows how to do it correctly; below is the code from that example. Everything should work, but when I try to make the call itself (httpTransport.call) I get a "Permission denied (maybe missing INTERNET permission)" exception. Moreover, I don't see the Internet permission listed among the permissions in the Application info window. I tried this on the emulator and on a Google phone. I would be very appreciative if somebody could help with it. Thanks.

        public void CelsiusToFahrenheit()
        {
            String SOAP_ACTION = "http://tempuri.org/CelsiusToFahrenheit";
            String METHOD_NAME = "CelsiusToFahrenheit";
            String NAMESPACE = "http://tempuri.org/";
            String URL = "http://www.w3schools.com/webservices/tempconvert.asmx";

            SoapObject Request = new SoapObject(NAMESPACE, METHOD_NAME);
            Request.addProperty("Celsius", "32");

            SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
            soapEnvelope.dotNet = true;
            soapEnvelope.setOutputSoapObject(Request);

            AndroidHttpTransport httpTransport = new AndroidHttpTransport(URL);
            try
            {
                httpTransport.call(SOAP_ACTION, soapEnvelope);
                SoapPrimitive resultString = (SoapPrimitive)soapEnvelope.getResponse();
                res = resultString.toString();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }

    AndroidManifest.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest>
            <application>
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
                </activity>
            </application>
            <user-permission android:name="android.permission.INTERNET"></user-permission>
            <uses-sdk android:minSdkVersion="4" />

    Read the article

  • Web Start Application built on NetBeans Platform doesn't create desktop shortcut & start menu item

    - by rudolfv
    I've created a NetBeans Platform application that is launched using Java Web Start. I built the WAR file using the 'Build JNLP Application' command in NetBeans. I've added a desktop shortcut and menu item to the JNLP file, but for some reason these are not created when the application is launched.

    However, when I go to Control Panel - Java - Temporary Internet Files - View, select my application and click 'Install shortcuts to the selected application', the desktop and menu shortcuts are created correctly. Also, in the Java Console, the Shortcut Creation option is set to the following (the default, I presume): Prompt user if hinted.

    Below is a snippet of my JNLP file:

        <jnlp spec="6.0+" codebase="$$codebase">
            <information>
                <title>${app.title}</title>
                <vendor>SomeVendor (Pty) Ltd</vendor>
                <description>Some description</description>
                <icon href="${app.icon}"/>
                <shortcut online="true">
                    <desktop/>
                    <menu submenu="MyApp"/>
                </shortcut>
            </information>
        ...

    I'm stumped. Does anybody have an explanation for this? Thanks

    PS This is on both Windows XP and Windows 7. NetBeans version: 6.8

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using the JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, otherwise the same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way for me to do this without forcing me to manually rebuild the dependency? I currently rely on published Maven artifacts, and would like to continue doing so if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code.

    Read the article
