Search Results

Search found 13705 results on 549 pages for 'out of browser'.


  • Yahoo search: different results shown in two identical searches

    - by Marco Demaio
    Hello, simple question: searching on http://www.yahoo.it for villa matrimonio bologna, I noticed that Yahoo shows different results for the same search. You may need to retry a few times to reproduce it, perhaps exiting the browser and opening it again, or searching once, clearing the browser cookies and then searching again (it's even easier to see if you use two different browsers at the same time to search for the same phrase). To make this easy to reproduce, here are the queries shown in the address bar after each search, so you can just open these and compare the results: http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=yfp-t-709 and http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=sfp Note that the last parameter, fr, is different, but Yahoo set it (not me); I don't even know what it means. You can see in the search box that the searched phrase is IDENTICAL in both cases. So why is Yahoo giving different results for the same search phrase? I used the same browser and performed the test within a few minutes, simply by trying more than once. You may also notice that the number of results returned (shown on the left side of the page) differs: the first search returns 274K results, the second 5.38M. You might think this is just a glitch at Yahoo, but for almost a year now, while occasionally checking how some websites rank on Yahoo and on Google, I have noticed that two searches on the same phrase can show different results even on the same day, a few minutes or hours apart. I couldn't reproduce this behaviour on Google, so I can't say for sure it happens there, but since it seems to have happened occasionally I was wondering whether anyone else has noticed it too. Do you know if this is normal behaviour for search engines? Because if it is normal (and I have only just noticed it), I wonder how you can tell how well a site is ranking on a search engine at all; you could even see a customer's website ranking differently from what the customer sees on their own PC.

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++):
      - Documentation about methods and class members is added next to the code, in the Doxygen format. Doxygen is run on the sources on a server, so the output can be read in a web browser.
      - Overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a wiki.
    I personally think this approach works well because the documentation for members and classes sits right next to the code, while the overview pages are very easy to edit in the wiki (and it's also easy to add images, tables, ...). A web browser lets you read both. My co-worker now suggests putting everything in Doxygen, because we could then create one big help file containing everything (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder than editing a wiki, especially when you want to add tables, images, and so on (or is there a 'preview' tool for Doxygen that doesn't require you to regenerate the documentation before you can see the result?). What do big open-source (or closed-source) projects use to write their code documentation? Do they also split it between Doxygen-style comments and a wiki, or do they use another system? What is the most appropriate way to expose the documentation: via a web server/browser, or via a big (several hundred MB) help file? Which approach do you take when writing code documentation?
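
    For readers who haven't used the in-source half of such a setup, this is roughly what a Doxygen-style comment block looks like. This is a made-up Java example for illustration only; the same /** ... */ syntax with \brief, \param and \return applies to the C++ code discussed above:

        /** \brief Utility conversions documented in the Doxygen style described above. */
        public final class TemperatureUtil {

            /**
             * \brief Converts a temperature reading from Celsius to Fahrenheit.
             *
             * \param celsius the temperature in degrees Celsius
             * \return the same temperature in degrees Fahrenheit
             */
            public static double toFahrenheit(double celsius) {
                return celsius * 9.0 / 5.0 + 32.0;
            }

            private TemperatureUtil() {
                // no instances; this class only exists to carry the documented example
            }
        }

    Doxygen run over the sources turns blocks like this into the browsable HTML mentioned above.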

    Read the article

  • Internet Explorer will not open Office files

    - by geekrutherford
    An issue was brought to my attention today at work where certain users were unable to open Office files (specifically Excel) from Internet Explorer 7. The user would click a button which simply made an inline JS call to open a pop-up pointing to the .xlsx file on the server. IE would open the pop-up, and shortly thereafter the pop-up would disappear without the file ever opening. I tweaked the security settings in the user's browser: added the site to the list of trusted sites and lowered the security settings to Medium-Low. This allowed IE to at least prompt with the Save or Open message, but clicking either one resulted in "Internet Explorer Could Not Open the Site...". Perturbed, I retreated back to Geek Central (aka my desk) and modified my application so that instead of simply pointing the browser at the file, it used Response.TransmitFile() to stream the file to the browser. I thought to myself, "this is perfect, it has to work!!!" Alas, no luck. Bewildered and confused, I returned to the lone user's computer and started looking around the various IE options. I stumbled upon "Clear SSL State" under the "Content" tab, which clears out all SSL certificates on the client and forces it to refresh them. Doing this, together with resetting the security levels for all zones back to their defaults, seemed to do the trick.

    Read the article

  • RDA and Fusion Middleware Diagnostic Framework Integration

    - by Daniel Mortimer
    Further to my last blog entry, "FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!", I have spent some time exploring how Remote Diagnostic Agent (RDA) integrates with Diagnostic Framework. By default, Remote Diagnostic Agent collects information about the last 10 incidents captured in the Automatic Diagnostic Repository (ADR). This information can be located via RDA's Start Page menu system; see the screenshot below. Screenshot - Viewing Diagnostic Framework Incident Information in an RDA Package. Note: in the next release of RDA, version 4.30, the Diagnostic Repository menu label will have its own position in the WebLogic managed server sub-menu hierarchy rather than being a child item of the logs menu. Diagnostic Framework is also capable of launching the RDA engine and including the output in an incident package. This is achieved via the command "IPS GENERATE PACKAGE". The RDA output is written to DOMAIN_HOME/servers/<server name>/adr/diag/ofm/<domain name>/<server name>/incpkg/pkg_[number]/seq_[number]/rda The collected files are best viewed (as shown in the screenshot above) by opening the RDA Start Page, "DFW__start.htm", from this directory in a browser. If you do not have a browser available on the host machine, zip the contents of the rda directory, then transfer and extract it on a machine which does. The explanation of the integration goes a little deeper. If you want to know more, read My Oracle Support document: Understanding RDA and FMW 11g Diagnostic Framework Integration [Document 1503644.1]

    Read the article

  • Oracle Financial Management Analytics 11.1.2.2.300 is available

    - by THE
    (guest post by Greg) Oracle Financial Management Analytics 11.1.2.2.300 is now available for download from My Oracle Support as Patch 15921734.
    New features in this release:
      - Support for the new Oracle BI mobile HD iPad client.
      - New Account Reconciliation Management and Financial Data Quality Management analytics.
      - Improved Hyperion Financial Management analytics and usability enhancements.
      - Enhanced Configuration Utility to support multiple products. For HFM, FCM or ARM, and FDM, both Oracle and Microsoft SQL Server databases are supported.
      - Simplified test-to-production migration of OFMA.
    Web browser support for Oracle Financial Management Analytics:
      - Internet Explorer 9 (both 32- and 64-bit)
      - Firefox 6.x
      - Chrome 12.x
    See the OBIEE 11.1.1.6 certification matrix: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html
    Oracle Financial Management Analytics compatibility: the following product versions are supported:
      - Oracle Hyperion Financial Data Quality Management Release 11.1.2.2.300
      - Oracle Financial Close Manager Release 11.1.2.2.300
      - Oracle Hyperion Financial Management Release 11.1.2.2.300

    Read the article

  • How to Get a Smartphone-Style Word Suggestion on Windows

    - by Zainul Franciscus
    Have you ever wished that you could type faster and better in Windows? Then you're in luck, because today we'll show you how to get smartphone-style word suggestions in Windows. To do that, you need to install AI Type, a piece of software that offers word suggestions as you write in Windows. AI Type not only gives us the smartphone-style word suggestions we were after, it also improves our writing by suggesting words according to their context, and it tries to match words according to the probability with which other users have used them. Installing AI Type is a breeze: just download the installer from the AI Type website, run the executable, fill in a registration form, and you're all set to use AI Type for your daily writing. Once the installation is done, AI Type appears in your system tray.

    Read the article

  • Invalid HTML Response and JS Errors when you open your Application in Visual Studio 2013

    - by imran_ku07
    I was working on an application which uses Telerik controls. The application had been working fine for a while, and then it suddenly stopped working properly: a lot of my application pages became very ugly, I found JavaScript errors in every browser's console, and when I checked the page source, the generated HTML was messy and invalid. This only happened on my local machine; if someone else on my network accessed my application pages, they got correct HTML and no JavaScript errors. This baffled me, because the same page generated invalid HTML (and JavaScript errors) when I accessed it in a local browser, but correct HTML (and no JavaScript errors) when someone accessed it remotely. Then I realized that the only recent change I had made was opening the application in Visual Studio 2013 RTM, which I had installed a few days earlier. I closed Visual Studio 2013 and everything worked like a charm. That made me 100% sure this was caused by the new Visual Studio 2013 feature called Browser Link. I just opened the application again and added the following under <appSettings> in web.config, and everything was fine again:
        <add key="vs:EnableBrowserLink" value="false" />
    Happy coding :)

    Read the article

  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to a comment on my friend Andrew Brust's blog about Silverlight versus HTML 5. Andrew's blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another take from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx The commenter is raving about HTML 5 and how it is the future and SL is not. Well, my reaction is "hogwash". Sure, HTML 5 is important and does some interesting things; check out what Bing.com is doing with it on some days and you can see that. But to say that XAML is dead is nuts. I have been wrapping up bugs on a cross-browser version of an application for a while now. What's the state of cross-browser development today? Well, better than a few years ago, but far from perfect. Each browser vendor interprets the specs in a slightly different way, and you must account for them all. The worst offender among the major browsers? Apple and its Safari; I had to make more changes for it than for any other. What has that got to do with XAML and SL/WPF? Well, you write your SL code once and it runs in all browsers that support it, with no changes. The iPad does not? Well, they should be taken to court and forced to, just as MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both. Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML 5, but it is not a panacea, nor will it replace all other technologies.

    Read the article

  • UNESCO, J-ISIS, and the JavaFX 2.2 WebView

    - by Geertjan
    J-ISIS, the newly developed Java version of the UNESCO generalized information storage and retrieval system for bibliographic information, continues to be under heavy development and code refactoring in its open source repository. Read more about J-ISIS and its NetBeans Platform basis here. A new version will soon be available for testing, and it would be cool to see the application in action at that time. Currently it looks as follows, though note that the menu bar is under development and many of the menus you see there will be replaced or removed soon: About one aspect of the application, the browser, which you can see above, Jean-Claude Dauphin, its project lead, wrote me the following: "The DJ-Native Swing JWebBrowser has been a nice solution for getting a Java web browser on most popular platforms. But the Java integration has always produced some strange behavior from time to time (like losing the focus on the other components after clicking on the browser window, overlapping of windows, etc.), most probably because of mixing heavyweight and lightweight components and also because of our incompetence in solving the issues. Thus, we recently changed to the JavaFX 2.2 WebView. The integration with Java is fine and we have got rid of all the DJ-Native Swing problems. However, we have lost some features which came for free with the native browsers, such as downloading resources in different formats and opening them in the right application." This is a pretty cool step forward, i.e., the JavaFX integration. It also confirms for me something I've heard other people say too: the JavaFX WebView component is a perfect low-threshold entry point for Swing developers feeling their way into the world of JavaFX.
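
    For anyone curious what such a switch looks like in code, here is a minimal sketch of embedding the JavaFX 2.2 WebView inside an existing Swing application via JFXPanel. It is not taken from the J-ISIS sources; the class name and URL are placeholders:

        import javafx.application.Platform;
        import javafx.embed.swing.JFXPanel;
        import javafx.scene.Scene;
        import javafx.scene.web.WebView;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;

        public class WebViewInSwing {

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    @Override
                    public void run() {
                        JFrame frame = new JFrame("WebView demo");
                        // JFXPanel bridges Swing and JavaFX; creating it also starts the JavaFX runtime.
                        final JFXPanel fxPanel = new JFXPanel();
                        frame.add(fxPanel);
                        frame.setSize(900, 600);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);

                        // JavaFX scene graph objects must be touched on the JavaFX application thread.
                        Platform.runLater(new Runnable() {
                            @Override
                            public void run() {
                                WebView webView = new WebView();
                                webView.getEngine().load("http://www.unesco.org/");
                                fxPanel.setScene(new Scene(webView));
                            }
                        });
                    }
                });
            }
        }

    Because the browser lives inside a single lightweight Swing component, the heavyweight/lightweight focus and window-overlap problems described in the quote above go away.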

    Read the article

  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (realtime) multiplayer game with Construct 2 and a Socket.IO JavaScript server. Right now the code does not include the array for each player:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
          socket.on("message", function(data) {
            x = x+1;
          });
        });
        setInterval(function() {
          io.sockets.emit("message", 'Pos,' + x);
        },100);

    I noticed a very annoying problem with my server today. It sends my X coordinate every 100 milliseconds. The problem was that when I switched to another browser tab, the browser stopped the game from running, and when I came back, I think the game had to work through all the queued packets: my offline debugging button still worked immediately, but the online button only responded after some seconds. So I changed my code so that it only sends out an update when it receives player input:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
          socket.on("message", function(data) {
            x = x+1;
            io.sockets.emit("message", 'Pos,' + x);
          });
        });

    Now it updates immediately, even when I have been inactive on the browser tab for a long time, confirming my suspicion that it had to get through all the data. Confirm, please! But it would be insane to only send information on client input in a real-time game, so how would I write an AFK function? I would think it is easier to run an AFK boolean check on the server. Here is what I need help with:

        playerArray[Me]
        if ( "Not given any input for X amount of seconds" ) {
          "Don't send data"
        } else {
          "Send data"
        }
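
    A rough sketch of that check follows. The real server is Node.js/Socket.IO, so this Java version (all names made up) only illustrates the shape of the logic: record a last-input timestamp whenever a player sends a message, and let the broadcast loop skip players whose timestamp is older than the AFK threshold.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class AfkTracker {

            private static final long AFK_THRESHOLD_MS = 10_000; // "X amount of seconds"

            // playerId -> time of the player's last input, in milliseconds
            private final Map<String, Long> lastInput = new ConcurrentHashMap<>();

            /** Call this whenever a "message" arrives from a player. */
            public void recordInput(String playerId) {
                lastInput.put(playerId, System.currentTimeMillis());
            }

            /** The broadcast loop calls this to decide whether to include a player's data. */
            public boolean shouldSendDataFor(String playerId) {
                Long last = lastInput.get(playerId);
                if (last == null) {
                    return false; // no input seen yet, treat as AFK
                }
                return System.currentTimeMillis() - last <= AFK_THRESHOLD_MS;
            }
        }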

    Read the article

  • Ubuntu One, compressed files

    - by user8179
    I have uploaded some files to my Ubuntu One account and it seems to work great most of the time. I usually upload them directly from Nautilus by right-clicking the folder, using the "Synchronize this folder" option, and then making sure that the file I want to upload is published. Then I usually test the whole thing by trying to download it: I right-click the file again to get its URL and paste it into my web browser. This usually works fine. But yesterday I uploaded two compressed files (".tar.bz2"). When I tried to open them after downloading them with my web browser (Opera), it failed. I found that the downloaded file was bigger than the original (2358 B instead of 2335 B: 15 B added at the beginning of the file and 8 B added to the end), and someone on the Opera channel (IRC) at OperaNet (Europe) figured out that the reason is that the server compresses the file again, "without telling Opera". So to be able to extract the file I need to add ".gz" to the file name and then extract it twice. If I download it with Firefox, however, I don't need to do that, so maybe Firefox figures this out in a way that Opera does not. Someone also tried to download the file with wget and with another browser, and he got the same result as I did with Opera, that is, the file is compressed a second time by the server. I guess "the server" is the Ubuntu One server, right? So why is this? Could it be done better somehow? Or did I do something wrong when uploading the files? It also seems like this extra compression does not always happen, because when I tried again a few minutes ago, the file came down at its right size (2335 B), without the extra compression. But the other file (114 MiB) was still compressed twice.

    Read the article

  • Groovy Grapes in NetBeans IDE

    - by Geertjan
    The start of Groovy Grapes support in NetBeans IDE. Below you see a pure Groovy project, with the Groovy JAR and the Ivy JAR automatically on its classpath. There's also a Groovy script that makes use of a @Grab annotation. In the bottom left, in the Services window, you also see a Grape Repository browser, i.e., showing you the JARs that are currently in ".groovy/grapes". Click the images below to get a better look at them. Next, you see what happens when the project is run. The @Grab annotation automatically starts downloading the JARs that are needed and puts them into the ".groovy/grapes" folder. However, the "no suitable classloader found for grab" error message (which Google shows is a problem for lots of developers) prevents the application from running successfully. The final screenshot shows that I've put the JARs that I need onto the classpath of the project. I did that manually, hoping to learn from the NetBeans Maven project or the NetBeans Gradle project how to do that automatically. Also note that the @Grab annotation has been commented out. Now the error message about the classloader is avoided and the project runs. What needs to happen for Groovy Grapes support to be complete in NetBeans IDE:
      - Figure out how to add the downloaded JARs to the project classpath automatically.
      - Fix the refresh problem in the Grape Repository browser, i.e., right now the refresh doesn't happen automatically yet.
      - Hopefully find a way to get around the grab classloader problem, i.e., it's not ideal that one needs to comment out the annotation.
      - Let the user specify a different Grape repository, i.e., right now ".groovy/grapes" is assumed, but the user should be able to point the repository browser to something different. Maybe there should be support for multiple Grape repositories?
    Comments/feedback/help is welcome.

    Read the article

  • Unable to view 2 local sites over network

    - by gentrobot
    I have 2 websites running on my local machine that I'd like to view from other machines on the same network. In /etc/apache2/sites-available/site1.com:

        <VirtualHost *:80>
          ServerName site1.com
          DocumentRoot /var/www/answers/app/webroot
          DirectoryIndex index.php
          <Directory "/var/www/answers/app/webroot">
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    And in /etc/apache2/sites-available/site2.com:

        <VirtualHost *:80>
          ServerName site2.com
          DocumentRoot /var/www/answers2/app/webroot
          DirectoryIndex index.php
          <Directory "/var/www/answers2/app/webroot">
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    I have added 2 entries to the /etc/hosts file:

        127.0.0.1 site1.com
        127.0.0.1 site2.com

    Now, when I point the browser on my machine to site1.com, it shows me the first site, and pointing the browser to site2.com shows me the second site. However, when I type the local IP of my machine into the browser, it always shows site2. How can I change it to switch between site1 and site2? Is there a way I can view both sites from another machine (especially mobile devices over the wireless network)?

    Read the article

  • Advanced Experiments with JavaScript, CSS, HTML, JavaFX, and Java

    - by Geertjan
    Once you're embedding JavaScript, CSS, and HTML into your Java desktop application via the JavaFX browser, a whole range of new possibilities opens up to you. For example, here's an impressive page online; notice that you can drag items and drop them in new places: http://nettuts.s3.amazonaws.com/127_iNETTUTS/demo/index.html The source code of the above is provided too, so you can drop the various files directly into your NetBeans module and use the JavaFX WebEngine to load the HTML page into the JavaFX browser. Once the JavaFX browser is in a NetBeans TopComponent, you'll have the start of an offline news composer, something like this:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        webengine = view.getEngine();
        URL url = getClass().getResource("index.html");
        webengine.load(url.toExternalForm());
        webengine.getLoadWorker().stateProperty().addListener(
            new ChangeListener() {
                @Override
                public void changed(ObservableValue ov, State oldState, State newState) {
                    if (newState == State.SUCCEEDED) {
                        Document document = (Document) webengine.executeScript("document");
                        NodeList list = document.getElementById("columns").getChildNodes();
                        for (int i = 0; i < list.getLength(); i++) {
                            EventTarget et = (EventTarget) list.item(i);
                            et.addEventListener("click", new EventListener() {
                                @Override
                                public void handleEvent(Event evt) {
                                    instanceContent.add(new Date());
                                }
                            }, true);
                        }
                    }
                }
            });

    The above code shows how, whenever a news item is clicked, the current date can be published into the Lookup. As you can see, I have a viewer component listening to the Lookup for dates.

    Read the article

  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went two months without an update, I switched to the Google Chrome browser yesterday. Everything is okay except for some strange behavior on some pages, and crashes after loading "chrome://" configuration pages. The best-known website showing the strange behavior is YouTube; here is a picture of what I see: When I open the user menu in the top right corner, the rendering breaks like that, and even after closing the menu, some parts of the menu stay displayed. You may say it's a YouTube problem, but no: I have this problem on at least three other websites. Here it is on Imgur: The problem doesn't affect the whole page; sometimes it starts from the middle of the screen. The interesting part is that it always happens at the same distance from the right border. When I inspect the DOM elements with the developer tools, the overlay which shows an element's position is rendered the way it should be. What's more, if there is an anchor inside the broken area, it still works when clicked. Selecting text in the broken page is impossible. I hope this is enough information for some advice; thanks in advance. :) EDIT: Here is what the browser reports in "chrome://gpu-internals/":
    Graphics feature status:
      - Canvas: Software only, hardware acceleration unavailable
      - Compositing: Hardware accelerated
      - 3D CSS: Hardware accelerated
      - CSS Animation: Software animated
      - WebGL: Hardware accelerated
      - WebGL multisampling: Hardware accelerated
    Problems detected:
      - Accelerated CSS animation has been disabled at the command line.
      - Accelerated 2d canvas is unstable in Linux at the moment.
    Ubuntu 12.04 | Gnome Shell 3.4.1 | ATI Radeon 4550 | Screen resolution 1024x768 | Chrome version 20.0.1132.57 (Official Build 145807)

    Read the article

  • Oracle Secure Global Desktop (SGD) 5.1

    - by wcoekaer
    Last week we released the latest update of Oracle Secure Global Desktop. Release 5.1 introduces a number of bug fixes and smaller changes, but the most interesting one is definitely increased support for HTML5-based client access. In SGD 5.0 we added support for Apple iPads using Safari to connect to SGD and display your session right inside the browser. The traditional model for SGD is that you connect to the webtop using a web browser, and applications are displayed locally using a local client (tta). This client gets installed the first time you connect. So in the traditional model (which works very well...) you need a web browser, Java, and the tta client. With the addition of HTML5 support there's no longer a need to install a local client; in fact, there is also no longer a need to have Java installed. We currently support Chrome as the browser for HTML5 clients. This allows us to enable HTML5 on Android devices and also on desktops running Chrome (Windows, Mac OS X, Linux). Connections will work transparently across proxy servers as well. So now you can run any SGD published app or desktop right from your web browser, inside a browser window. This is very convenient and cool.

    Read the article

  • Facebook OAuth Logout

    - by Derek Troy-West
    I have an application that integrates with Facebook using OAuth 2. I can authorize with FB and query their REST and Graph APIs perfectly well, but when I authorize, an active browser session is created with FB. I can then log out of my application just fine, but the session with FB persists, so if anyone else uses the browser they will see the previous user's FB account (unless the previous user manually logs out of FB as well). The steps I take to authorize are:
    1. Call [LINK: graph.facebook.com/oauth/authorize?client_id...]. This opens a Facebook login/connect window if the user's browser doesn't already have an active FB session. Once they log in to Facebook, they are redirected to my site with a code I can exchange for an OAuth token.
    2. Call [LINK: graph.facebook.com/oauth/access_token?client_id..] with the code from (1).
    Now I have an OAuth token, the user's browser is logged into my site and into FB, and I call a bunch of APIs to do stuff, e.g. [LINK: graph.facebook.com/me?access_token=..]. Let's say my user wants to log out of my site. The FB terms and conditions demand that I perform single sign-off, so that when the user logs out of my site, they are also logged out of Facebook. There are arguments that this is a bit daft, but I'm happy to comply if there is any way of actually achieving it. I have seen the following suggestions:
    A. Use the JavaScript API to log out: FB.Connect.logout(). I tried using that, but it didn't work, and I'm not sure how it could, as I don't use the JavaScript API anywhere on my site. The session isn't maintained or created by the JavaScript API, so I'm not sure how it's supposed to expire it either.
    B. Use [LINK: facebook.com/logout.php]. This was suggested by an admin in the Facebook forums some time ago. The example given related to the old (non-OAuth) way of getting FB sessions, so I don't think I can apply it in my case.
    C. Use the old REST API expireSession or revokeAuthorization. I tried both of these; while they do expire the OAuth token, they don't invalidate the session that the browser is currently using, so they have no effect: the user is not logged out of Facebook.
    I'm really at a bit of a loose end. The Facebook documentation is patchy, ambiguous and pretty poor, and the support on the forums is non-existent; at the moment I can't even log in to the Facebook forum, and aside from that, their own FB Connect integration doesn't even work on the forum itself. Doesn't inspire much confidence. Ta for any help you can offer. Derek
    P.S. I had to change HTTPS to LINK, not enough karma to post links, which is probably fair enough.

    Read the article

  • Add Silverlight Bing Maps Control to Windows Mobile 7 application

    - by Jacob
    I know the bits just came out today, but one of the first things I want to do with the newly released Windows Mobile 7 SDK is put a map up on the screen and mess around. I've downloaded the latest version of the Silverlight Maps Control and added the references to my application. As a matter of fact, the VS 2010 design view of MainPage.xaml shows the map control after adding the namespace and placing the control. I'm using the provided VS 2010 Express version that comes with the Win Mobile 7 SDK and have just used the New Project - Windows Phone Application template. When I try to build, I get two warnings related to the Microsoft.Maps.MapControl DLLs:
    Warning 1: The primary reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework. "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e".
    Warning 2: Identical to Warning 1, but for "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL".
    I'm leaning towards some way of adding System.Windows.Browser to the targeted framework version, but I'm not even sure that is possible. To be more specific: I'm looking for a way to get the Silverlight Maps Control up in a Windows Phone 7 series application, if possible. Thanks.

    Read the article

  • How do I merge a transient entity with a session using Castle ActiveRecordMediator?

    - by Daniel T.
    I have a Store and a Product entity:

        public class Store
        {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public ISet<Product> Products { get; set; }
        }

        public class Product
        {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public Store ParentStore { get; set; }
            public string Name { get; set; }
        }

    In other words, I have a Store that can contain multiple Products in a bidirectional one-to-many relationship. I'm sending data back and forth between a web browser and a web service. The following steps emulate the communication between the two, using session-per-request. I first save a new instance of a Store:

        using (new SessionScope())
        {
            // this is data from the browser
            var store = new Store { Id = Guid.Empty };
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Then I grab the store out of the DB, add a new product to it, and save it:

        using (new SessionScope())
        {
            // this is data from the browser
            var product = new Product { Id = Guid.Empty, Name = "Apples" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Up to this point, everything works well. Now I want to update the Product I just saved:

        using (new SessionScope())
        {
            // data from browser
            var product = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);
            // throws exception on this call
            ActiveRecordMediator.SaveAndFlush(store);
        }

    When I try to update the product, I get the following exception: "a different object with the same identifier value was already associated with the session: 00000000-0000-0000-0000-000000000000, of entity: Product". As I understand it, the problem is that when I get the Store out of the database, it also gets the Product that's associated with it, so both entities are persistent. I then try to save a transient Product (the one with the updated info from the browser) that has the same ID as the one already associated with the session, and an exception is thrown. However, I don't know how to get around this. If I could get at the NHibernate.ISession, I could call ISession.Merge() on it, but it doesn't look like ActiveRecordMediator has anything similar (SaveCopy threw the same exception). Does anyone know the solution? Any help would be greatly appreciated!

    Read the article

  • How to design this?

    - by Akku
    How can I make this entire process a single event, following http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html, and draw the chart on a single click? I am new to servlets, please guide me. When a user clicks the "go" button with some input, the data goes to a servlet, say "Test3". The servlet processes the user's data and generates/feeds the data table dynamically, and then I call the HTML page to draw the chart as shown in the tutorial link above. The problem is that when I call the servlet, it gives me a long JSON string in the browser, as in the tutorial: "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'1333639331',table:{cols:[{............................". When I then manually call the HTML page to draw the chart, I see the chart. But when I call the HTML page directly via the request dispatcher from the servlet, I don't get the result. This is my code and output; I need a suggestion as to how I should approach calling the chart:

        public class Test3 extends HttpServlet implements DataTableGenerator {
            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                DataSourceHelper.executeDataSourceServletFlow(request, response, this, isRestrictedAccessMode());
                RequestDispatcher rd;
                rd = request.getRequestDispatcher("new.html"); // calls the html page which draws the chart from the data added by the servlet
                rd.include(request, response); //forward(request, response);
            }

            @Override
            public Capabilities getCapabilities() {
                return Capabilities.NONE;
            }

            protected boolean isRestrictedAccessMode() {
                return false;
            }

            @Override
            public DataTable generateDataTable(Query query, HttpServletRequest request) {
                // Create a data table.
                DataTable data = new DataTable();
                ArrayList<ColumnDescription> cd = new ArrayList<ColumnDescription>();
                cd.add(new ColumnDescription("name", ValueType.TEXT, "Animal name"));
                cd.add.........

    I get the following result along with the unprocessed HTML page:

        google.visualization.Query.setResponse({version:'0.6',statu.....
        <html> <head> <title>Getting Started Example</title> ....
        (the entire HTML page, as-is, in the browser)

    What I need is this: when a user clicks the go button, the servlet should process the data and call the HTML page to draw the chart, without the JSON string appearing in the browser, all in one user click. What should my approach be, or how should I design this? There are no errors in the code: when I run the servlet I get the JSON string in the browser, and then when I run the HTML page manually I get the chart drawn. So how can I do servlet processing plus the HTML page drawing the chart as the final result in one go, without the long JSON string appearing in the browser? There is no problem with the HTML code.
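
    One way to untangle this, sketched below under some assumptions (the GoServlet name, the q parameter and the redirect target are made up for illustration): keep Test3 as a pure data-source servlet, i.e. nothing but the DataSourceHelper.executeDataSourceServletFlow(...) call and the DataTableGenerator methods, with no RequestDispatcher, and let the go button hit a separate, ordinary servlet that simply sends the browser on to the chart page. The chart page's google.visualization.Query then requests Test3 itself, so the JSON is consumed by the chart code and never shows up in the browser window.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        /**
         * Handles the "go" button. It never writes visualization JSON itself; it only
         * sends the browser to the chart page, carrying the user's input along.
         */
        public class GoServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                String input = request.getParameter("q");
                // new.html draws the chart; its google.visualization.Query should point
                // at the data servlet (Test3), not at this one.
                String encoded = java.net.URLEncoder.encode(input == null ? "" : input, "UTF-8");
                response.sendRedirect("new.html?q=" + encoded);
            }
        }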

    Read the article

  • The remote server returned an error: NotFound.

    - by xscape
    Hi, I'm trying to retrieve a string from my old web service, but it gives me the error "The remote server returned an error: NotFound." and its InnerException is:

        {System.Net.WebException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound.
           at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
           at System.Net.Browser.BrowserHttpWebRequest.<c_DisplayClass5.b_4(Object sendState)
           at System.Net.Browser.AsyncHelper.<c_DisplayClass2.b_0(Object sendState)
           --- End of inner exception stack trace ---
           at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
           at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)}

    This is the method where the error is raised; it returns a string:

        void client_ValidateUserEncryptedCompleted(object sender, DummyWS.ValidateUserEncryptedCompletedEventArgs e)
        {
            object token = e.Result;
            client = new DummyWS.MachineHistoryWSSoapClient();
            if (token != null)
            {
                client.GetSummaryXMLAsync(token, "", "");
            }
        }

    I am currently using Silverlight 4.0 and my ServiceReferences.ClientConfig is:

        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="MachineHistoryWSSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="None" />
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://localhost/MHVwsModified/MachineHistoryWS.asmx"
                        binding="basicHttpBinding" bindingConfiguration="MachineHistoryWSSoap"
                        contract="DummyWS.MachineHistoryWSSoap" name="MachineHistoryWSSoap" />
            </client>
          </system.serviceModel>
        </configuration>

    The Web.config in my web service is:

        <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
          <system.web>
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
              </assemblies>
            </compilation>
            <authentication mode="Windows" />
          </system.web>
          <system.webServer>
            <directoryBrowse enabled="true" />
          </system.webServer>
        </configuration>

    Any help will be appreciated, thank you.

    Read the article

  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here): By default, Rails appends assets' timestamps to all asset paths. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL, as the timestamp is part of it, which in turn busts the cache). It's the responsibility of the web server you use to set the far-future expiration date on cached assets so that you can take advantage of this feature. Here's an example for Apache:

        # Asset Expiration
        ExpiresActive On
        <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
          ExpiresDefault "access plus 1 year"
        </FilesMatch>

    If you look at the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated. So it should work like this:
    1. The browser says 'give me this page'.
    2. The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year; it's not gonna change.'
    3. On reloads, the browser says 'I'm not asking for that CSS file, because my local copy is still good.'
    4. A month later, you edit and save the file, which changes the timestamp, which means the file is no longer called scaffold.css?1268228124, because the numbers change. When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'
    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above. Now: how do I tell if the caching and cache busting are working? I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "Leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity panel, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached it in the first place or because the cache was busted. How can I tell?
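
    Beyond the Firebug plugins, a quick way to see what the server is actually promising (a sketch, not from the original post; the URL is a placeholder) is to request the timestamped asset directly and print the caching headers. A far-future Expires plus a Cache-Control max-age on that URL is what the Apache rule above is supposed to produce:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class CacheHeaderCheck {

            public static void main(String[] args) throws Exception {
                // The timestamped asset URL produced by the helper function described above.
                URL url = new URL("http://localhost/stylesheets/scaffold.css?1268228124");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");

                System.out.println("Status:         " + conn.getResponseCode());
                // These two headers are what the ExpiresDefault rule should be setting.
                System.out.println("Expires:        " + conn.getHeaderField("Expires"));
                System.out.println("Cache-Control:  " + conn.getHeaderField("Cache-Control"));
                // Useful if the browser revalidates (304) instead of serving straight from cache.
                System.out.println("Last-Modified:  " + conn.getHeaderField("Last-Modified"));
                System.out.println("ETag:           " + conn.getHeaderField("ETag"));

                conn.disconnect();
            }
        }

    If those headers are present, a normal visit should produce no request at all for the asset (or at most a 304 on an explicit reload), while editing the file, and therefore changing the timestamp in the URL, should produce a fresh 200.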

    Read the article

  • Naming selenium grid nodes. Spawning a specific node

    - by ???? ????
    I'm trying to implement a kind of default queue in the Selenium hub. It is possible to specify a node's name (actually its environment, something like "firefox on ubuntu" or "chrome on windows"). Selenium Grid itself has a default queue which works on a 'first in, first out' principle, but I want to prioritize some of the tasks given to the Selenium server. I have no way to introduce a custom queue (it seems there is no API for that), so I decided to separate the queue logic from the Selenium server and only call a specific node with a specific name (environment), for example "firefox important node" or something like that. So I want to know how to tell Selenium directly which node to use for my task. And generally, am I thinking about this the right way? Here are my configs. hubConfig.json.erb:

        {
          "host": null,
          "port": <%= node[:selenium][:server][:port] %>,
          "newSessionWaitTimeout": -1,
          "servlets" : [],
          "prioritizer": null,
          "capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
          "throwOnCapabilityNotPresent": true,
          "nodePolling": <%= node[:selenium][:server][:node_polling] %>,
          "cleanUpCycle": <%= node[:selenium][:server][:cleanup_cycle] %>,
          "timeout": <%= node[:selenium][:server][:timeout] %>,
          "browserTimeout": 0,
          "maxSession": <%= node[:selenium][:server][:max_session] %>
        }

    nodeConfig.json.erb:

        {
          "capabilities": [
            { "browserName": "firefox", "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "chrome", "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "phantomjs", "maxInstances": 5, "seleniumProtocol": "WebDriver" }
          ],
          "configuration": {
            "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
            "maxSession": <%= node[:selenium][:node][:max_session] %>,
            "port": <%= node[:selenium][:node][:port] %>,
            "host": "<%= node[:fqdn] %>",
            "register": true,
            "registerCycle": <%= node[:selenium][:node][:register_cycle] %>,
            "hubPort": <%= node[:selenium][:server][:port] %>
          }
        }

    And my Driver class:

        ...
        def remote_driver
          @browser = Watir::Browser.new(:remote,
            :url => "http://myhub.com:4444/wd/hub",
            :http_client => client,
            :desired_capabilities => capabilities
          )
        end

        def capabilities
          Selenium::WebDriver::Remote::Capabilities.send(
            "firefox",
            :javascript_enabled => true,
            :css_selectors_enabled => true,
            :takes_screenshot => true
          )
        end

        def client
          client = Selenium::WebDriver::Remote::Http::Default.new
          client.timeout = 360
          client
        end
        ...

    I still don't know how to use a specific node for my task. If I try to start a driver with :name => "firefox important node" and extend nodeConfig.json.erb's configuration with environments:

        - name: "firefox important node"
          browser: "*firefox"
        - name: "Firefox36 on Linux"
          browser: "*firefox"

    Selenium just starts a random Firefox browser on a random node. How can I control it?

    Read the article

  • Postfix certificate verification failed for smtp.gmail.com

    - by Andi Unpam
    I have a problem: my email server uses Postfix with Gmail SMTP (a Google Apps account), but it keeps failing with "SASL authentication failed". I send email with a PHP script; in the error logs it looks like a wrong password, and after I open the URL from the log in a browser and pass the captcha verification, Postfix can send again, but after 2-3 days the same thing happens again. This is my Postfix config:

        #myorigin = /etc/mailname
        smtpd_banner = Hostingbitnet Mail Server
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = webmaster.hostingbitnet.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost, webmaster.hostingbitnet.com, localhost.localdomain, 103.9.126.163
        relayhost = [smtp.googlemail.com]:587
        relay_transport = relay
        relay_destination_concurrency_limit = 1
        mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 103.9.126.0/24
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        default_transport = smtp
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/google-apps
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes
        smtp_sender_dependent_authentication = yes
        tls_random_source = dev:/dev/urandom
        default_destination_concurrency_limit = 1
        smtp_tls_CAfile = /etc/postfix/tls/root.crt
        smtp_tls_cert_file = /etc/postfix/tls/cert.pem
        smtp_tls_key_file = /etc/postfix/tls/privatekey.pem
        smtp_tls_session_cache_database = btree:$data_directory/smtp_tls_session_cache
        smtp_tls_security_level = may
        smtp_tls_loglevel = 1
        smtpd_tls_CAfile = /etc/postfix/tls/root.crt
        smtpd_tls_cert_file = /etc/postfix/tls/cert.pem
        smtpd_tls_key_file = /etc/postfix/tls/privatekey.pem
        smtpd_tls_session_cache_database = btree:$data_directory/smtpd_tls_session_cache
        smtpd_tls_security_level = may
        smtpd_tls_loglevel = 1
        #secure
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination

    Log from mail.log:

        Oct 30 14:51:13 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits)
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: 87E2739400B1: SASL authentication failed; server smtp.gmail.com[74.125.25.109] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 ix9sm156630pbc.7
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: setting up TLS connection to smtp.gmail.com[74.125.25.108]:587
        Oct 30 14:51:15 webmaster postfix/smtp[9506]: certificate verification failed for smtp.gmail.com[74.125.25.108]:587: untrusted issuer /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
        Oct 30 14:51:16 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.108]:587: TLSv1 with cipher RC4-SHA (128/128 bits)
        Oct 30 14:51:17 webmaster postfix/smtp[9506]: 87E2739400B1: to=<[email protected]>, relay=smtp.gmail.com[74.125.25.108]:587, delay=972, delays=967/0.03/5.5/0, dsn=4.7.1, status=deferred (SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0)
        Oct 30 14:51:17 webmaster postfix/error[9508]: B3960394009D: to=<[email protected]>, orig_to=<root>, relay=none, delay=29992, delays=29986/5.6/0/0.07, dsn=4.7.1, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0)

    By the way, I made the certificate following this link: http://koti.kapsi.fi/ptk/postfix/postfix-tls-cacert.shtml and it worked, but after 2-3 days my email goes back to the invalid-SASL problem. I'm then required to log in with a browser and enter the captcha there; after entering the captcha the login succeeds and my email server can send emails again from telnet or the PHP script, but it gets into trouble again 2-3 days later. My question is: how can I make this certificate fix permanent? Thanks and greetings.

    Read the article
