Search Results

Search found 8113 results on 325 pages for 'explorer tabs'.


  • What do I need to do to make a WPF Browser Application (XBAP) that requires Full Trust work on Windows 7?

    - by Benoit J. Girard
    So this is a Visual Studio 2008, .NET, WPF, XBAP, Windows 7 question, regarding .NET trust policies. At work, we have several Web Browser Applications (.XBAP files) developed with Visual Studio 2008 (so .NET 3.5) that we deployed internally. These required a .NET FullTrust policy; we found a way to make a .MSI that adjusted the policy on individual stations, and everything worked great. Users love in-browser apps. This was last year, on Windows XP. This year our company started upgrading users to Windows 7, and now none of our Web Browser Applications work. The error message is "Trust Not Granted", as if the policy-changing .MSI had not been run. Other details: I can confirm that our apps work on Windows XP in Internet Explorer 7 and Firefox, and do not work on Windows 7 in either Internet Explorer 8 or Firefox. I must admit that .NET security policies mystify me. Still, I could not find any mention of this problem on the Net at large or on this site. Did anybody else encounter this problem? Any and all help welcome.

    Read the article

  • Shopify JSONP issue in ajaxAPI

    - by Aaron U
    I'm getting some odd responses back from the Shopify Ajax API for JSONP. If you cURL a Shopify Ajax API location such as http://storename.domain.com/cart.json?callback=handler you will get a JSONP response, but something is breaking the same request in browsers. It appears to be related to compression? Here is what each browser reports when attempting to call the JSONP endpoint as documented:

    Firefox: The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
    Internet Explorer: Internet Explorer cannot display the webpage
    Chrome/Safari/WebKit: Cannot decode raw data, or failed (Chrome)

    Attempted use via jQuery (results in a failed request, viewable in the network panel of the dev tools):

        $.getJSON('http://storename.domain.com/cart.json?callback=?', function(data) { ... });

    Here is some output from cURL, including response headers:

        $ curl -i http://storename.domain.com/cart.json?callback=CALLBACK_FUNC
        HTTP/1.1 200 OK
        Server: nginx
        Date: Tue, 18 Dec 2012 13:48:29 GMT
        Content-Type: application/javascript; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 200 OK
        ETag: cachable:864076445587123764313132415008994143575
        Cache-Control: max-age=0, private, must-revalidate
        X-Alternate-Cache-Key: cachable:11795444887523410552615529412743919200
        X-Cache: hit, server
        X-Request-Id: a0c33a55230fe42bce79b462f6fe450d
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _session_id=b6ace1d7b0dbedd37f7787d10e173131; path=/; HttpOnly
        X-Runtime: 0.033811
        P3P: CP="NOI DSP COR NID ADMa OPTa OUR NOR"

        CALLBACK_FUNC({"token":null,"note":null,"attributes":{},"total_price":0,...})

    Also related and unanswered here: Shopify Ajax API JSONP supported? Thanks

    Read the article

  • How to remove accidental branch in TortoiseHg?

    - by msorens
    (I am a relative newcomer to TortoiseHg, so bear with me :-) I use TortoiseHg on two machines to talk to my remote source repository. I made changes on one machine, committed them, and attempted to push them to the remote repository, BUT I forgot to first do a pull to get the latest code. The push gave me a few lines of output suggesting I may have forgotten to pull first (true!) and mentioned something like "abort: push creates new remote branches...". So I did a pull, which added several nodes to the head of my graph in the repository explorer. The problem is that the push I tried to do is now showing as a branch in the repository explorer. Looking from the server side (CodePlex), it shows no sign of my attempted push, indicating this accidental branch is still local to my machine. How can I remove this accidental branch? I tried selecting that node in the graph and then doing "revert", but it did not seem to do anything. I am wondering if it would be simplest to just discard my directory tree on my local machine and do a completely new, clean pull from the server...?

    Read the article

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

        Debugger: "Cygwin gdb Debugger"
        GDB debugger: gdb
        GDB command file: .gdbinit
        Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error: "Target request failed: failed to interrupt." I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: This is what it looks like in Process Explorer (debugging left, running right):

    Read the article

  • How to run an XSL file using JavaScript / HTML

    - by B. Kumar
    I want to run an XSL file using a JavaScript function. I wrote a JavaScript function which works well in Firefox and Chrome but does not work in Internet Explorer:

        function loadXMLDoc(dname)
        {
          if (window.XMLHttpRequest)
          {
            xhttp = new XMLHttpRequest();
          }
          else
          {
            xhttp = new ActiveXObject("Microsoft.XMLHTTP");
          }
          xhttp.open("GET", dname, false);
          xhttp.send("");
          return xhttp.responseXML;
        }

        function displayResult()
        {
          xml = loadXMLDoc("NewXml.xml");
          xsl = loadXMLDoc("NewFile.xsl");
          // code for IE
          if (window.ActiveXObject)
          {
            ex = xml.transformNode(xsl);
            document.getElementById("example").innerHTML = ex;
          }
          // code for Mozilla, Firefox, Opera, etc.
          else if (document.implementation && document.implementation.createDocument)
          {
            xsltProcessor = new XSLTProcessor();
            xsltProcessor.importStylesheet(xsl);
            resultDocument = xsltProcessor.transformToFragment(xml, document);
            document.getElementById("example").appendChild(resultDocument);
          }
        }

    Please help me by modifying this code, or suggest other code, so that it works with Internet Explorer. Thanks

    Read the article

  • How to make a product catalog in C#?

    - by Ervin
    I need to develop a product catalog application (about 4000 products) which would be given to clients on CD or DVD. The catalog already exists in webpage format using PHP and MySQL. IMPORTANT: the application is given to clients who might have an old PC and an old system; as minimal requirements I would assume Windows XP and Internet Explorer 6 (if needed). I need the following features:

    1. search (by productID AND by keyword)
    2. print (by selecting multiple products)
    3. shopping cart (building a list which will be sent to an email address if there is any Internet connection on the computer)

    When I was asked to do it I had 2 days to realise a very basic version, so I took the whole website, exported it as HTML pages, and developed an application in C# which contains an embedded browser. So the whole website is now static and put on a CD. Everything fine so far. Now here are the problems:

    1. The search option was realized by parsing the HTML files and reading the productID or looking for keywords inside of them. Put on a CD it was extremely slow (searching in 600 MB of HTML files). FOR THIS I WOULD NEED A SOLUTION WITH A STATIC DATABASE (USING ACCESS OR SOMETHING) TO HAVE INDEXED ROWS, SO THE SEARCH COULD BE A VERY FAST ONE.
    2. The printing option was a simple call to the embedded Internet Explorer print functions. Here are two problems: a) users need IE7 for printing the website scaled (FIT TO PAGE), otherwise the edges of the page are cut off; b) users of this app do not have even basic PC skills, so they can't change the printing settings, and the page numbers and titles will appear in the header and footer. QUESTION: can I set these settings from CSS for printing?
    3. I couldn't make a shopping cart as I don't use a database, so I have static pages and the content is inside the HTML.

    QUESTION: WHICH ARE THE BEST SOLUTIONS FOR THE PROBLEMS DESCRIBED ABOVE? PLEASE ANSWER EVEN IF YOUR ANSWER IS FOR ONE QUESTION ONLY. THANKS
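
    A minimal sketch of the kind of local indexed lookup described in problem 1, assuming the catalog data has been exported into an Access file shipped on the CD (the file name catalog.mdb and the Products table with ProductID, Title and Keywords columns are illustrative assumptions, not part of the original project):

        using System;
        using System.Data.OleDb;

        class CatalogSearch
        {
            // Mode=Read asks Jet to open the file read-only, which is all a CD allows anyway.
            const string ConnStr =
                @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=catalog.mdb;Mode=Read";

            public static void SearchByKeyword(string keyword)
            {
                using (var conn = new OleDbConnection(ConnStr))
                using (var cmd = new OleDbCommand(
                    "SELECT ProductID, Title FROM Products WHERE Keywords LIKE ?", conn))
                {
                    // OleDb uses positional '?' parameters; the name below is only a label.
                    cmd.Parameters.AddWithValue("@keyword", "%" + keyword + "%");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["ProductID"], reader["Title"]);
                    }
                }
            }
        }

    Whether the file format is Access, SQL Server Compact or SQLite, the idea is the same: an indexed column lookup instead of scanning 600 MB of HTML on every search.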

    Read the article

  • Design suggestions for creating document management structure using hidden shares.

    - by focus.nz
    I need to add some document management functionality to my software. Documents will be grouped by company name and project name. The folders need to be accessed by the application using the ID numbers of clients/projects, but also easily browsed by the end user using Windows Explorer. Clients and projects will be stored in a database. I am thinking of having the software create the folders using the friendly name and then using a hidden share with the ID number for the software to access the files. The folder structure would be something like this:

        -- Company 1 (Company-1234$)
           -- Project 101 (Project-101$)
           -- Project 102 (Project-102$)
           -- Project 103 (Project-103$)
        -- Company 2 (Company-5678$)
           -- Project 201 (Project-201$)
           -- Project 202 (Project-202$)
           -- Project 203 (Project-203$)

    So in the example above there would be a company called "Company 1" with an ID of "1234". When browsing the folders using Windows Explorer the user would see \\ServerName\Documents\Company1, and you could also access the same folder from \\ServerName\Documents\Company-1234$. By using the hidden share, if the company name changes or it is renamed for some reason, the link in the application does not break, because the application uses the hidden share based on the ID, which never changes. Will having hundreds (maybe thousands) of hidden shares on a server cause a huge performance hit? Does anyone have any suggestions or alternatives to provide this feature?
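
    If the application ends up creating these shares itself, one possible route is WMI's Win32_Share class; a rough sketch (the folder path and share name are placeholders, and whatever account creates the share needs administrative rights on the server):

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class ShareHelper
        {
            // Creates a hidden share (a name ending in '$') for an existing folder.
            public static void CreateHiddenShare(string folderPath, string shareName)
            {
                var shareClass = new ManagementClass("Win32_Share");
                ManagementBaseObject inParams = shareClass.GetMethodParameters("Create");
                inParams["Path"] = folderPath;      // e.g. @"D:\Documents\Company 1"
                inParams["Name"] = shareName;       // e.g. "Company-1234$" - the '$' hides it
                inParams["Type"] = (uint)0;         // 0 = disk drive share
                inParams["Description"] = "Created by the document management application";

                ManagementBaseObject outParams = shareClass.InvokeMethod("Create", inParams, null);
                uint returnValue = (uint)outParams["ReturnValue"];
                if (returnValue != 0)               // 0 = success
                    throw new InvalidOperationException("Win32_Share.Create failed with code " + returnValue);
            }
        }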

    Read the article

  • Why does my call to Activator.CreateInstance intermittently fail?

    - by Daniel Stutzbach
    I'm using the following code to access the Windows Explorer shell's band site service:

        Guid GUID_TrayBandSiteService = new Guid(0xF60AD0A0, 0xE5E1, 0x45cb, 0xB5, 0x1A, 0xE1, 0x5B, 0x9F, 0x8B, 0x29, 0x34);
        Type shellTrayBandSiteService = Type.GetTypeFromCLSID(GUID_TrayBandSiteService, true);
        site = Activator.CreateInstance(shellTrayBandSiteService) as IBandSite;

    Mostly, it works great. A very small percentage of the time (less than 1%), the call to Activator.CreateInstance throws the following exception:

        System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {F60AD0A0-E5E1-45CB-B51A-E15B9F8B2934} failed due to the following error: 80040154.
           at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck)
           at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache)
           at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
           at System.Activator.CreateInstance(Type type, Boolean nonPublic)

    I've looked up the error code, and it appears to indicate that the service isn't registered. That's nonsense in my case, since I'm trying to access a service provided by the operating system (explorer.exe, to be specific). I'm stumped. What might cause Activator.CreateInstance to fail, but only rarely?
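
    Nothing in the question pins down the cause, but if the failure really is transient (for example explorer.exe restarting and briefly not serving its class factory), a simple retry around the call is sometimes enough. A sketch only; the attempt count and delay are arbitrary assumptions:

        using System;
        using System.Runtime.InteropServices;
        using System.Threading;

        static class ComRetry
        {
            const int Attempts = 5;

            public static object CreateWithRetry(Type comType)
            {
                for (int i = 0; i < Attempts; i++)
                {
                    try
                    {
                        return Activator.CreateInstance(comType);
                    }
                    catch (COMException)
                    {
                        if (i == Attempts - 1)
                            throw;              // give up after the last attempt
                        Thread.Sleep(500);      // arbitrary back-off before retrying
                    }
                }
                return null;                    // unreachable
            }
        }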

    Read the article

  • Sending basic authentication information via form

    - by VolatileStorm
    I am working on a site that currently uses a basic-authentication dialog box login system, that is, the type of dialog that you get if you go here: http://www.dur.ac.uk/vm.boatclub/password/index.php I did not set this system up and am not in a position to easily/quickly work around it, but it DOES work. The issue, however, is that the dialog box is not very helpful in telling you what login information you have to use (that is, which username and password combination), and so I would like to replace it with a form. I had been thinking that this wasn't possible, but I wanted to ask in order to find out. Is it possible to set up an HTML form that sends the data to the server such that it accepts it in the same way that it would using this dialog box? Alternatively, is it possible to set up a PHP script that would take normal form data and process it somehow, passing it to the server such that it logs in?

    Edit: After being told that this is basic authentication, I went around and managed to find a way that works and keeps the user persistently logged in. However, this does not work in Internet Explorer. The solution was simply to redirect the user to:

        http://username:password@www.dur.ac.uk/vm.boatclub/password/index.php

    but Internet Explorer removed support for this form of URL about 3 years ago because of its use in phishing. Is there a way to use JavaScript to get the browser to access the site in this way? Or will I simply have to change my UI?

    Read the article

  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer, coupled with its total lack of support for recent standards, makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website. The game will need to be pretty dynamic client-side, with animations and other eye-candy, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a bit of a stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; gotta see what IE9 will bring), so would it be better to just do the whole thing in Flash? I know this means requiring the plug-in, AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS, and PHP can deal with it easily). Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, but then why not just provide it to all browsers), or to not support it at all and require the use of other browsers, although this is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client-side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?

    Read the article

  • Use of the 'this' keyword in JavaScript in IE?

    - by Ron
    Is there a workaround for Internet Explorer to implement the functionality offered by the 'this' JavaScript keyword, i.e. to get the DOM element that triggered the event? My problem scenario is: I have a variable number of text fields in the HTML form, like

        <input type="text" id="11">
        <input type="text" id="12">
        ...

    I need to handle the "onchange" event for each text field, and the handling depends on the 'id' of the field that triggered the event. So far I understand that my options are:

    1) Attach a dedicated event handler to each text field, so if I have n fields, I have n different functions, something like:

        <input type="text" id="11" onchange="function11();">
        <input type="text" id="12" onchange="function12();">

    but the text fields are added and removed dynamically, so a better way would be to have one generic function instead.

    2) Use the 'this' keyword, like:

        <input type="text" id="11" onchange="functionGeneric(this);">
        <input type="text" id="12" onchange="functionGeneric(this);">

    But this option does not work with Internet Explorer. Can anyone suggest a workaround for getting it to work in IE, or some other solution that can be applied here? Thanks.

    Read the article

  • UML Modelling in C++Builder 2010 Professional

    - by Gordon Brandly
    I'd like to do some basic class diagram UML models in the Pro version of C++Builder 2010. Embarcadero has a C++Builder Features Matrix document, one line of which says "UML Code Visualization – at any time, get a UML model view of your source code" and has a check in the "Professional" column of that table -- I assume this means it should be available to me. Yet, when I open an existing project and do a View | Model View, there's nothing in the Model View window. The only diagram I can find is on the Graph tab of the C++ Class Explorer. I wouldn't call that a UML diagram myself -- is that what Embarcadero is referring to? Embarcadero's table shows that many UML diagrams are not available in Pro, but it looks to me like Class Diagrams should be available. Other lines in that same table indicate that both "Full two-way class diagrams with synchronization between code and diagrams" and "Diagram hyper-linking and annotations" are also supposed to be available in Pro. The Class Explorer graph is one-way only as far as I can tell, so I hope they're referring to something else I haven't been able to find so far. Thanks for any insight into this.

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
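
    For the hard-coded pathToSite, one approach that sidesteps the checkout location entirely is to resolve the web project relative to the test assembly's own output directory, since both live in the same working copy. A sketch only; the "..\..\.."  hop count is an assumption about the repository layout:

        using System;
        using System.Diagnostics;
        using System.IO;

        public static class TestWebServer
        {
            public static void StartWebServer()
            {
                // Resolve the site folder relative to wherever the test assembly runs from,
                // so the same code works locally and inside the TeamCity checkout directory.
                string baseDir = AppDomain.CurrentDomain.BaseDirectory;
                string pathToSite = Path.GetFullPath(
                    Path.Combine(baseDir, @"..\..\..\FrontEnd\MyProject.FrontEnd.Web"));

                var webServer = new Process
                {
                    StartInfo = new ProcessStartInfo
                    {
                        Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                        FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                    }
                };
                webServer.Start();
            }
        }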

    Read the article

  • How can I automatically elevate a COM interface used for automation?

    - by Jim Flood
    I have a Windows service built with ATL to expose a LocalServer32 COM interface for a set of admin commands used for configuring the service, and these can be used from VBScript, for example:

        Set myObj = WScript.CreateObject("MySvc.Administrator")
        myObj.DoSomething()

    I want DoSomething to run elevated, and I would like the UAC prompt to come up automatically when this is called by the VBScript. Is this possible? I know I can run the script in an elevated command shell, and that I can use

        objShell.ShellExecute WScript.FullName, Chr(34) & WScript.ScriptFullName & Chr(34), vbNullString, "runas"

    for example, to run the VBScript itself elevated, and either of those works fine -- the COM method finds itself elevated. However, AFAIK getting an elevated Explorer window on the desktop is convoluted (it's not as simple as right-clicking Start/Accessories/Windows Explorer/Run as Administrator, which doesn't actually elevate). I want a user in the local admin group to be able to drag-and-drop files and folders onto the script, and then have the script call the admin COM interface with those pathnames as arguments. (And I am hoping for something simpler than monkeying around with the args and using ShellExecute "runas".) I've tried setting UAC Execution Level to requireAdministrator in the service EXE's manifest, and setting Elevated/Enabled = 1 and LocalizedString in the registry for the MySvc.Administrator class, and these don't do the trick.

    Read the article

  • Gacutil.exe successfully adds assembly, but assembly missing from GAC. Why?

    - by Ben McCormack
    I'm running gacutil.exe from within the Visual Studio Command Prompt 2010 to register a DLL (CatalogPromotion.dll) in the GAC. After running the utility, it says "Assembly successfully added to the cache", and running gacutil /l CatalogPromotionDll shows that the GAC contains the assembly, but I can't see the assembly when I navigate to C:\WINDOWS\assembly in Windows Explorer. Why can't I see the assembly in C:\WINDOWS\assembly from Windows Explorer when I can see it using gacutil.exe? Background: here's what I typed into the command prompt for VS Tools:

        C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /i CatalogPromotionDll.dll
        Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
        Copyright (c) Microsoft Corporation.  All rights reserved.

        Assembly successfully added to the cache

        C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /l CatalogPromotionDll
        Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
        Copyright (c) Microsoft Corporation.  All rights reserved.

        The Global Assembly Cache contains the following assemblies:
          CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9188a175f199de4a, processorArchitecture=MSIL

        Number of items = 1

    However, the assembly doesn't show up in C:\WINDOWS\assembly.
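
    A quick way to confirm, independently of Explorer, that the assembly really is resolvable from the GAC is to load it by its full display name (copied from the gacutil /l output above) and inspect where it came from; this is only a diagnostic sketch:

        using System;
        using System.Reflection;

        class GacCheck
        {
            static void Main()
            {
                // Load by full display name and report the resolved location
                // plus whether the loaded copy lives in the GAC.
                Assembly asm = Assembly.Load(
                    "CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, " +
                    "PublicKeyToken=9188a175f199de4a");

                Console.WriteLine("Loaded from: " + asm.Location);
                Console.WriteLine("In GAC: " + asm.GlobalAssemblyCache);
            }
        }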

    Read the article

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required. I am developing for the 160 GB Archos 5 Internet Tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available; updating the version of Android is not an option at this point. Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the Mobile Media Device (MTP) mode. I have followed all the instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the sizes of their parent folders suggest they do not exist. I have tried running over ADB, checked logcat, tried a (signed) release version and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but the folder is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and a DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. This is the log from the execution (I printed the context log to the console):

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query) I see that the table is empty, every time. I am using a SQL Server database file (.mdf).
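
    One way to see where the rows actually go is to re-read through a brand new DataContext in the same run and print the connection string it uses; with .mdf files attached via |DataDirectory| it is common for the application and Server Explorer to be looking at two different copies of the file. A diagnostic sketch only, not a confirmed cause:

        using System;
        using System.Linq;

        static class InsertCheck
        {
            public static void VerifyInsert()
            {
                using (var context = new CenasDataContext())
                {
                    context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
                    context.SubmitChanges();
                }

                // Re-read through a fresh context so nothing comes from the object cache,
                // and print which database file the connection actually points at.
                using (var check = new CenasDataContext())
                {
                    Console.WriteLine("Connection: " + check.Connection.ConnectionString);
                    Console.WriteLine("Rows in Cenas: " + check.Cenas.Count());
                }
            }
        }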

    Read the article

  • When downloading a file using FileStream, why does the page error message refer to the aspx page name, not the file?

    - by StuperUser
    After building a file path (path, below) in a string (I am aware of Path in System.IO, but am using someone else's code and do not have the opportunity to refactor it to use Path), I am using a FileStream to deliver the file to the user (see below):

        FileStream myStream = new FileStream(path, FileMode.Open, FileAccess.Read);
        long fileSize = myStream.Length;
        byte[] Buffer = new byte[(int)fileSize + 1];
        myStream.Read(Buffer, 0, (int)myStream.Length);
        myStream.Close();
        Response.ContentType = "application/csv";
        Response.AddHeader("content-disposition", "attachment; filename=" + filename);
        Response.BinaryWrite(Buffer);
        Response.Flush();
        Response.End();

    I have seen from http://stackoverflow.com/questions/736301/asp-net-how-to-stream-file-to-user reasons to avoid use of Response.End() and Response.Close(). I have also seen several articles about different ways to transmit files, and have diagnosed and found a solution to the problem (HTTPS and HTTP headers) with a colleague. However, the error message that was being displayed was not about access to the file at path, but about the aspx file.

    Edit: The error message is:

        Internet Explorer cannot download MyPage.aspx from server.domain.tld
        Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.

    (Page name and address anonymised.) Why is this? Is it because the contents of the file come from the HTTP response's Flush() method rather than from a file being accessed at its own address?
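
    For comparison, a sketch of the streaming variant those articles usually describe, using Response.TransmitFile and CompleteRequest instead of buffering the whole file and calling Response.End; the cache header is included because older IE versions refuse HTTPS downloads sent with no-cache headers and then report the error against the .aspx page (path and filename are the same variables as in the question, and this is an illustration rather than the original author's code):

        using System.IO;
        using System.Web;

        public partial class DownloadPage : System.Web.UI.Page
        {
            protected void SendFile(string path, string filename)
            {
                Response.Clear();
                Response.ContentType = "application/csv";
                Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);
                Response.AddHeader("Content-Length", new FileInfo(path).Length.ToString());

                // Allow private caching: older IE versions cannot save HTTPS downloads
                // that arrive with no-cache headers, and the failure names the .aspx page.
                Response.Cache.SetCacheability(HttpCacheability.Private);

                // Stream the file from disk without loading it all into memory first.
                Response.TransmitFile(path);

                // End the response without the ThreadAbortException that Response.End throws.
                HttpContext.Current.ApplicationInstance.CompleteRequest();
            }
        }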

    Read the article

  • JavaScript, PHP, cookies

    - by kennedy
    When I declare mac = "123", Internet Explorer and Firefox keep refreshing non-stop, and if I declare mac = getMacAddress() it returns a value of 1. I am able to do a document.write(getMacAddress()) and it displays the MAC address nicely. 1) Why does the browser keep refreshing non-stop when I set the value manually to "123"? 2) Why is document.write able to display the value, yet when I store it in the cookie it somehow does not get captured, and a value of "1" is returned? Can anyone help?

    create_users.php

        <script language="JavaScript">
        function getMacAddress(){
          document.macaddressapplet.setSep( "-" );
          return (document.macaddressapplet.getMacAddress());
        }
        function setCookie(c_name,value)
        {
          document.cookie = c_name + "=" +escape(value);
        }
        //error checking
        //var mac = getMacAddress();
        var mac = "123";
        setCookie('cookie_name',mac);
        window.location = "checkAvailability.php";
        </script>

    checkAvailability.php

        $javascript_cookie_value = isset($_COOKIE["cookie_name"]) ? $_COOKIE["cookie_name"] : 1;
        mysql_query("INSERT INTO test (mac) VALUES ('$javascript_cookie_value')");

    Read the article

  • Domain redirect problem with jQuery / JavaScript

    - by GiovanniDema
    Hi guys, first time here. I have a strange problem. I have a fullscreen image scaler JavaScript (as on the GOTOCHINA website) that works very well on my website. Then I purchased a domain redirect pointing at my website, and when redirecting, Internet Explorer 7 and Internet Explorer 8 suddenly give me this error:

        Message: is not a valid argument.
        Line: 34
        Char: 17
        URI: http://*****/scaler.js

    The script is:

        var db=document.body;
        var imag=document.getElementById('wallpaper');
        var dbsize={};
        var imgsrc=imag.src;
        var keyStop=function(e){
          var e=window.event||e||{};
          var tag=e.target.tagName.toLowerCase();
          if(tag!='textarea'&&!(tag=='input'&&(e.target.type=='text'||e.target.type=='password'))){
            if(e.keyCode==32||e.keyCode==39||e.keyCode==40){
              if(e.preventDefault)e.preventDefault();
              else e.returnValue=false;
            }
          }
        }
        if(this.addEventListener)window.addEventListener('keydown',keyStop,false);
        else window.attachEvent('onkeydown',keyStop);
        setInterval(function(){
          window.scrollTo(0,0);
          if(imag.complete){
            if(db.clientWidth!=dbsize.w||db.clientHeight!=dbsize.h||imag.src!=imgsrc){
              imgsrc=imag.src;
              var dbsizew=db.clientWidth;
              var dbsizeh=db.clientHeight;
              var newwidth=Math.round(dbsizeh*(imag.offsetWidth/imag.offsetHeight));
              var nextvar=dbsizew>newwidth?dbsizew:newwidth;
              imag.style.width=nextvar+'px';
            }
          }
        },300);

    In other words, when I open the official website everything works correctly; when I open the redirect domain pointing at the official website, the error above appears. The failing line is exactly this one: imag.style.width=nextvar+'px'; Thanks in advance, Giovanni

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them. Most of them are hanging in the TIME_WAIT state. It turns out that a connection in the TIME_WAIT state occupies a socket until the 60-second time-out has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I have made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in the TIME_WAIT state for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or to make it shorter, in order to free the socket for new connections? I understand why a TCP/IP connection enters the TIME_WAIT state, but I don't understand why Internet Explorer does not close the connection once the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). The Apache "KeepAlive" option is set to 0.

    Read the article

  • Silverlight 4, Google Chrome, and HttpWebRequest problem

    - by synergetic
    My Silverlight 4 application hosted in ASP.NET MVC 2 works fine when used through Internet Explorer 8, both on the development server and on the remote web server (IIS 6.0). However, when I try to browse through Google Chrome (version 5.0.375.70) it throws a "remote server returned not found" error. The code causing the problem is the following:

        public class MyWebClient
        {
            private HttpWebRequest _request;
            private Uri _uri;
            private AsyncOperation _asyncOp;

            public MyWebClient(Uri uri)
            {
                _uri = uri;
            }

            public void Start(XElement data)
            {
                _asyncOp = AsyncOperationManager.CreateOperation(null);
                _data = data;
                _request = (HttpWebRequest)WebRequest.Create(_uri);
                _request.Method = "POST";
                _request.BeginGetRequestStream(new AsyncCallback(BeginRequest), null);
            }

            private void BeginRequest(IAsyncResult result)
            {
                Stream stream = _request.EndGetRequestStream(result);
                using (StreamWriter writer = new StreamWriter(stream))
                {
                    writer.Write(((XElement)_data).ToString());
                }
                stream.Close();
                _request.BeginGetResponse(new AsyncCallback(BeginResponse), null);
            }

            private void BeginResponse(IAsyncResult result)
            {
                HttpWebResponse response = (HttpWebResponse)_request.EndGetResponse(result);
                if (response != null)
                {
                    //process returned data
                    ...
                }
            }
            ...
        }

    In short, the above code sends some XML data to the web server (to an ASP.NET MVC controller) and gets back processed data. It works when I use Internet Explorer 8. Can someone please explain what the problem is with Google Chrome?
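
    One thing sometimes tried when a Silverlight HTTP call behaves differently per browser is to opt out of the browser networking stack and use Silverlight's client HTTP stack, registered once at startup; this is a sketch of that switch, not a confirmed fix for the Chrome error above:

        using System.Net;
        using System.Net.Browser;

        public static class HttpStackConfig
        {
            // Call once, e.g. from Application_Startup, before any HttpWebRequest
            // is created, so requests bypass the hosting browser's own HTTP stack.
            public static void UseClientHttpStack()
            {
                WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
                WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
            }
        }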

    Read the article

  • Dynamically insert new rows in a table (JavaScript)?

    - by Karandeep Singh
    <script type="text/javascript" language="javascript">
    function addNewRow()
    {
      var table = document.getElementById("table1");
      var tr = table.insertRow();
      var td = tr.insertCell();
      td.innerHTML= "a";
      td = tr.insertCell();
      td.innerHTML= "b";
      td = tr.insertCell();
      td.innerHTML= "c";
      td = tr.insertCell();
      td.innerHTML= "d";
      td = tr.insertCell();
      td.innerHTML= "e";
    }
    </script>
    <body>
    <table id="table1" border="1" cellpadding="0" cellspacing="0" width="100%">
      <tr id="row1">
        <td>1</td>
        <td>2</td>
        <td>3</td>
        <td>4</td>
        <td>5</td>
      </tr>
    </table>
    <input type="button" onClick="addNewRow()" value="Add New"/>
    </body>

    This example dynamically inserts a new row and cells into the table, but its behavior differs between browsers:

    Internet Explorer: adds the row at the end, and the new cells start from the first column.
    Chrome/Safari: adds the new row at the beginning, and the new cells start from the end.
    Mozilla Firefox: it does not work at all.

    Sir, I want the new row added at the end and the new cells starting from the first column (as in Internet Explorer) in all browsers. If you have a solution for making the behavior consistent, please tell me. Thanks.

    Read the article

  • FileSystemWatcher Work is Done?

    - by Snowy
    I set up a FileSystemWatcher on a local filesystem directory. I only want to know when files are added to the directory so they can be moved to another filesystem. I seem to be able to detect when the first file is in, but actually I want to know when all files from a given copy operation are done. If I used Windows Explorer to copy files from one directory to another, Explorer would tell me that there are n seconds left in the transfer, so while there is some notion of begin-transfer and end-transfer for each file, there appears to be a begin-transfer and end-transfer for all files as well. I wonder if there is something similar I can do just with the .NET Framework. I would like to know when "all" files are in, and not just a single file in a "transaction". If there is nothing baked in, maybe I should come up with some kind of waiting/counting in order to only do my work when a job is "done". Not sure if I'm making 100% sense on this one; please comment if not. Thanks.
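
    FileSystemWatcher itself has no notion of a batch ending, so a common workaround is a quiet-period timer: every Created/Changed event pushes a deadline back, and the batch is treated as finished once no events have arrived for a while. A rough sketch, with the 2-second quiet period and the watched path being arbitrary choices:

        using System;
        using System.IO;
        using System.Timers;

        class BatchWatcher
        {
            static readonly Timer _quietTimer = new Timer(2000) { AutoReset = false };

            static void Main()
            {
                var watcher = new FileSystemWatcher(@"C:\incoming");   // path is illustrative
                watcher.Created += OnFileEvent;
                watcher.Changed += OnFileEvent;
                watcher.EnableRaisingEvents = true;

                _quietTimer.Elapsed += (s, e) =>
                    Console.WriteLine("No events for 2 seconds - treating the copy as done.");

                Console.ReadLine();
            }

            static void OnFileEvent(object sender, FileSystemEventArgs e)
            {
                // Restart the quiet-period countdown on every event.
                _quietTimer.Stop();
                _quietTimer.Start();
            }
        }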

    Read the article

  • Browse for folder can't see camera device

    - by Robert Frank
    In Delphi 2010, I want to allow users to browse for and select a folder. The folder is on a device (?) created by a DSLR: the folder is visible in Windows Explorer as shown above, and the folder is visible in a TOpenDialog, allowing them to browse into the folder and choose a file. Unfortunately, I have been unable to get either SHBrowseForFolder (code I found on the web but don't understand) or SelectDirectory to see the camera device or the folder beneath it. (Side note: IMO, SelectDirectory is a far nicer UI, since the user can see the files in the folders while browsing.) I assume this has to do with the fact that the folder is on a device (?) created by the camera software. I've seen some tricks where you call TOpenDialog to browse for folders with '*.' and then ExtractFileDir on the result, but that's not robust or, IMO, a good UI. What I'm looking for is a "Browse for folder" that can see the same devices (including the camera device) that TOpenDialog and Windows Explorer can see. (Ideally, it would have the nice appearance of the one below!) Any suggestions? Image of MS Word's folder browsing in Win7. (I wonder if it looks this pretty in XP.)

    Read the article
