Search Results

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple: a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Sign In application, the application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

    Symptoms: When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

    Troubleshooting: Enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the network capture revealed that the Sign In application, when sending its request to the ISA Server, does not send the domain credentials; as a result, ISA Server responds with HTTP 407 Proxy Authentication Required, listing the supported authentication mechanisms. The application is expected to send the logged-on user's domain credentials in response to this request, but it fails to do so. A bit of research on the Internet revealed that the Microsoft Online Services Sign In application does not support outbound Internet proxy authentication by default. For it to send the logged-on user's domain credentials, we had to change its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details are documented on Microsoft TechNet:

    Configure your outbound authenticating proxy server
    http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

    After the above problem was addressed, we were still unable to use the Sign In application; it failed with the same error. Analysis of another network capture revealed that the application was now sending the required credentials, but the connection terminated at a later stage. Enabled verbose logging for the Sign In application and reproduced the problem. The logs revealed a time difference of around seven minutes between the local client and the Microsoft Online Services server, which is above the acceptable clock skew of five minutes. Excerpt from the Microsoft Online Services Sign In application verbose log:

        1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

    Although the Windows 7 clients successfully synchronized time with the domain controller for the domain, the domain controller itself was not configured to synchronize with an external NTP server. This caused a gradual drift in time on the network, resulting in the above issue. Reconfigured the domain controller holding the PDC FSMO role to synchronize with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.

    Configure the time source for the forest: Windows Time Service
    http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx

    Forced synchronization of Windows time using the command w32tm /resync on the domain controller, and later on the clients, which corrected the seven-minute difference. This resolved the problem with logon to Microsoft Online Services Sign In.
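
    For reference, pointing the PDC emulator at an external time source is normally done with w32tm; a minimal sketch of the commands involved (worth verifying the exact flags against the TechNet article above):

        w32tm /config /manualpeerlist:time.nist.gov /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time
        w32tm /resync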

  • Google Analytics recording event based on <a> title attribute

    - by rlsaj
    I am declaring:

        var title = (typeof(el.attr('title')) != 'undefined') ? el.attr('title') : "";

    and then have the following:

        else if (title.match(/^"Matching Content"\:/i)) {
            elEv.category = "Matching Content Click";
            elEv.action = "click-Matching-Content";
            elEv.label = href.replace(/^https?\:\/\//i, '');
            elEv.non_i = true;
            elEv.loc = href;
        }

    However, using the Google Analytics debugger, this event is never recorded. Any suggestions? The complete function is:

        if (typeof jQuery != 'undefined') {
            jQuery(document).ready(function gLinkTracking($) {
                var filetypes = /\.(avi|csv|dat|dmg|doc.*|exe|flv|gif|jpg|mov|mp3|mp4|msi|pdf|png|ppt.*|rar|swf|txt|wav|wma|wmv|xls.*|zip)$/i;
                var baseHref = '';
                if (jQuery('base').attr('href') != undefined) baseHref = jQuery('base').attr('href');
                jQuery('a').on('click', function (event) {
                    var el = jQuery(this);
                    var track = true;
                    var href = (typeof(el.attr('href')) != 'undefined') ? el.attr('href') : "";
                    var title = (typeof(el.attr('title')) != 'undefined') ? el.attr('title') : "";
                    var isThisDomain = href.match(document.domain.split('.').reverse()[1] + '.' + document.domain.split('.').reverse()[0]);
                    if (!href.match(/^javascript:/i)) {
                        var elEv = [];
                        elEv.value = 0, elEv.non_i = false;
                        if (href.match(/^mailto\:/i)) {
                            elEv.category = "Email link";
                            elEv.action = "click-email";
                            elEv.label = href.replace(/^mailto\:/i, '');
                            elEv.loc = href;
                        } else if (title.match(/^"Matching Content"\:/i)) {
                            elEv.category = "Matching Content Click";
                            elEv.action = "click-Matching-Content";
                            elEv.label = href.replace(/^https?\:\/\//i, '');
                            elEv.non_i = true;
                            elEv.loc = href;
                        } else if (href.match(filetypes)) {
                            var extension = (/[.]/.exec(href)) ? /[^.]+$/.exec(href) : undefined;
                            elEv.category = "File Downloaded";
                            elEv.action = "click-" + extension[0];
                            elEv.label = href.replace(/ /g, "-");
                            elEv.loc = baseHref + href;
                        } else if (href.match(/^https?\:/i) && !isThisDomain) {
                            elEv.category = "External link";
                            elEv.action = "click-external";
                            elEv.label = href.replace(/^https?\:\/\//i, '');
                            elEv.non_i = true;
                            elEv.loc = href;
                        } else if (href.match(/^tel\:/i)) {
                            elEv.category = "Telephone link";
                            elEv.action = "click-telephone";
                            elEv.label = href.replace(/^tel\:/i, '');
                            elEv.loc = href;
                        } else track = false;
                        if (track) {
                            _gaq.push(['_trackEvent', elEv.category.toLowerCase(), elEv.action.toLowerCase(), elEv.label.toLowerCase(), elEv.value, elEv.non_i]);
                            if (el.attr('target') == undefined || el.attr('target').toLowerCase() != '_blank') {
                                setTimeout(function() { location.href = elEv.loc; }, 400);
                                return false;
                            }
                        }
                    }
                });
            });
        }
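
    One thing that stands out: the pattern /^"Matching Content"\:/i only matches if the title attribute literally begins with a double quote, i.e. title='"Matching Content": ...'. If the markup is the more natural title="Matching Content: ...", the branch never fires. A sketch of the fix, assuming the latter markup:

        // drop the literal quotes from the pattern (and tolerate optional
        // whitespace before the colon) so title="Matching Content: foo" matches
        else if (title.match(/^Matching Content\s*\:/i)) {
            elEv.category = "Matching Content Click";
            elEv.action = "click-Matching-Content";
            elEv.label = href.replace(/^https?\:\/\//i, '');
            elEv.non_i = true;
            elEv.loc = href;
        }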

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone once put it to me: if you were starting a business right now, would you bother setting up an Exchange server to manage your email, or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum toward having external services manage more and more of the infrastructure that's not business unique, why would you burn up a server and a license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it, and it will be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now.

    The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting: your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting: keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

  • Maven error: Unable to get resource / Server redirected too many times

    - by tewe
    Our proxy went down and I tried to update dependencies with Maven while it was off. Since then I can't download anything with Maven; I get this error for everything. I have tried the -U option, deleting my local repository, and different Maven versions (2.0.9, 2.2.1), but nothing works. Earlier it also said 'repository will be blacklisted' for all of them. Any idea how to solve this?

        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.1/maven-compiler-plugin-2.1.pom
        [WARNING] Unable to get resource 'org.apache.maven.plugins:maven-compiler-plugin:pom:2.1' from repository central (http://repo1.maven.org/maven2):
        Error transferring file: Server redirected too many times (20)
        org.apache.maven.plugins:maven-compiler-plugin:pom:2.1 from the specified remote repositories:
          jboss-snapshot (http://snapshots.jboss.org/maven2),
          central (http://repo1.maven.org/maven2),
          JBoss Repo (http://repository.jboss.com/maven2),
          spring-maven-snapshot (http://maven.springframework.org/snapshot),
          com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
          com.springsource.repository.bundles.snapshot (http://repository.springsource.com/maven/bundles/snapshot),
          jboss (http://repository.jboss.com/maven2),
          com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
          jboss-snapshot-plugins (http://snapshots.jboss.org/maven2),
          com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
          jboss-plugins (http://repository.jboss.com/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 25 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 27 more
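
    The "Server redirected too many times" failure usually means every request is being answered with a redirect to a proxy login page rather than with the artifact. Since the proxy setup changed, declaring (or correcting) the proxy in ~/.m2/settings.xml is the usual first step; a sketch with placeholder host and credentials:

        <settings>
          <proxies>
            <proxy>
              <id>corp-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <!-- omit username/password if the proxy needs no authentication -->
              <username>proxyuser</username>
              <password>secret</password>
            </proxy>
          </proxies>
        </settings>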

  • TeamCity stopped working once I added NUnit to the mix

    - by Dave
    I'm struggling a lot trying to get our build server going. I am currently running tests in a Windows XP virtual machine, and have installed TeamCity v5.0.3, build 10821. I am using NUnit v2.5.3. I finished the initial setup with TeamCity without any issues at all, provided that I use the sln2008 build runner, which makes the entire process almost brainless. It's really quite nice that way, and very satisfying to see your first successful automated build.

    Now it's time to kick it up a notch, and I wanted to get NUnit working. I keep the NUnit 2.5.3 assemblies in an external libs folder in SVN, so I checked that out onto the test system. I selected NUnit 2.5.3 from the build runner options, as the online instructions had recommended. But when I build, I get the following errors:

        Window1.xaml.cs(14,7): error CS0246: The type or namespace name 'NUnit' could not be found (are you missing a using directive or an assembly reference?)
        Window1.xaml.cs(28,10): error CS0246: The type or namespace name 'Test' could not be found (are you missing a using directive or an assembly reference?)
        Window1.xaml.cs(28,10): error CS0246: The type or namespace name 'TestAttribute' could not be found (are you missing a using directive or an assembly reference?)

    Everything compiles great in the IDE. From finding blog posts and submitting comments, I got some advice and confirmed the following:

    - I have the HintPath value set properly in my project file (it points to the external lib)
    - I can do a full Release and Debug build from the command line using msbuild
    - I have tried using the NUnit installer so nunit.framework.dll gets registered into the GAC
    - I have changed the build agent's logon account to be a user on the test system, rather than LOCAL SYSTEM

    Nothing seems to help... can anyone else here offer me some advice on what to try next?
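
    Since the solution compiles locally and from the msbuild command line but not on the agent, one classic culprit is a HintPath that resolves relative to a different checkout directory on the build agent. A sketch of what to double-check in the .csproj; the relative path here is a placeholder for wherever the external libs folder lands in the agent's work directory:

        <Reference Include="nunit.framework">
          <SpecificVersion>False</SpecificVersion>
          <!-- resolved relative to the .csproj location, not the solution or agent root -->
          <HintPath>..\libs\NUnit-2.5.3\bin\net-2.0\framework\nunit.framework.dll</HintPath>
        </Reference>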

  • Selectively suppress XML Code Comments in C#?

    - by Mike Post
    We deliver a number of assemblies to external customers, but not all of the public APIs are officially supported. For example, due to less-than-optimal design choices, sometimes a type must be publicly exposed from an assembly for the rest of our code to work, but we don't want customers to use that type. One part of communicating the lack of support is to not provide any IntelliSense in the form of XML comments. Is there a way to selectively suppress XML comments? I'm looking for something other than ignoring warning 1591, since that is a long-term maintenance issue.

    Example: I have an assembly with public classes A and B. A is officially supported and should have XML documentation. B is not intended for external use and should not be documented. I could turn on XML documentation and then suppress warning 1591. But when I later add the officially supported class C, I want the compiler to tell me that I've screwed up and failed to add the XML documentation. This wouldn't occur if I had suppressed 1591 at the project level. I suppose I could #pragma across entire classes, but it seems like there should be a better way to do this.
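
    For reference, the #pragma approach scoped to a single type keeps the warning live everywhere else, so an undocumented class C still gets flagged; a minimal sketch:

        /// <summary>Officially supported; documented as normal.</summary>
        public class A { }

        #pragma warning disable 1591 // missing XML comment: intentional for this unsupported type
        public class B { }
        #pragma warning restore 1591

        public class C { } // still triggers CS1591 if left undocumented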

  • Using Crypt function Python 3.3.2

    - by adampski
    In Windows, with Python 3.3.2, I call the crypt module like so:

        hash2 = crypt(word, salt)

    I import it at the top of my program like so:

        from crypt import *

    The result I get is the following:

        Traceback (most recent call last):
          File "C:\none\of\your\business\adams.py", line 10, in <module>
            from crypt import *
          File "C:\Python33\lib\crypt.py", line 3, in <module>
            import _crypt
        ImportError: No module named '_crypt'

    However, when I execute the same file adams.py on Ubuntu, with Python 2.7.3, it runs perfectly, with no errors. I tried the following to resolve the issue on Windows with Python 3.3.2 (though I'm sure the OS isn't the issue; the Python version or my use of syntax is):

    - Renamed the directory in the Python33 directory from Lib to lib
    - Renamed crypt.py in lib to _crypt.py. However, it turns out the crypt.py module itself depends on an external module called _crypt
    - Browsed the internet to download anything remotely appropriate to resemble _crypt.py

    It's not Python, right? It's me... (?) Am I using import syntax that is acceptable in 2.7.3 but not in 3.3.2, or have I found a bug in 3.3.2?
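
    This is not a 3.3.2 bug but a platform difference: crypt is a thin wrapper around the Unix crypt(3) C function, and its _crypt extension module is simply not built on Windows (on Ubuntu it works under any Python version). A sketch of a portable guard; note the fallback below is NOT compatible with crypt(3) output, so it only helps where no existing hashes must match:

        import sys
        import hashlib

        def hash_password(word, salt):
            if sys.platform != 'win32':
                import crypt              # Unix-only standard library module
                return crypt.crypt(word, salt)
            # Hypothetical Windows fallback: a plain salted SHA-256 digest,
            # deliberately simple and not a drop-in replacement for crypt(3).
            return hashlib.sha256((salt + word).encode('utf-8')).hexdigest()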

  • 407 Proxy Authentication Required

    - by Hemant Kothiyal
    I am working on a website in which I am retrieving XML data from an external URL, using the following code:

        WebRequest req = WebRequest.Create("External server url");
        req.Proxy = new System.Net.WebProxy("proxyUrl:8080", true);
        req.Proxy.Credentials = CredentialCache.DefaultCredentials;
        WebResponse resp = req.GetResponse();
        StreamReader textReader = new StreamReader(resp.GetResponseStream());
        XmlTextReader xmlReader = new XmlTextReader(textReader);
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.Load(xmlReader);

    This code is working fine on my development PC (Windows XP with .NET 3.5). But when I deploy it to IIS (on both Windows XP and Windows Server 2003) it gives me the following error: "The remote server returned an error: (407) Proxy Authentication Required." Sometimes it gives me "The remote server returned an error: (502) Bad Gateway." The following is from my web.config:

        <system.net>
          <defaultProxy>
            <proxy usesystemdefault="False" proxyaddress="http://172.16.12.12:8080" bypassonlocal="True" />
          </defaultProxy>
        </system.net>

    Please help me. [Edit] Even when I run the website from my development PC but through IIS, it gives me "The remote server returned an error: (407) Proxy Authentication Required." When I run the website from the Microsoft development server, it runs fine.
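
    The pattern here (works under the development web server, 407 under IIS) points at the process identity: under IIS the worker process runs as a service account (ASPNET or Network Service), so CredentialCache.DefaultCredentials hands the proxy credentials it will not accept. A sketch of supplying an explicit account instead; the user name, password, and domain are placeholders:

        WebRequest req = WebRequest.Create("External server url");
        WebProxy proxy = new WebProxy("http://172.16.12.12:8080", true);
        // placeholder credentials for an account the proxy accepts
        proxy.Credentials = new NetworkCredential("proxyUser", "proxyPassword", "DOMAIN");
        req.Proxy = proxy;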

  • SIGABRT error when running on iPad

    - by user324881
    Hello, all. I've been banging my head for a few hours because of this problem. I have a universal project that's a mix of iPhone and iPad projects. I put these codebases together into the universal project and, after a lot of "#if __IPHONE_OS_VERSION_MIN_REQUIRED >= 30200" checks, got the project to run in both the iPhone (OS 3.0 to 3.1.3) and iPad simulators. After a bit more finagling with the project settings of the external libraries that I load, I got the app to load on an iPhone (which runs OS 3.1.3).

    However, when I run the app on my iPad, I get an immediate SIGABRT error. I've tried running it under Debug and under Release, with an Active Architecture of both armv6 and armv7. I've checked and double-checked that the app has the right nib files set up (but, again, this app runs fine in the simulator). I've gone through the external libraries I'm using and set them up to have the same base SDK (3.2), the same architectures (Optimized (armv6 armv7)), the same targeted device family (iPhone/iPad), and the same iPhone OS deployment target (iPhone OS 3.0).

    So, to summarize: I have a universal app that works in the simulator for iPhone and iPad, and runs on an actual iPhone, but doesn't run on an iPad. It doesn't get far on the iPad; there's an immediate SIGABRT error that stops execution. Help??

  • Need content in UIWebView to display quickly

    - by leftspin
    Part of my app caches web pages for offline viewing. To do that, I save the HTML fetched from a site and rewrite img URLs to point to a file in the local store. When I load the HTML into a UIWebView, it loads the images as expected and everything's fine. I also cache stylesheets in this fashion.

    The problem is that when I put the phone into airplane mode, loading this cached HTML causes the UIWebView to display a blank screen and pause for a while before displaying the page. I've figured out that this is caused by non-cached URLs referenced from the original HTML doc that the web view is trying to fetch. These other URLs include images within the cached stylesheets, content in iframes, and JavaScript that opens a connection to fetch other resources. The pause happens while the UIWebView tries to fetch these resources, and the web page only appears after all these other fetches have timed out.

    My question is: how can I make UIWebView just display the stuff I've cached immediately? Here are my thoughts:

    - Write even more code to cache these other references. This is potentially a ton more code to catch all the edge cases, etc., especially having to parse the JavaScript to see what it loads after the page is loaded.
    - Force UIWebView to time out immediately so there's no pause. I haven't figured out how to do this.
    - Somehow get what's already loaded to display, even though the external references haven't finished fetching yet.
    - Strip the code of all scripts, link tags, and iframes to "erase" the external references. I've tried this one, but for some sites the resultant page is severely messed up.

    Can anyone help me here? I've been working on this forever, and am running out of ideas.

  • How to do proper Unicode and ANSI output redirection on cmd.exe?

    - by Sorin Sbarnea
    If you are doing automation on Windows and redirecting the output of different commands (internal cmd.exe ones or external programs), you'll discover that your log files contain mixed Unicode and ANSI output (meaning they are invalid and will not load well in viewers/editors). Is it possible to make cmd.exe work with UTF-8? This question is not about display; it's about stdin/stdout/stderr redirection and Unicode.

    I am looking for a solution that would allow you to:

    - redirect the output of internal commands to a file using UTF-8
    - redirect the output of external commands that support Unicode to files, but encoded as UTF-8

    If it is impossible to obtain this kind of consistency using batch files, is there another way of solving the problem, like using Python scripting? In that case, I would like to know whether it is possible to do the Unicode detection alone (the user of the script should not have to remember whether the called tools output Unicode or not; the script should just convert the output to UTF-8). For simplicity, we'll assume that if a tool's output is not Unicode, it will be treated as UTF-8 (no codepage conversion).
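
    Two cmd.exe behaviors are worth a try here; a sketch (both have rough edges on XP-era consoles, so treat it as a starting point rather than a guaranteed fix):

        rem Internal commands write redirected output in the console code page,
        rem so switching to UTF-8 first makes "dir > file" emit UTF-8:
        chcp 65001
        dir > log-utf8.txt

        rem Alternatively, cmd /u makes internal commands write UTF-16 LE to
        rem pipes and files, which a post-processing script can convert to UTF-8:
        cmd /u /c dir > log-utf16.txt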

  • Unable to debug XBAP with Visual Studio 2010

    - by Oleg I.
    Just migrated my project to Visual Studio 2010, but the target framework was left at 3.5. The project contains an XBAP app running in partial trust and a bunch of WCF services. Debugging is configured to start PresentationHost.exe with the -debug and -debugSecurityZoneUrl parameters. Under VS2008 everything works fine, as it did in VS2010 Beta 2 (not sure about the RC), but under VS2010 RTM debugging for some reason doesn't work. The application runs but doesn't hit any breakpoint, and if, for example, an exception occurs, a message box appears: "Do you wish to debug or close...". After I choose "debug", a strange new message box appears:

        Warning
        A debugger is attached to PresentationHost.exe but not configured to debug this unhandled exception. To debug this exception, detach the current debugger.
        An unhandled exception was raised from Microsoft .NET Framework v1.0, 1.1, or 2.0, but the current debugger is configured to debug Microsoft .NET Framework v4.0 code. Examine the exception using the SOS tool.

    And where is the vaunted multi-targeting? Has anyone else bumped into this issue?

    UPDATE: Tried to debug with the "Start browser with URL" option. Debugging works, but I get a SecurityException. So it is possible; I just need to figure out how to make it work with the "Start external program" option.

    UPDATE 2: Checked which PresentationHost each scenario actually loads:

    - "Start external program": the latest version (4.0.31106.0), from C:\Windows\System32\
    - "Start browser with URL": the old version (3.0.6920.4902), from C:\Windows\winsxs\x86_wpf-presentationhostexe_31bf3856ad364e35_6.1.7600.16385_none_6fca8974817173aa

  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, possibly due to an external factor, versus an empty shopping basket). However, the partition between system code and business-domain code with different exception types seems very obvious, and hardly making the most of custom exceptions. Related to this: if custom exceptions cover the business exceptions, I could also find all the places that are sources of exceptions at the business-domain level using "Find all references". And is it worth adding exception handling if you check a method's arguments for null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make an argument null after it has been checked anyway?

    2) What does it mean when people say that exceptions should not be used to control the flow of programs, and why not? I assume this refers to something like:

        if (exceptionVariable != null)
        {
        }

    Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled in by another coder?
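
    As a hypothetical illustration of the business-domain case mentioned above (the shopping basket), a custom type lets the name carry the intent, so a catch block or a log search reads as domain language rather than as a generic exception:

        public class EmptyBasketException : Exception
        {
            public EmptyBasketException(string customerId)
                : base("Checkout attempted with an empty basket for customer " + customerId)
            {
                CustomerId = customerId;
            }

            public string CustomerId { get; private set; }
        }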

  • Bash scripting problem

    - by komidore64
    I'm writing a bash script to sync my iTunes music directory to a directory on a removable hard drive. The script works fine when there is absolutely nothing in the folder on the external hard drive. Once all files have been copied to the external drive, the script begins to act strangely: even though I just synced everything over, it proceeds to recopy certain files. After the initial sync, it chooses the same files to re-sync on each consecutive run, without any changes having been made to the source directory.

        #!/bin/bash
        # shell script to sync music with gigabeat and/or firewire drive

        musicdir="/Users/komidore64/Music/iTunes/iTunes Media/Music"
        gigadir="/Volumes/GIGABEAT/music"
        # fwdir="/Volumes/"

        remove() {
            find "$1" \
                ! \( -name "*.wav" \
                -o -name "*.ogg" \
                -o -name "*.flac" \
                -o -name "*.aac" \
                -o -name "*.mp3" \
                -o -name "*.m4a" \
                -o -name "*.wma" \
                -o -name "*.m4p" \
                -o -name "*.ape" \
                -o -type d \) \
                -exec rm -i {} \;
        }

        if [ $# == 0 ]; then
            echo "no device argument present"
            echo "specify '-g' for gigabeat"
            echo "or '-f' for firewire drive"
        else
            remove "$musicdir"
            while [ $1 ]; do
                case $1 in
                    -g | --gigabeat )
                        rsync --archive --verbose --delete "$musicdir/" "$gigadir" ;;
                    -f | --firewire )
                        rsync --archive --verbose --delete "$musicdir/" "$fwdir"
                esac
                shift
            done
            echo "music synced"
        fi
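
    One hedged guess at the recopying: if the external drive is formatted FAT32, its coarse (2-second) timestamp resolution makes rsync see some files as modified on every run, so the same files churn forever. Telling rsync to tolerate small mtime differences usually stops this:

        # --modify-window=1 treats mtimes within 1 second as equal, absorbing
        # FAT's 2-second timestamp granularity on the destination drive
        rsync --archive --verbose --delete --modify-window=1 "$musicdir/" "$gigadir"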

  • How Do I Use jQuery/JavaScript To Open A Popup Window/Tab (ASPX Login Page) & Then Pass Values To Op

    - by Terry Robinson
    Hi All, we currently have two ASP.NET 2.x web applications, and from one application we want to log in to the other automatically, in a new tab of the same browser instance/window. So the process is:

    1. Open a new window/tab with the second system's URL/login page.
    2. Wait for the popup window/tab page to load (DOM ready).
    3. On the popup's DOM ready: get the username, password, and PIN controls (jQuery selectors), populate them in code, and click the login button (all programmatically).

    I am currently using JavaScript to open the window as follows:

        <script type="text/javascript">
            $(document).ready(function () {
                $('a[rel="external"]').click(function () {
                    window.open($(this).attr('href'));
                    return false;
                });
            });
        </script>

    I would like to use jQuery chaining, if possible, to extend the method above so that I can attach a DOM-ready event to the popped-up page and then use that event to fill in the login form of the popped-up page automatically. Something similar to the following (note: this code sample does not work; it is here to help illustrate what we are trying to achieve):

        <script type="text/javascript">
            $(document).ready(function () {
                $('a[rel="external"]').click(function () {
                    window.open($(this).attr('href')).ready(function () {
                        // Use JavaScript (pref. jQuery partial control-name selectors) to
                        // populate the username/password textboxes and click the login button.
                    });
                });
            });
        </script>

    Our architecture is as follows: we have the source for both products (ASP.NET websites), and they run under different app pools in IIS. I hope this all makes sense, and if my plan is not going to work, please provide hints ;) Thanks All / Kind Regards, Terry Robinson.
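
    For the third step, a sketch of filling in the popup once it has loaded. Two loud assumptions: both sites must be served from the same origin for the opener to touch the popup's DOM at all (the browser blocks this otherwise), and the control IDs below are hypothetical placeholders; ASP.NET renders ClientIDs with container prefixes, so match them to the real rendered markup:

        $(document).ready(function () {
            $('a[rel="external"]').click(function () {
                var popup = window.open($(this).attr('href'));
                popup.onload = function () {
                    var doc = popup.document;
                    doc.getElementById('txtUsername').value = 'user';   // hypothetical IDs,
                    doc.getElementById('txtPassword').value = 'secret'; // adjust to the real
                    doc.getElementById('txtPIN').value = '1234';        // rendered login page
                    doc.getElementById('btnLogin').click();
                };
                return false;
            });
        });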

  • Storing data on SD Card in Android

    - by BBoom
    Using the guide at Android Developers (http://developer.android.com/guide/topics/data/data-storage.html), I've tried to store some data to the SD card. This is my code:

        // Path to write files to
        String path = Environment.getExternalStorageDirectory().getAbsolutePath()
                + "/Android/data/" + ctxt.getString(R.string.package_name) + "/files/";
        String fname = "mytest.txt";

        // Current state of the external media
        String extState = Environment.getExternalStorageState();

        // External media can be written onto
        if (extState.equals(Environment.MEDIA_MOUNTED)) {
            try {
                // Make sure the path exists
                boolean exists = (new File(path)).exists();
                if (!exists) {
                    new File(path).mkdirs();
                }
                // Open output stream
                FileOutputStream fOut = new FileOutputStream(path + fname);
                fOut.write("Test".getBytes());
                // Close output stream
                fOut.flush();
                fOut.close();
            } catch (IOException ioe) {
                ioe.printStackTrace();
            }
        }

    When I create the new FileOutputStream I get a FileNotFoundException. I have also noticed that mkdirs() does not seem to create the directory. Can anyone tell me what I'm doing wrong? I'm testing on an AVD with a 2GB SD card and "hw.sdCard: yes"; the File Explorer in Eclipse's DDMS tells me that the only directory on the SD card is "LOST.DIR".
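
    One common cause of both symptoms (mkdirs() quietly failing and the FileNotFoundException on the FileOutputStream) is a missing manifest permission; without it, writes to external storage are denied. Worth checking that AndroidManifest.xml declares:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />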

  • xcopy failing within TFSbuild

    - by mattgcon
    I am using TFS 2008, and within my TFSBuild.proj file I have a target that calls xcopy to copy the build to the production website location for automation. However, I receive the following error when running the build:

        Task "Exec"
        Command: xcopy "\\test\TFSBuilds\Online System V2 Build to NETPUB_20100430.2\Debug\_PublishedWebsites\IPAMIntranet" " C:\\Inetpub\wwwroot\IPAMOnlineSystem\IPAMIntranet\IPAMIntranet " /E
        Parse Error
        'C:\\Inetpub\wwwroot\IPAMOnlineSystem\IPAMIntranet\IPAMIntranet' is not recognized as an internal or external command, operable program or batch file.
        '" /E ' is not recognized as an internal or external command, operable program or batch file.

    The following is the code line for the xcopy:

        <Target Name="AfterDropBuild">
          <Exec Command="xcopy &quot;$(DropLocation)\$(BuildNumber)\Debug\_PublishedWebsites\IPAMIntranet&quot; &quot;$(RemoteDeploySitePath)&quot; /E " />
        </Target>

    I have even tried single quotes around the file locations and actual double quotes instead of the &quot; entities. Why is this happening? Can anyone decipher this for me and help me correct it?
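
    A clue sits in the logged command: the destination is rendered as " C:\\Inetpub\... " with leading and trailing spaces inside the quotes, and cmd then parses the remainder as separate commands. That points at whitespace (or a line break) inside the RemoteDeploySitePath property value. A sketch of the cleaned-up target; /I and /Y are additions that suppress the file-or-directory and overwrite prompts during an unattended build:

        <!-- RemoteDeploySitePath must be defined with no leading/trailing
             whitespace or line breaks, e.g.
             <RemoteDeploySitePath>C:\Inetpub\wwwroot\IPAMOnlineSystem\IPAMIntranet\IPAMIntranet</RemoteDeploySitePath> -->
        <Target Name="AfterDropBuild">
          <Exec Command="xcopy &quot;$(DropLocation)\$(BuildNumber)\Debug\_PublishedWebsites\IPAMIntranet&quot; &quot;$(RemoteDeploySitePath)&quot; /E /I /Y" />
        </Target>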

  • Maven jetty download dependencies

    - by portoalet
    Hi, why does Maven try to download some dependencies (the Apache POI and ojdbc jars) every time I do "mvn jetty:run"? How can I disable this?

        [INFO] Scanning for projects..
        [INFO] Searching repository for plugin with prefix: 'jetty'.
        [INFO] ------------------------------------------------------------------------
        [INFO] Building infolitReport
        [INFO]    task-segment: [jetty:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing jetty:run
        Downloading: http://repository.springsource.com/maven/bundles/release/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
        Downloading: http://repository.springsource.com/maven/bundles/external/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
        Downloading: http://repository.springsource.com/maven/bundles/milestone/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
        Downloading: http://repository.springsource.com/maven/bundles/snapshot/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
        Downloading: http://repo1.maven.org/maven2/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
        Downloading: http://repository.springsource.com/maven/bundles/release/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
        Downloading: http://repository.springsource.com/maven/bundles/external/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
        Downloading: http://repository.springsource.com/maven/bundles/milestone/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
        Downloading: http://repository.springsource.com/maven/bundles/snapshot/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
        Downloading: http://repo1.maven.org/maven2/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
        [INFO] [aspectj:compile {execution: default}]
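
    Those downloads are Maven checking the remote repositories for POMs it could not resolve before; because the lookups fail, nothing gets cached and the check repeats on every run. Once the artifacts are present in the local repository, offline mode avoids remote checks entirely; a sketch:

        # resolve and cache everything once while online
        mvn dependency:go-offline

        # then run jetty without consulting remote repositories
        mvn -o jetty:run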

  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), and even dedicated DirectShow filters (like UScreenCapture), but I really need to do this from my own external application.

    When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%) and does not impair the performance of the 3D app. The problem with capturing from an external application is that the 3D application loses its vsync, producing a choppy, difficult-to-use 3D application (the 3D app occupies only a small part of the screen; the rest is GDI and DirectX).

    FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the vsync and does its capture (with glReadPixels, ...) without perturbing it. This does not solve my problem, since I want the full composed screen image (including the 3D and the rest) AND a smooth 3D app. UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app still loses its sync. Doing a BitBlt is too slow and CPU-consuming for real-time 30 fps acquisition (at least under Windows XP; not sure about 7).

    My question is whether there is a way to achieve my goal with Windows 7 and its brand new DirectX compositing engine. Windows 7 manages to show live, vsynced, duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app. Any other suggestions or technology? Thank you.

  • Facebook XFBML IFrame App

    - by user329379
    Hi Guys, I started creating an application utilizing FBML but quickly realized its limits as far as external sources like JavaScript are concerned. I had just finished the app, and now I am looking to convert it to XFBML so I can use external JavaScript.

    I am having an issue with the login function. It allows a user to log in when I go to my site, but if I go to the Facebook app page, my login button still appears in the frame (even though I am already logged in to Facebook). Upon clicking login, the app functions as expected. My question is: why do I have to click login even though I am already logged in? On the actual website, the app behaves as if I am already logged in. Below is some of my code:

        $FB_APP_URL = 'http://apps.facebook.com/XXXXXX';

        $facebook = new Facebook($api_key, $secret);
        $facebook->api_client->setFormat('json');

        $user = $facebook->get_loggedin_user();
        if (!$user) {
            $facebook->redirect($facebook->get_login_url($FB_APP_URL, 1));
        }
        if (!$facebook->in_frame()) {
            $facebook->redirect($FB_APP_URL, 1);
        }

  • emacs tramp performance

    - by Oleg Pavliv
    Is there a way to improve Emacs TRAMP performance? For me it's faster to open an external FTP client (FileZilla), transfer files to the local disk, and open them in an external editor (Notepad) than to open them with Emacs. I use Emacs 23.1 under Windows XP. I have tried different values of tramp-default-method (telnet, pscp, ftp); all of them give the same performance.

    Profiling results with elp-instrument-package are the following (I opened 3 remote files of 1.5 MB each):

        tramp-file-name-handler                1461   350.41599999    0.2398466803
        tramp-sh-file-name-handler             1461   350.02699999    0.2395804243
        tramp-send-command                      227   179.63400000    0.7913392070
        tramp-send-command-and-check            205   177.77600000    0.8672000000
        tramp-wait-for-regexp                   227   176.47800000    0.7774361233
        tramp-wait-for-output                   226   176.40000000    0.7805309734
        tramp-barf-unless-okay                   18   133.46699999    7.4148333333
        tramp-handle-insert-file-contents         3   132.046        44.015333333
        tramp-handle-file-local-copy              3   131.281        43.760333333
        tramp-accept-process-output            2375   112.95100000    0.0475583157

    So the actual file transfer takes 132 seconds, about a third of the total time. Why does it spend so much time in tramp-sh-file-name-handler? I tried advising that function to store and return cached results, but it does not work; probably the function has side effects. Any ideas how to improve TRAMP performance?

  • Error Building Gem

    - by Joel M.
    I tried to install the following gem: http://github.com/maxjustus/sinatra-authentication on Windows 7, running Ruby 1.9 from the One-Click Installer. I got the following error:

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

        C:\Users\Joel>gem install sinatra-authentication
        Building native extensions.  This could take a while...
        ERROR:  Error installing sinatra-authentication:
                ERROR: Failed to build gem native extension.

        C:/Ruby19/bin/ruby.exe extconf.rb
        creating Makefile

        make
        'make' is not recognized as an internal or external command,
        operable program or batch file.

        Gem files will remain installed in C:/Ruby19/lib/ruby/gems/1.9.1/gems/yajl-ruby-0.7.5 for inspection.
        Results logged to C:/Ruby19/lib/ruby/gems/1.9.1/gems/yajl-ruby-0.7.5/ext/gem_make.out

    I looked everywhere online, tried to install earlier versions, and attempted a manual install, without success (the manual install gave me a "stack too deep" error). I suspect there is a problem with the yajl-ruby gem (http://github.com/brianmario/yajl-ruby), a dependency. The log in gem_make.out shows the same failure:

        C:/Ruby19/bin/ruby.exe extconf.rb
        creating Makefile

        make
        'make' is not recognized as an internal or external command,
        operable program or batch file.

    Do you have any idea how to solve this? Thanks!
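
    The underlying failure is that the yajl-ruby dependency contains a C extension, and building it invokes make, which the One-Click Ruby installer does not ship. The usual fix on Windows is to install the RubyInstaller DevKit, which bundles make and a MinGW gcc. A sketch, assuming the DevKit release matching Ruby 1.9 has been downloaded and extracted to C:\DevKit (path and release are placeholders):

        cd C:\DevKit
        ruby dk.rb init
        ruby dk.rb install
        gem install sinatra-authentication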

  • C# Static constructors design problem - need to specify parameter

    - by Neil Dobson
    I have a recurring design problem with certain classes that require one-off initialization with a parameter, such as the name of an external resource like a config file. For example, I have a corelib project which provides application-wide logging, configuration, and general helper methods. This object could use a static constructor to initialize itself, but it needs access to a config file which it can't find by itself. I can see a couple of solutions, but neither seems quite right:

    1) Use a constructor with a parameter. But then each object which requires corelib functionality would also have to know the name of the config file, so this has to be passed around the application. Also, if I implemented corelib as a singleton, I would have to pass the config file as a parameter to the GetInstance method, which I believe is also not right.

    2) Create a static property or method to pass in the config file or other external parameter. I have more or less used the latter and created a Load method which initializes an inner class, passing it the config file in the constructor. This inner class is then exposed through a public property MyCoreLib:

        public static class CoreLib
        {
            private static MyCoreLib myCoreLib;

            public static void Load(string configFile)
            {
                myCoreLib = new MyCoreLib(configFile);
            }

            public static MyCoreLib MyCoreLib
            {
                get { return myCoreLib; }
            }

            public class MyCoreLib
            {
                private string configFile;

                public MyCoreLib(string configFile)
                {
                    this.configFile = configFile;
                }

                public void DoSomething() { }
            }
        }

    I'm still not happy, though. The inner class is not initialized until you call the Load method, so that has to be considered anywhere MyCoreLib is accessed. Also, there is nothing to stop someone calling Load again. Any other patterns or ideas for how to accomplish this?
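
    As a sketch of one incremental tightening: keep the pattern but enforce the initialization contract, failing fast both on a double Load and on use before Load, so misuse surfaces immediately instead of as a null reference later:

        public static void Load(string configFile)
        {
            if (myCoreLib != null)
                throw new InvalidOperationException("CoreLib.Load may only be called once.");
            myCoreLib = new MyCoreLib(configFile);
        }

        public static MyCoreLib MyCoreLib
        {
            get
            {
                if (myCoreLib == null)
                    throw new InvalidOperationException("Call CoreLib.Load before using MyCoreLib.");
                return myCoreLib;
            }
        }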

  • Retrieve parent node from selection (range) in Gecko and Webkit

    - by Jason
    I am trying to add an attribute when using a WYSIWYG editor that uses the "createLink" command. I thought it would be trivial to get back the node that is created after the browser executes that command. It turns out I am only able to grab this newly created node in IE. Any ideas? The following code demonstrates the issue (the debug logs at the bottom show the different output in each browser):

        var getSelectedHTML = function() {
            if ($.browser.msie) {
                return this.getRange().htmlText;
            } else {
                var elem = this.getRange().cloneContents();
                return $("<p/>").append($(elem)).html();
            }
        };

        var getSelection = function() {
            if ($.browser.msie) {
                return this.editor.selection;
            } else {
                return this.iframe[0].contentDocument.defaultView.getSelection();
            }
        };

        var getRange = function() {
            var s = this.getSelection();
            return (s.getRangeAt) ? s.getRangeAt(0) : s.createRange();
        };

        var getSelectedNode = function() {
            var range = this.getRange();
            var parent = range.commonAncestorContainer ? range.commonAncestorContainer
                       : range.parentElement ? range.parentElement()
                       : range.item(0);
            return parent;
        };

        // **** INSIDE SOME EVENT HANDLER ****
        if ($.browser.msie) {
            this.ec("createLink", true);
        } else {
            this.ec("createLink", false, prompt("Link URL:", "http://"));
        }

        var linkNode = $(this.getSelectedNode());
        linkNode.attr("rel", "external");

        $.log(linkNode.get(0).tagName);
        // Gecko: "body"
        // IE: "a"
        // Webkit: "undefined"

        $.log(this.getSelectedHTML());
        // Gecko: "<a href="http://site.com">foo</a>"
        // IE: "<A href="http://site.com" rel=external>foo</A>"
        // Webkit: "foo"

        $.log(this.getSelection());
        // Gecko: "foo"
        // IE: [object Selection]
        // Webkit: "foo"

    Thanks for any help on this; I've scoured related questions on SO without success!
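
    A sketch of one approach that may help in Gecko/WebKit: instead of asking the range for its container (which after createLink is often an ancestor such as the body), walk up from the selection's anchorNode to the nearest enclosing <a>:

        var getCreatedLink = function () {
            // anchorNode is typically the text node inside the freshly created link
            var node = this.getSelection().anchorNode;
            while (node && !(node.nodeType === 1 && node.nodeName.toLowerCase() === 'a')) {
                node = node.parentNode;
            }
            return node; // null if the selection is not inside a link
        };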

  • Spell checker software

    - by Naren
    Hello Guys, I have been assigned the task of finding a decent spell checker (UK English), preferably a free one, for a project we are doing. I have looked at the Google AJAX API for this. The project contains data about young persons (kids less than 18 years old) which must not be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired off an email to Google regarding the privacy of data and storage, but they haven't come back to me. If you have some knowledge regarding this, please share it with me.

    At this point, our servers might not have access to external entities, which means we might not be able to use a web API over the wire. But that may change in the future. So I have to find spell-checker alternatives that can sit in our environment and do the job, as well as external APIs. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but you never know: if you have some cracking spell checker for a few quid, then I don't mind recommending it to the project board.

    Technology used: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008, etc.

    Cheers, Naren
